41

Introduction

Many "C-like" programming languages use compound statements (code blocks specified with "{" and "}") to define a variables scope.

Here is a simple example.

for (int i = 0; i < 100; ++i) {
    int value = function(i); // Here 'value' is local to this block
    printf("function(%d) == %d\n", i, value);
}

This is good because it limits the scope of `value` to where it is used. It is hard for programmers to use `value` in ways it is not meant to be used, because it can only be accessed from within its scope.
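
For instance, trying to read value after the closing brace is rejected by the compiler. Here is a minimal sketch (the squaring function is a made-up stand-in for function()):

#include <cstdio>

int function(int i) { return i * i; }   // hypothetical stand-in for the example's function

int main() {
    for (int i = 0; i < 100; ++i) {
        int value = function(i);
        std::printf("function(%d) == %d\n", i, value);
    }
    // std::printf("%d\n", value);   // error: 'value' was not declared in this scope
}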

I assume almost all of you are aware of this and agree that it is good practice to declare variables in the block where they are used, to limit their scope.

But even though it is an established convention to declare variables in their smallest possible scope, it is not very common to use a naked compound statement (that is, a compound statement that is not attached to an if, for, or while statement).

Swapping the values of two variables

Programmers often write code like this:

int x = ???
int y = ???

// Swap `x` and `y`
int tmp = x;
x = y;
y = tmp;

Would it not be better to write the code like this:

int x = ???
int y = ???

// Swap `x` and `y`
{
    int tmp = x;
    x = y;
    y = tmp;
}

It looks quite ugly, but I think it is a good way to enforce variable locality and make the code safer to use.

This does not apply only to temporaries

I often see similar patterns where a variable is used in only one small part of a function:

Object function(ParameterType arg) {
    Object obj = new Object(arg);
    File file = File.open("output.txt", "w+");
    file.write(obj.toString());

    // `obj` is used more here but `file` is never used again.
    ...
}

Why don't we write it like this?

Object function(ParameterType arg) {
    Object obj = new Object(arg);
    {
       File file = File.open("output.txt", "w+");
       file.write(obj.toString());
    }

    // `obj` is used more here but `file` is never used again.
    ...
}

Summary of question

It is hard to come up with good examples. I am sure there are better ways to write the code in my examples, but that is not what this question is about.

My question is: why do we not use "naked" compound statements more often to limit the scope of variables?

What do you think about using a compound statement like this

{
    int tmp = x;
    x = y;
    y = tmp;
}

to limit the scope of tmp?

Is it good practice? Is it bad practice? Explain your thoughts.

wefwefa3
  • 8
    Why would you do this over creating a named function? IE in your swap example: `x` and `y` would be declared outside, then passed into a function called `swap`, and in `swap` would be a variable called `temp`. – David says Reinstate Monica Nov 10 '15 at 14:44
  • 1
    @DavidGrinberg I thought right away of `std::swap()`. – BЈовић Nov 10 '15 at 16:08
  • 5
    I think it is unnecessarily verbose in short functions (that have small scopes anyway), and if you want to use it to structure large functions, well, it might be acceptable but the actual problem is the large function itself. – Bergi Nov 10 '15 at 16:09
  • 2
    Actually, I do use this technique, on a fairly routine basis, for precisely the reasons you state. – John R. Strohm Nov 10 '15 at 16:09
  • 1
    In .NET you also have using blocks. In the file example you should use `using` or some other mechanism to close / dispose the file. In .NET, any object that implements Dispose I typically put in a using block. – paparazzo Nov 10 '15 at 16:12
  • @Frisbee a using block has a very specific purpose and has constraints on its use... It's not just a drop-in statement block. – Jimmy Hoffa Nov 10 '15 at 19:27
  • @JimmyHoffa Yes - I did state also. One of those constraints in .NET is implement dispose as indicated in my comment. – paparazzo Nov 10 '15 at 19:31
  • I do it. . . . . . . . – John Nov 11 '15 at 02:23
  • @DavidGrinberg: Awkward in languages that don't have pass-by-reference. – Nate Eldredge Nov 11 '15 at 05:38
  • 1
    Using blocks like you suggested is a good way to start refactoring an overly large method into smaller methods with single responsibilities. You'll often find that a block that uses variables locally only also has a single responsibility and can be turned into a method. – Erwin Bolwidt Nov 11 '15 at 06:05

9 Answers

73

It is indeed a good practice to keep your variable's scope small. However, introducing anonymous blocks into large methods only solves half the problem: the scope of the variables shrinks, but the method (slightly) grows!

The solution is obvious: what you wanted to do in an anonymous block, you should be doing in a method. The method gets its own block and its own scope automatically, and with a meaningful name you get better documentation out of it, too.
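
For illustration, here is a minimal C++ sketch of that extraction (the helper name is made up; real C++ code would simply call std::swap): the temporary now lives only inside the helper, and the dependency on the caller's variables is explicit in the signature.

#include <cstdio>

// Hypothetical helper: what the question's anonymous swap block might become.
static void swap_ints(int& a, int& b) {
    int tmp = a;   // 'tmp' is confined to this function's own scope
    a = b;
    b = tmp;
}

int main() {
    int x = 1, y = 2;
    swap_ints(x, y);                 // in real C++, std::swap(x, y) already exists
    std::printf("%d %d\n", x, y);    // prints "2 1"
}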

Kilian Foth
  • + you get scope *isolation*, with a function the signature explicates the dependencies from the parent scope and handles them by value rather than by reference which adds greater isolation and protection between the scopes. – Jimmy Hoffa Nov 10 '15 at 14:36
  • 7
    This might provoke method bloat however. I prefer large methods over large classes. – Bergi Nov 10 '15 at 16:11
  • 12
    @Bergi, you shouldn't have either. Lots of small classes, each with a single responsibility achieved via a small number of small methods is the winning combination. – David Arno Nov 10 '15 at 16:19
  • 7
    @DavidArno: Of course that's the goal, but it might not be (easily) possible. In such cases I prefer long methods over bloated classes. – Bergi Nov 10 '15 at 16:22
  • 7
    You shouldn't move your code into methods but functions instead if possible. Don't use C++ as if it were Java. – Joe Nov 10 '15 at 16:24
  • 2
    @Joe isn't the question language-independent? OP only mentions "C-like", every language that can have anonymous code blocks defining variable scope falls under this question. – Mindwin Remember Monica Nov 10 '15 at 17:31
  • @Mindwin ah true. I assumed it was about c++ as it has `new` but no `public static`. Some C-like languages (for example c itself) don't even have methods though. – Joe Nov 10 '15 at 17:38
  • @Joe the discussion in this comment thread was drifting too much into OOP. – Mindwin Remember Monica Nov 10 '15 at 17:41
  • @Mindwin the answer only mentions methods and not functions, making it OOP-specific – Joe Nov 10 '15 at 17:43
  • @Joe have to disagree. It is not tagged OOP, and it begins with "C-like". The fact is that a question tagged [carnivores] that mentions dogs does not immediately exclude felines or make it a dog question. Also the word `method` does not appear in the question body, just `function`. So it does not mention methods. – Mindwin Remember Monica Nov 10 '15 at 17:59
  • 1
    @DavidArno - it depends on your goal. If ease of understanding is a goal then lots of small classes is absolutely not the way to go. Having had to lead teams in the past when transitioning them from non-OO to OO development I can make that statement with 100% certainty of its truth. It just so happens that I have a list of desirable attributes in software. #1 - It works. #2 - It is easy to understand, are the top 2 choices. So while SRP may be an often preached mantra, that mantra is purely a naive view that at best is just a guideline. Certainly not worthy of using the word "Principle". – Dunk Nov 10 '15 at 19:29
  • @Dunk, I used to think that way too. The problem is, if you treat "it works" as the top priority, you head off down a blind alley. I now realise that the top two attributes of code are 1, it's easy to understand and 2, it's easy to test (and by implication, is well tested via automated unit and integration tests). When well tested, easy to understand code breaks, it's easy to fix. When code that has "make it work" as its priority breaks, it is often very hard to fix. The SRP is a key component of easy to test and understand code. – David Arno Nov 10 '15 at 20:06
  • 1
    @DavidArno - Having code that is easy to understand and is well tested doesn't do me much good if it doesn't work the way I need it to work. Thus, "It works" can never be removed from the #1 spot. But I totally understand how "it works" can be abused because that is the main reason used by people who just hack stuff together. Unfortunately, despite a desire to not have "it works" as #1, there is no other option. An ugly ball of mud that does what it is supposed to do is better than a beautifully architected, well-designed and easy-to-test masterpiece that can't do what it needs to do. – Dunk Nov 10 '15 at 20:34
  • 4
    @Bergi: Small methods do not bloat the class's API if they are private. They only bloat the file size... which large methods will also do. – BlueRaja - Danny Pflughoeft Nov 10 '15 at 21:11
  • 11
    Honestly, but this term "method bloat" is IMHO a **myth** invented by people who are looking for excuses for not refactoring too big methods into smaller ones. – Doc Brown Nov 10 '15 at 22:18
  • @DocBrown That statement could be read as the suggestion "create as many methods as you possibly can", while the point is actually to create the *right* number of methods. SRP - The question is *what exactly* are the responsibilities for the given case? SRP seems to imply having *not more than one* responsibility per element, but in fact each element should have *exactly one* (meaningful) responsibility, which poses some kind of upper limit on the number of elements to use. – JimmyB Nov 11 '15 at 12:53
22

Often if you find places to create such a scope it's an opportunity to extract out a function.

In a language with pass-by-reference you would instead call swap(x,y).

For writing the file, it would be advisable to use the block to ensure that RAII closes the file and frees up the resources as soon as possible.
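
A minimal C++ sketch of that idea (the file name and contents are made up): the naked block ends the lifetime of the stream object, so RAII flushes and closes the handle right there instead of at the end of the function.

#include <fstream>
#include <string>

void writeReport(const std::string& text) {
    {
        std::ofstream file("output.txt");   // hypothetical output file
        file << text;
    }   // 'file' is destroyed here: the handle is closed and the resource released

    // ... the rest of the function runs without holding the file open ...
}

int main() {
    writeReport("hello\n");
}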

ratchet freak
13

I assume you do not yet know of Expression-Oriented languages?

In Expression-Oriented languages, (nearly) everything is an expression. This means, for example, that a block can be an expression as in Rust:

// A typical function displaying x^3 + x^2 + x
fn typical_print_x3_x2_x(x: i32) {
    let y = x * x * x + x * x + x;
    println!("{}", y);
}

You may be worried that the compiler will redundantly compute x * x, though, and decide to memoize the result:

// A memoizing function displaying x^3 + x^2 + x
fn memoizing_print_x3_x2_x(x: i32) {
    let x2 = x * x;
    let y = x * x2 + x2 + x;
    println!("{}", y);
}

However, now x2 clearly outlives its usefulness. Well, in Rust a block is an expression which returns the value of its last expression (not of its last statement, so avoid a closing ;):

// An expression-oriented function displaying x^3 + x^2 + x
fn expr_print_x3_x2_x(x: i32) {
    let y = {
        let x2 = x * x;
        x * x2 + x2 + x // <- missing semi-colon, this is an expression
    };
    println!("{}", y);
}

Thus I would say that newer languages have recognized the importance of limiting the scope of variables (makes things cleaner), and are increasingly offering facilities to do so.

Even notable C++ experts such as Herb Sutter are recommending anonymous blocks of this kind, "hacking" lambdas to initialize constant variables (because immutable is great):

int32_t const y = [&]{
    int32_t const x2 = x * x;
    return x * x2 + x2 + x;
}(); // do not forget to actually invoke the lambda with ()
Deduplicator
Matthieu M.
  • 2
    I came here to mention `let` expressions, and you beat me to it ;) In my opinion, that's exactly what "naked compound statement" is :) Your C++ example is also *very* widespread in the Javascript world. – Warbo Nov 10 '15 at 16:43
  • @Warbo: I imagine it exists in many languages (as long as they have anonymous functions), I picked C++ both because I am familiar with and because the recommendation came from the chair of the C++ Standard for... some 10 years I think? So that it's hard to be more authoritative. – Matthieu M. Nov 10 '15 at 17:25
8

Congratulations, you've got scope isolation of some trivial variables in a large and complex function.

Unfortunately, you've got a large and complex function. The proper thing to do is, instead of creating a scope within the function for the variable, to extract it into its own function. This encourages reuse of the code and allows the enclosing scope to be passed in as parameters to the function rather than acting as pseudo-global variables for that anonymous scope.

This means that everything outside of the scope of the unnamed block is still in scope inside the unnamed block. You are effectively programming with globals, and your unnamed blocks run serially without any way to reuse the code.


Furthermore, consider that in many runtimes, all variables declared within the anonymous scopes are allocated at the top of the method.

Let's look at some C# (https://dotnetfiddle.net/QKtaG4):

using System;

public class Program
{
    public static void Main()
    {
        string h = "hello";
        string w = "world";
        {
            int i = 42;
            Console.WriteLine(i);
        }
        {
            int j = 4;
            Console.WriteLine(j);
        }
        Console.WriteLine(h + w);
    }
}

And when you start digging into the IL for the code (with dotnetfiddle, under 'tidy up' there is also 'View IL'), right there at the top it shows what has been allocated for this method:

.class public auto ansi beforefieldinit Program
    extends [mscorlib]System.Object
{
    .method public hidebysig static void Main() cil managed
    {
      //
      .maxstack 2
      .locals init (string V_0,
          string V_1,
          int32 V_2,
          int32 V_3)

This allocates the space for two strings and two integers at the initialization of the method Main, even though only one int32 is in scope at any given time. You can see a more in depth analysis of this on a tangential topic of initializing variables at Foreach loop and variable initialization.

Let's look at some Java code instead. This should look rather familiar.

public class Main {
    public static void main(String[] args) {
        String h = "hello";
        String w = "world";
        {
            int i = 42;
            System.out.println(i);
        }
        {
            int j = 4;
            System.out.println(j);
        }
        System.out.println(h + w);
    }
}

Compile this with javac, and then invoke javap -v Main.class and you get the Java class file disassembler.

Right there at the top, it tells you how many slots it needs for local variables (the full javap output explains its parts in a bit more detail):

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=3, locals=4, args_size=1

It needs four slots. Even though two of these local variables are never in scope at the same time, it still allocates room for four local variables.

On the other hand, if you extract short methods (such as a 'swap'), you give the compiler the option to inline them and make more optimal use of variables.

  • That analysis only considers different types in different blocks but doesn't test your variant. – ratchet freak Nov 10 '15 at 14:59
  • 4
    "This allocates the space for three integers at the initialization of the function foo" Couldn't a compiler that accounts for variable liveness, observing that `qux` and `baz` are never in scope at the same time, just reuse the space? Variable names are just tools to assist the programmer, the compiler doesn't need to preserve them (and in most compiled languages, it doesn't unless you embed symbols for debugging). – JAB Nov 10 '15 at 15:02
  • @ratchetfreak that was explored a bit in chat around that time and a bit awkward to extract. My example is *really* pushing the envelope of it (though I'll dig into the Java byte code and .net IL when I can more easily work on that without the distractions of this 'work' thing). –  Nov 10 '15 at 15:02
  • 5
    @JAB that depends on the *compiler*. In gcc with similar code, both `qux` and `baz` are assigned just a register, and it's the *same* register in each case. With .net IL and Java bytecode, it is likely to allocate the space and then allow the JIT to do the [escape analysis](https://en.wikipedia.org/wiki/Escape_analysis) on it, which is much harder to dig into. –  Nov 10 '15 at 15:05
  • Well, if there's inlining going on, it's quite likely all variables for the function and all inlined functions are allocated in one go for efficiency. – Deduplicator Nov 10 '15 at 20:44
  • 2
    @Deduplicator In Java, inlining is done by the JIT, not javac. So... it gets tricky again. I am curious though to dig into this and find out what is *really* going on with the breaking down the bytecode for a class that has this behavior and poking at the JIT internals to turn on debugging to see if I can tease out the "this is what is happening." Though again, this "work" thing during the day can be quite distracting. –  Nov 10 '15 at 20:47
  • 1
    While I don't think that anything said in the answer or the comments is *wrong*, I wouldn't let these concerns guide my programming style (at least not at the expense of readability) unless I had evidence that there is a measurable effect in a particular case I care about. Register allocation is a very well studied field and your compiler is likely to do a much better job than you. – 5gon12eder Nov 10 '15 at 21:26
  • @5gon12eder If you are allocating a dozen `{}` blocks within the scope of one method with a few variables in each, this could get expensive on the stack. Granted, there are *other* problems if you have a method with that much complexity in it. This just adds to the problems. –  Nov 10 '15 at 21:31
  • @ratchetfreak I've adjusted the code and shown both the .net IL and the decompiled method header for the JVM. They are both allocating space for all the inner scopes at the top of the method declaration, even if you aren't using them at the same time. –  Nov 11 '15 at 03:41
  • @5gon12eder I dug into it a bit more. While C and C++ register allocation is well done with AOT compilation, languages that go to a byte code appear to allocate space for *all* of the variables used within the scope of the method, even if the variables themselves have mutually exclusive sub-scopes. –  Nov 11 '15 at 03:42
  • Don't stop at the IL level. When looking at the actual x86 assembly produced from that code, the locals don't exist anymore - just the simple "pass-by-ecx" code. IL is just a high-level intermediate language, it rarely looks anything like the resulting x86 code. In fact, locals are only rooted until they are no longer used - if the two locals existed, one would be reused regardless of scoping. Even replacing the constants with a `Console.ReadLine` avoids the stack entirely. IL is just a description of the program, not a bullet-list. IL uses the stack for everything, the x86 result doesn't. – Luaan Nov 11 '15 at 15:32
  • @Luaan as I run on a mac, the only visibility I have into IL is mono, which is a virtual machine much like the JVM. I don't get AOT compilation and optimizations - I can only see to the level of what gets passed to the JIT if it is done at all (and given the development environment for the mac for .net, that isn't a trivial thing). What I *do* see is that the IL layer allocates space for each distinct variable used within the entire scope of the method, and that is what is executed on my machine. –  Nov 11 '15 at 18:44
  • You can always attach a debugger after the application starts, giving you the real instructions executing - I'm not actually using AOT, this is all based on JITter output. It's a bit harder to read than IL, of course. However, I don't doubt that Mono is still far behind the "real" CLR implementation - I know that it has many issues with memory management, it wouldn't be a surprise if its assembly generation was also more limited. It does mean you have to write silly code once in a while if you're targeting Mono :) – Luaan Nov 12 '15 at 07:26
3

Good practice. Period. Full stop. You make it explicit to future readers that the variable is not used anywhere else, enforced by the rules of the language.

This is a courtesy to the future maintainers, because you tell them something you know. These facts could otherwise take dozens of minutes to determine in a sufficiently lengthy or convoluted function.

The seconds you'll spend documenting this powerful fact by adding a pair of braces will save minutes for each maintainer who would otherwise have to determine it.

The future maintainer can be yourself, years later, (because you're the only one who knows this code) with other things on your mind (because the project has long been over and you've been reassigned ever since), and under pressure. (because we're doing this graciously for the customer, you know, they've been long time partners, you know, and we had to smooth out our commercial relationship with them because the last project we delivered was not that up to expectations, you know, and oh, it is just a small change and it should not take that long for an expert like you, and we've got no budget so don't do "overquality")

You'd hate yourself for not doing it.

  • And of course, extract functions from these scopes afterwards. That's hard. You have to find names for them. – Laurent LA RIZZA Nov 10 '15 at 16:16
  • 3
    A function may also defeat the purpose of the anonymous scope: to indicate that the set of variables are local to one part of the function. Extracting it to a function implies that the function is used in other functions as well - and that has the opposite meaning of what you may have intended. I personally use anonymous scopes wherever I can for I/O operations because I'd like to make it implicit that the I/O operation is being done between opening a filehandle and closing it – slebetman Nov 10 '15 at 16:44
  • @Laurent LA RIZZA You're right that naming is one of the Hard Things(TM), but it's [not always necessary](https://en.wikipedia.org/wiki/Anonymous_function) – Warbo Nov 10 '15 at 16:47
  • I've up-voted this answer but I would like it even better without the *“The future maintainer […] got a lunch with the CEO”* paragraph. That one is a little noisy. ;-) – 5gon12eder Nov 10 '15 at 21:31
  • 1
    If I ever want to know where a variable is actually used, I never rely on its scope. I just go to the variable, hit `*` (`vim` user...), and my editor shows me *all* of its occurrences. I don't think about this, it's like turning the wheel when driving a car. However, when I see a naked block, it's not clear to me *why* that extra block is there. Is there a need to call some destructor at a specific time? So, the extra block would just send me looking for other raisons d'être than just giving a hint where the use of some variable ends. I.e. it would just slow down my reading of the function. – cmaster - reinstate monica Nov 10 '15 at 21:49
  • @5gon12eder: removed the CEO part which is really not necessary. But left the paragraph because it is experience and testimony speaking. – Laurent LA RIZZA Nov 11 '15 at 10:48
  • @slebetman: I am being hand-wavy here. In noun-kingdom languages (C# and Java), I personally use private access and staticness to limit the scope of applicability of a named function. In C++, I use static linkage not to pollute class definitions with static private methods. But I always express exact scope of variables with unnamed scopes, and these scopes inform me that there may be an unexpressed concept lurking there. And about I/O, I always personally separate I/O calls from policy logic. – Laurent LA RIZZA Nov 11 '15 at 10:58
  • @LaurentLARIZZA: Understood. I come mostly from functional land :) – slebetman Nov 11 '15 at 20:24
  • @cmaster: Well, I have this kind of tools too. Variable usage will always give you information on... variable usage, whether the code is crappy or not. We're not talking about what to do when parsing any old code, (to which searching for variable usage is indeed the sole solution). Resharper even highlights the usages without me asking. We're talking about writing code so that it is better maintainable. Failing to document that the variable has a short usage span is a waste of the future maintainer's time. – Laurent LA RIZZA Sep 27 '16 at 20:25
3

Yes, naked blocks may limit variable scopes further than you would normally limit them. However:

  1. The only gain is an earlier end to the variable's lifetime; you can easily, and entirely unobtrusively, limit the beginning of its lifetime by moving the declaration down to the appropriate place. This is common practice, so we are already halfway to where naked blocks can get you, and this halfway comes for free.

  2. Typically, each variable has its own place where it ceases to be useful, just as each variable has its own place where it should be declared. So, if you wanted to limit all variables' scopes as much as possible, you would in many cases end up with a block for every single variable.

  3. A naked block introduces additional structure to the function, making its operation harder to grasp. Together with point 2, this can render a function with fully restricted scopes pretty near unreadable.

So, I guess, the bottom line is that naked blocks simply don't pay off in the vast majority of cases. There are cases where a naked block may make sense because you need to control where a destructor gets called, or because you have several variables with nearly identical end of use. Yet, especially in the latter case, you should definitely think about factoring the block out into its own function. The situations that are best solved by a naked block are extremely rare.
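
One of those rare cases, sketched in C++ (the names and shared state are made up for illustration): a naked block that ends a lock_guard's lifetime early, so the mutex is released before the slow work that follows.

#include <mutex>
#include <vector>

std::mutex m;                  // assumed shared state for the sketch
std::vector<int> shared_data;

void process(int value) {
    {
        std::lock_guard<std::mutex> lock(m);   // constructor locks the mutex
        shared_data.push_back(value);
    }   // 'lock' is destroyed here: the mutex is released before the work below

    // ... lengthy computation that must not run while holding the lock ...
}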

  • Adding (this) additional structure seems to make the method easier to grasp. I could see immediately that the enclosed statements are logically grouped together, helping to understand the flow of processing. In most modern editors I could also navigate to the start or the end of the block or collapse it, making the remaining code easier to read. – Alchymist Nov 11 '15 at 11:20
3

Yes, naked blocks are very rarely seen, and I think that's because they are very rarely needed.

One place I do use them myself is in switch statements:

case 'a': {
  int x = getAmbientTemperature();
  int y = getBackgroundIllumination();
  setVasculosity(x * y);
  break;
}
case 'b': {
  int x = getUltrification();
  int y = getMendacity();
  setVasculosity(x + y);
  break;
}

I tend to do this whenever I'm declaring variables within the branches of a switch, because it keeps the variables out of scope of subsequent branches.
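
As a minimal C++ sketch of why the braces matter (the values are made up): each braced case gets its own x, whereas without the braces the second declaration would be a redeclaration, and the jump to the later case label would cross the initialization of the first x.

#include <cstdio>

int describe(int selector) {
    switch (selector) {
        case 0: {
            int x = 10;        // local to this case only
            return x * 2;
        }
        case 1: {
            int x = 20;        // a different 'x', perfectly legal thanks to the braces
            return x + 1;
        }
        default:
            return 0;
    }
}

int main() {
    std::printf("%d %d\n", describe(0), describe(1));   // prints "20 21"
}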

Michael Kay
  • 1
    Interesting example which I think highlights one of the risks of using this approach. The break statements do not depend on anything in the block. They are linked with the case statements, however. As such they should be outside of the block. With that modification though, I like this example. – Alchymist Nov 11 '15 at 11:01
2

Additional blocks/scopes add additional verbiage. While the idea carries some initial appeal, you'll soon run into situations where it becomes unfeasible anyway because you have temporary variables with overlapping scope that cannot be properly reflected by a block structure.

So since you cannot make the block structure consistently reflect variable lifetimes anyway as soon as things get slightly more complex, bending over backwards for the simple cases where it does work seems like an exercise in eventual futility.

For data structures with destructors that lock up significant resources during their lifetime, there is the possibility to call the destructor explicitly. As opposed to using a block structure, this does not mandate a particular order of destruction for variables introduced at different points of time.

Of course blocks have their uses when they are tied to logical units: particularly in macro programming, the scope of any variable that is not explicitly named as an output macro argument is best restricted to the macro body itself, in order not to cause surprises when the macro is used several times.
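
A common C/C++ illustration of that point (the macro and its name are made up for the sketch): the do { ... } while (0) block keeps the macro's temporary from leaking into, or colliding with, names in the caller's scope, even when the macro is expanded several times in one function.

#include <cstdio>

#define SWAP_INT(a, b)      \
    do {                    \
        int _tmp = (a);     \
        (a) = (b);          \
        (b) = _tmp;         \
    } while (0)

int main() {
    int x = 1, y = 2, p = 3, q = 4;
    SWAP_INT(x, y);
    SWAP_INT(p, q);   // the '_tmp' from the first expansion does not exist here
    std::printf("%d %d %d %d\n", x, y, p, q);   // prints "2 1 4 3"
}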

But as lifetime markers for variables in sequential execution, blocks tend to be overkill.

user203568
  • 2
    No. If you fear adding this kind of verbiage, your function is too long to begin with, and your objects too large. Add explicit blocks for variables without overlapping scopes, then extract methods for these blocks. The overlapping scopes will become a clean high level policy. There are far more nested scopes out there than overlapping scopes. – Laurent LA RIZZA Nov 10 '15 at 16:13
  • 1
    I fully agree with you except for the explicit destructor call: If you explicitly call a destructor on an automatic variable, that destructor will be called *twice*: once when you call it, and once when the variable goes out of scope. That's rarely what you want. Of course, you may allocate the object on the heap using `new` and `delete`, or use placement-new and an explicit destructor call, but an explicit destructor call alone is almost certainly a bug. – cmaster - reinstate monica Nov 11 '15 at 06:31
1

The following Java code shows what I believe to be one of the best examples of how naked blocks can be useful.

As you can see, the compareTo() method has three comparisons to make, and the results of the first two need to be temporarily stored in a local. The local is just a 'difference' in both cases, but re-using the same local variable is a bad idea; as a matter of fact, a decent IDE can be configured to warn about local variable reuse.

class MemberPosition implements Comparable<MemberPosition>
{
    final int derivationDepth;
    final int lineNumber;
    final int columnNumber;

    MemberPosition( int derivationDepth, int lineNumber, int columnNumber )
    {
        this.derivationDepth = derivationDepth;
        this.lineNumber = lineNumber;
        this.columnNumber = columnNumber;
    }

    @Override
    public int compareTo( MemberPosition o )
    {
        /* first, compare by derivation depth, so that all ancestor methods will be executed before all descendant methods. */
        {
            int d = Integer.compare( derivationDepth, o.derivationDepth );
            if( d != 0 )
                return d;
        }

        /* then, compare by line number, so that methods will be executed in the order in which they appear in the source file. */
        {
            int d = Integer.compare( lineNumber, o.lineNumber );
            if( d != 0 )
                return d;
        }

        /* finally, compare by column number.  You know, just in case you have multiple test methods on the same line.  Whatever. */
        return Integer.compare( columnNumber, o.columnNumber );
    }
}

Note how in this particular case you cannot offload the work to a separate function, as Kilian Foth's answer suggests. So, in cases like this, naked blocks are always my preference. But even in cases where you can in fact move the code to a separate function, I prefer a) keeping things in one place as to minimize the scrolling necessary in order to make sense of a piece of code, and b) not bloating my code with lots of functions. Definitely a good practice.

(Side note: one of the reasons why egyptian curly bracket style truly sucks is that it does not work with naked blocks.)

(Another side note that I remembered now, a month later: the code above is in Java, but I actually picked up the habit from my days of C++, where the closing curly bracket causes destructors to be invoked. This is the RAII bit that ratchet freak also mentions in his answer, and it is not just a good thing, it is pretty much indispensable.)

Mike Nakis
  • IMHO using naked blocks like this is ok. But if I had to write this function myself, I would prefer to use two different result variables `d1` and `d2`, or `dd` and `dl`; that would satisfy any IDE's warning, too, and avoid the naked blocks. In my eyes, fewer braces and less indentation give better readability. – Doc Brown Nov 12 '15 at 13:32
  • Unnecessary inline blocks? Using `Integer.compare` to check for inequality? One letter variable-names? Verbose comments? This code is _terrible_. – BlueRaja - Danny Pflughoeft Nov 13 '15 at 21:09
  • @BlueRaja-DannyPflughoeft 1. `Integer.compare()` is not used to check for inequality here, it is used for relative comparison. And it is the right way to do it. 2. You can either complain about one-letter local variable names or about verbose comments, not both. Pick one. 3. Your *opinion* about my code being terrible is irrelevant to the question of whether this is a "useful answer", which is what the votes are meant to stand for. – Mike Nakis Nov 13 '15 at 23:12
  • What do you mean "egyptian curly bracket style... does not work with naked blocks"? It lets you easily see the difference between a naked block and a controlled block, which is an advantage. – Ben Voigt Jun 04 '16 at 23:42
  • @BenVoigt The egyptian curly bracket style requires the opening curly bracket to be placed at the end of the line of the controlling statement. Naked blocks have no controlling statement. Therefore, the two are incompatible. If you have something else in mind, I'd be curious to see an example. – Mike Nakis Jun 05 '16 at 00:55
  • @MikeNakis: That's what makes naked blocks easy to find... with egyptian brace style, a brace at the beginning of the line (perhaps) following whitespace indicates a naked block. Because the controlling statement is on the same line as the opening brace, you can immediately see that there is no controlling statement. In the "every brace on its own line" style, they all look alike. To find out if there is a controlling statement you have to scroll up. – Ben Voigt Jun 05 '16 at 01:13
  • @BenVoigt okay, I see. Duh, I should have thought of it. – Mike Nakis Jun 05 '16 at 18:07
  • @DocBrown your suggestion creates a possibility to make a programming mistake by using a wrong variable somewhere. Good IDE/static analysis tool would then warn about an unused variable though, but nevertheless, the code is more error-prone. – leventov Jul 10 '19 at 06:31