311

I have no idea what these are actually called, but I see them all the time. The Python implementation is something like:

x += 5 as a shorthand notation for x = x + 5.

But why is this considered good practice? I've run across it in nearly every book or programming tutorial I've read for Python, C, R, and so on. I get that it's convenient, saving three keystrokes including spaces. But they always seem to trip me up when I'm reading code, and at least to my mind, make it less readable, not more.

Am I missing some clear and obvious reason these are used all over the place?

J. Mini
  • 997
  • 8
  • 20
Fomite
  • 2,616
  • 6
  • 18
  • 20
  • @EricLippert: Does C# handle this in the same way as the top answer described? Is it actually more efficient CLR-wise to say `x += 5` than `x = x + 5`? Or is it truly just syntactic sugar as you suggest? – blesh Feb 10 '12 at 00:37
  • 27
    @blesh: That small details of how one expresses an addition in source code have an impact on *efficiency* of the resulting executable code might have been the case in 1970; it certainly is not now. Optimizing compilers are good, and you have bigger worries than a nanosecond here or there. The idea that the += operator was developed "twenty years ago" is obviously false; the late Dennis Ritchie developed C from 1969 through 1973 at Bell Labs. – Eric Lippert Feb 10 '12 at 01:19
  • 1
    See http://blogs.msdn.com/b/ericlippert/archive/2011/03/29/compound-assignment-part-one.aspx (see also part two) – SLaks Feb 10 '12 at 23:54
  • 5
    Most functional programmers will consider this bad practice. – Pete Kirkham May 20 '15 at 11:15
  • You can't have seen this in R. R doesn't have `+=`. – J. Mini Jul 09 '23 at 13:25

16 Answers

611

It's not shorthand.

The += symbol appeared in the C language in the 1970s and, in line with the C idea of a "smart assembler", corresponds to clearly different machine instructions and addressing modes:

Things like "i=i+1", "i+=1" and "++i", although they produce the same effect at an abstract level, correspond at a low level to different ways of operating the processor.

In particular, consider those three expressions, assuming the i variable resides at the memory address stored in a CPU register (let's name it D - think of it as a "pointer to int") and that the ALU of the processor takes a parameter and returns its result in an "accumulator" (let's call it A - think of it as an int).

With these constraints (very common in all microprocessors from that period), the translation will most likely be

;i = i+1;
MOV A,(D); //Move in A the content of the memory whose address is in D
ADD A, 1;  //The addition of an inlined constant
MOV (D), A; //Move the result back to i (this is the '=' of the expression)

;i+=1;
ADD (D),1; //Add an inlined constant to a memory address stored value

;++i;
INC (D); //Just "tick" a memory located counter

The first way of doing it is suboptimal, but it is more general when operating with variables instead of constants (ADD A, B or ADD A, (D+x)) or when translating more complex expressions (they all boil down to pushing low-priority operations onto a stack, calling the high-priority ones, popping, and repeating until all the arguments have been consumed).

The second is more typical of a "state machine": we are no longer "evaluating an expression", but "operating on a value": we still use the ALU, but avoid moving values around, since the result is allowed to replace the parameter. This kind of instruction cannot be used where a more complicated expression is required: i = 3*i + i-2 cannot be operated on in place, since i is needed more than once.

The third - even simpler - does not even consider the idea of "addition", but uses a more "primitive" (in the computational sense) circuit: a counter. The instruction is shorter, loads faster and executes immediately, since the combinatorial network required to retrofit a register into a counter is smaller, and hence faster, than that of a full adder.

With contemporary compilers (still speaking of C, for now) and compiler optimization enabled, the correspondence can be swapped based on convenience, but there is still a conceptual difference in the semantics.

x += 5 means

  • Find the place identified by x
  • Add 5 to it

But x = x + 5 means:

  • Evaluate x+5
    • Find the place identified by x
    • Copy x into an accumulator
    • Add 5 to the accumulator
  • Store the result in x
    • Find the place identified by x
    • Copy the accumulator to it

Of course, optimization can do the following:

  • if "finding x" has no side effects, the two "finding" can be done once (and x become an address stored in a pointer register)
  • the two copies can be elided if the ADD is applied to &x instead to the accumulator

thus making the optimized code coincide with the x += 5 version.

But this can be done only if "finding x" has no side effects, otherwise

*(x()) = *(x()) + 5;

and

*(x()) += 5;

are semantically different, since the side effects of x() (assuming x() is a function doing weird things around and returning an int*) will be produced twice or once, respectively.

The equivalence between x = x + y and x += y is hence due to the particular case where += and = are applied to a direct l-value.
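The same distinction is observable in Python with a subscript target whose container comes from a side-effecting call. A small sketch, where the made-up locate() stands in for the x() above:

```python
calls = []
box = {"k": 0}

def locate():
    # Stand-in for the side-effecting x() above: each call is recorded.
    calls.append("lookup")
    return box

locate()["k"] = locate()["k"] + 5   # target expression evaluated twice
plain_calls = len(calls)

calls.clear()
locate()["k"] += 5                  # target expression evaluated once
compound_calls = len(calls)

print(plain_calls, compound_calls)  # 2 1
```

Both statements add 5 to the same slot, but the plain assignment triggers the side effect twice, the compound one only once.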

Moving on to Python: it inherited this syntax from C, but since in interpreted languages there is no translation / optimization step before execution, things are not necessarily so intimately related. However, an interpreter can dispatch to different execution routines for the three types of expression, taking advantage of different machine code depending on how the expression is formed and on the evaluation context.
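In CPython this dispatch is visible in the compiled bytecode: the two spellings produce different instruction sequences (the exact opcode names vary by version: BINARY_ADD vs INPLACE_ADD on older interpreters, BINARY_OP with different operator arguments on 3.11+). A minimal check:

```python
import dis

# Compare the instruction sequences of the two spellings.
plain = [(i.opname, i.arg) for i in dis.get_instructions(
    compile("x = x + 5", "<demo>", "exec"))]
compound = [(i.opname, i.arg) for i in dis.get_instructions(
    compile("x += 5", "<demo>", "exec"))]

print(plain != compound)  # True: the interpreter sees two different operations
```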


For those who would like more detail...

Every CPU has an ALU (arithmetic-logic unit) that is, in its very essence, a combinatorial network whose inputs and outputs are "plugged" into the registers and/or memory depending on the opcode of the instruction.

Binary operations are typically implemented as "modify an accumulator register with an input taken from somewhere", where somewhere can be:

  • inside the instruction stream itself (typical for manifest constants: ADD A 5)
  • inside another register (typical for expression computation with temporaries: e.g. ADD A B)
  • inside memory, at an address given by a register (typical of data fetching, e.g. ADD A (H)) - H, in this case, works like a dereferencing pointer.

With this pseudocode, x += 5 is

ADD (X) 5

while x = x+5 is

MOVE A (X)
ADD A 5
MOVE (X) A

That is, x+5 gives a temporary that is later assigned. x += 5 operates directly on x.

The actual implementation depends on the real instruction set of the processor: if there is no ADD (.) c opcode, the first code must become the second; there is no other way.

If there is such an opcode and optimizations are enabled, the second expression, after eliminating the reverse moves and adjusting the register opcodes, becomes the first.

mt3
  • 101
  • 4
Emilio Garavaglia
  • 4,289
  • 1
  • 22
  • 23
  • 93
    +1 for the only answer explaining that it used to map to different (and more efficient) machine code back in the olden days. – Péter Török Feb 09 '12 at 08:47
  • C is not an assembly language. Assembly language programs specify CPU instructions; C programs specify behavior. – Keith Thompson Feb 09 '12 at 09:17
  • 11
    @KeithThompson That is true but one cannot deny that assembly had a huge influence over the design of the C language (and subsequently all C style languages) – MattDavey Feb 09 '12 at 09:39
  • 3
    @MattDavey: Don't underestimate the influence that C has since had on the design of assembly. E.g. the slow introduction of multi-core CPU's can be linked to the absence of threading in ISO C, and the complexity of it in POSIX C (compared to say Erlang). – MSalters Feb 09 '12 at 12:12
  • 51
    Erm, "+=" doesn't map to "inc" (it maps to "add"), "++" maps to "inc". – Brendan Feb 09 '12 at 12:46
  • +1 I was going to put something about the difference reflects the way that the operation is done in assembly. – Jonathan Henson Feb 09 '12 at 15:38
  • 1
    I'm not sure this is true, I think Brendans comment is onto something. Also, the answer is very specific to C / C++. Does this logic even hold up in something like Java / C# which run in virtual machines? – Andy Feb 09 '12 at 15:44
  • 13
    By "20 years ago", I think you mean "30 years ago". And BTW, COBOL had C beat by another 20 years with: ADD 5 TO X. – JoelFan Feb 09 '12 at 16:29
  • @Andy The point is that the logic describes the thinking in the ancestral language (C) in which the syntax was invented; the descendant languages (Java, C#) have just inherited the syntax. It no longer matters whether the logic applies to those environments. – phoog Feb 09 '12 at 16:37
  • +1 for discussion of side effects. It's a bit abstract, though; maybe it should be specified that `x()` returns an `int&`, or use an example of `std::map::operator[]` or the like. – fluffy Feb 09 '12 at 16:50
  • @fluffy I'd make it more clear that this is the historical reason, as the answer makes it sound like the only one. Since the question is tagged with R and Python which appeared in the 90s and to which the answer won't apply (exept that they borrowed from an older language). – Andy Feb 09 '12 at 17:58
  • @Andy good point, I should pay closer attention to the tags. I'm not familiar enough with Python, but does it have a concept of property getters/setters? (Although I suppose in that case the two constructs would be equivalent.) – fluffy Feb 09 '12 at 18:29
  • 11
    Great in theory; wrong in facts. The x86 ASM INC only adds 1, so it doesn't affect the "add and assign" operator discussed here (this would be a great answer for "++" and "--" though). – Mark Brackett Feb 09 '12 at 18:45
  • 8
    @joelFan: By "30 years ago" I think you mean "43 years ago". – Eric Lippert Feb 10 '12 at 01:20
  • While "not inc, add" is correct, `+=` would still use a different addressing mode to `+`. In a naive translation, this could still avoid some memory accesses, replacing multiple instructions with just one. Proviso - I don't know the assembler that C was originally designed for, and am guessing based on general assembler principles. –  Feb 10 '12 at 01:26
  • I really love the `*(x()) = *(x()) + 5; / *(x()) += 5;` comparison. It sums up the previous discussion just perfectly! – Julian F. Weinert May 19 '15 at 22:12
292

Depending on how you think about it, it's actually easier to understand because it's more straightforward. Take, for example:

x = x + 5 invokes the mental processing of "take x, add five to it, and then assign that new value back to x"

x += 5 can be thought of as "increase x by 5"

So, it's not just shorthand, it actually describes the functionality much more directly. When reading through gobs of code, it's much easier to grasp.

Eric King
  • 10,876
  • 3
  • 41
  • 55
  • 34
    +1, I totally agree. I got into programming as a kid, when my mind was easily malleable, and `x = x + 5` still troubled me. When I got into maths at a later age, it bothered me even more. Using `x += 5` is significantly more descriptive and makes much more sense as an expression. – Polynomial Feb 09 '12 at 12:14
  • 49
    There's also the case where the variable has a long name: `reallyreallyreallylongvariablename = reallyreallyreallylongvariablename + 1` ... oh noes!!! a typo –  Feb 09 '12 at 13:44
  • 9
    @Matt Fenwick: It doesn't have to be a long variable name; it could be an expression of some sort. Those are likely to be even harder to verify, and the reader has to spend a lot of attention to make sure they're the same. – David Thornley Feb 09 '12 at 14:57
  • 34
    X=X+5 is sort of a hack when your goal is to increment x by 5. – JeffO Feb 10 '12 at 01:08
  • While I've agreed with this get-rid-of-the-clutter principle for a long time, I've recently started to question it. I'm buying into Simon Peyton Jones argument that there's more benefits from multicore if you mostly avoid mutating variables, doing as much as possible in a pure functional way. Notations that make mutating convenient are, from that viewpoint, a bad thing. –  Feb 10 '12 at 01:38
  • I agree, I see it as more of a "Mental" shorthand than a typing shorthand. If I had a case where I actually wanted to add 5 to x and store it in x (as opposed to increment x by 5) I'd probably write it that way, but I can't come up with a case where that makes much sense. – Bill K Feb 10 '12 at 17:22
  • 1
    @Polynomial I think I came across the concept of x = x + 5 as a kid also. 9 years old if I remember correctly - and it made perfect sense to me and it still makes perfect sense to me now and much prefer it to x += 5. The first is much more verbose and I can imagine the concept much more clearly - the idea that I'm assigning x to be the previous value of x (whatever that is) and then adding 5. I guess different brains work in different ways. – Chris Harrison May 20 '15 at 03:39
  • "Pensi davvero che sia più semplice?" / "Do you really think that's simpler?" Which of the two is "simpler" to understand is a matter of culture (and may be different among individuals) not symbols. – Emilio Garavaglia Jul 26 '15 at 07:23
  • Well that all sounds more like a theory. I don't think of anything like what you describe when I read the code. For me much more important is that the image is "clean" and something like x += 1 is anything but a clean image and causes quite noticable readability worsening. – Mikhail V Nov 16 '16 at 06:42
  • Also it is not clear what exactly you want to "grasp" in gobs of code. If that is only the last part of expression ("5" in your example), then well may be. But I personally most of the time want to grasp the *operation*, namely if it is summation or decrement. So the part on the right side of x = (x + 5) tells it in much clearer and readable way. So I can't even imagine what makes you think that these shortcuts can be ever easier on eyes. – Mikhail V Nov 16 '16 at 07:24
  • @MikhailV I concede that 'readability' is subjective. What I find readable may look foreign to you, and vice-versa. However, But I still submit that `x += 5` is _not a shortcut_... It's precisely describing exactly the operation to perform. If anything, `x = x+5` is the long-winded way (a long-cut?) of accomplishing the task. – Eric King Nov 16 '16 at 15:08
  • @EricKing, it is kind of opinion based, but in this case I try to be objective and simply compare two syntaxes visually. And just imagine for a moment, that you could write simply (x + 5) without x= part. Would not it "describe" the operation also? It is not a valid syntax, but I visually parse this expression in gobs of code, and its clean. And += is nowhere as clean. – Mikhail V Nov 16 '16 at 17:54
  • @MikhailV No, I disagree because the `x + 5` fragment is not telling the whole picture. What are you doing with `x + 5` ? You are assigning it back to `x`. Which is much more clearly expressed with `x += 5`. I think that your discomfort with the `+=` syntax comes simply from unfamiliarity.I don't believe `+=` is inherently any less 'clean' than the alternative. Quite the opposite in fact. – Eric King Nov 16 '16 at 18:23
  • In other words, you are taking one logical operation (increase x by 5) and breaking it into two operations (add 5 to x, update x to the new value). If the _intent_ is to increase x by 5, then the += syntax is simpler, more accurate and more expressive of intent. The x = x+5 syntax misses on all points. – Eric King Nov 16 '16 at 18:29
  • Yes it expresses something. I was not even telling about it, so I adressed only readability issue, since you have touched the "gobs of code" in your answer. But even if we talk about increment "expression" (which you don't need for python's integers, since you can't increment like in C): a better expression would be "add x, 5" or "do (x + 5)" for example. Those are invented things but they does not have readability issues at least, which is the most important thing about coding. For me x = x + 5 expresses absolutely clear my intention to increase x by 5 and I don't need to focus eyes on it. – Mikhail V Nov 16 '16 at 21:38
  • An why I must get familiar with ugly syntax? Because of convention? No thanks, we will not go far with such logic. – Mikhail V Nov 16 '16 at 21:48
  • @MikhailV I understand. We disagree about the fundamentals of this issue, and that's ok. I do, though, think that it's only "ugly" to you _precisely because_ you are unfamiliar with it. I highly suspect that if you had learned the += syntax first, the opposite would look ugly, for the reasons I outlined. To each his own. – Eric King Nov 16 '16 at 22:40
52

At least in Python, x += y and x = x + y can do completely different things.

For example, if we do

a = []
b = a

then a += [3] will result in a == b == [3], while a = a + [3] will result in a == [3] and b == []. That is, += modifies the object in-place (well, it might do, you can define the __iadd__ method to do pretty much anything you like), while = creates a new object and binds the variable to it.

This is very important when doing numerical work with NumPy, as you frequently end up with multiple references to different parts of an array, and it is important to make sure you don't inadvertently modify part of an array that there are other references to, or needlessly copy arrays (which can be very expensive).
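The aliasing example above can be checked directly at the interpreter prompt; a self-contained version:

```python
a = []
b = a              # b is another name for the same list object
a += [3]           # in-place: list.__iadd__ mutates the shared list
inplace_result = (a, b)        # ([3], [3])

a = []
b = a
a = a + [3]        # builds a brand-new list and rebinds a; b keeps the old one
rebind_result = (a, b)         # ([3], [])

print(inplace_result, rebind_result)
```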

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
James
  • 101
  • 1
  • 4
  • 4
    +1 for `__iadd__`: there are languages where you can create an immutable reference to a mutable datastructure, where the `operator +=` is defined, e.g. scala: `val sb = StringBuffer(); sb += "mutable structure"` vs `var s = ""; s += "mutable variable"`. The former modifies the content of the datastructure while the latter makes the variable point to a new one. – flying sheep Feb 10 '12 at 13:08
45

It is called an idiom. Programming idioms are useful because they are a consistent way of writing a particular programming construct.

Whenever someone writes x += y you know that x is being incremented by y and not some more complex operation (as a best practice, typically I wouldn't mix more complicated operations and these syntax shorthands). This makes the most sense when incrementing by 1.

Joe
  • 299
  • 2
  • 5
  • ...which is why, in C, there's an even more specific shorthand notation for that: `++x` (and/or `x++`). – Ilmari Karonen Feb 09 '12 at 13:17
  • 13
    x++ and ++x are slightly different to x+=1 and to each other. – Gary Willoughby Feb 09 '12 at 13:44
  • 2
    @Gary: `++x` and `x+=1` are equivalent in C and Java (maybe also C#) though not necessarily so in C++ because of the complex operator semantics there. The key is that they both evaluate `x` once, increment the variable by one, and have a result that is the content of the variable after evaluation. – Donal Fellows Feb 10 '12 at 10:02
  • My comment was to highlight that Ilmari's comment might not be accurate. – Gary Willoughby Feb 10 '12 at 18:52
  • 3
    @Donal Fellows: The precedence is different, so `++x + 3` isn't the same as `x += 1 + 3`. Parenthesize the `x += 1` and it's identical. As statements by themselves, they're identical. – David Thornley Feb 10 '12 at 21:21
  • 1
    @DavidThornley: It's hard to say "they're identical" when they both have undefined behaviour :) – Lightness Races in Orbit May 19 '15 at 21:36
  • 2
    None of the expressions `++x + 3` , `x += 1 + 3` or `(x += 1) + 3` have undefined behavior (assuming the resulting value "fits"). – John Hascall May 19 '15 at 22:55
42

To put @Pubby's point a little clearer, consider someObj.foo.bar.func(x, y, z).baz += 5

Without the += operator, there are two ways to go:

  1. someObj.foo.bar.func(x, y, z).baz = someObj.foo.bar.func(x, y, z).baz + 5. This is not only awfully redundant and long, it's also slower. Therefore one would have to
  2. Use a temporary variable: tmp := someObj.foo.bar.func(x, y, z); tmp.baz = tmp.bar + 5. This is ok, but it's a lot of noise for a simple thing. This is actually really close to what happens at runtime, but it's tedious to write and just using += will shift the work to the compiler/interpreter.

The advantage of += and other such operators is undeniable, while getting used to them is only a matter of time.
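The two options can be contrasted in Python. Node, func and cache below are invented stand-ins for the someObj.foo.bar.func(x, y, z).baz chain in the pseudocode:

```python
class Node:
    # Hypothetical object standing in for the result of the long chain.
    def __init__(self):
        self.baz = 5

cache = Node()

def func(x, y, z):
    # Imagine an expensive or side-effecting lookup here.
    return cache

# Option 2 from above: an explicit temporary.
tmp = func(1, 2, 3)
tmp.baz = tmp.baz + 5          # baz: 5 -> 10

# With +=, the target expression is written (and evaluated) only once.
func(1, 2, 3).baz += 5         # baz: 10 -> 15

print(cache.baz)  # 15
```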

back2dos
  • 29,980
  • 3
  • 73
  • 114
  • Once you are 3 levels deep into the object chain, you can stop caring about small optimizations like 1, and noisy code like 2. Instead, think over your design one more time. – Dorus Feb 09 '12 at 11:48
  • 4
    @Dorus: The expression I chose is just an arbitrary representative for "complex expression". Feel free to replace it by something in your head, that you wont nitpick about ;) – back2dos Feb 09 '12 at 12:50
  • 9
    +1: This is the principle reason for this optimization -- it's **always** correct, no matter how complex the left-hand-side expression is. – S.Lott Feb 09 '12 at 13:24
  • 9
    Putting a typo in there is a nice illustration of what can happen. – David Thornley Feb 09 '12 at 14:59
25

It's true that it's shorter and easier, and it's true that it was probably inspired by the underlying assembly language, but the reason it's best practice is that it prevents a whole class of errors, and it makes it easier to review the code and be sure what it does.

With

RidiculouslyComplexName += 1;

Since there's only one variable name involved, you're sure what the statement does.

With RidiculouslyComplexName = RidiculosulyComplexName + 1;

There's always doubt that the two sides are exactly the same. Did you see the bug? It gets even worse when subscripts and qualifiers are present.

Jamie Cox
  • 101
  • 1
  • 3
  • 6
    It gets worse yet in languages where assignment statements can implicitly create variables, especially if the languages are case-sensitive. – supercat Jun 12 '14 at 22:35
17

While the += notation is idiomatic and shorter, these are not the reasons why it is easier to read. The most important part of reading code is mapping syntax to meaning, and so the closer the syntax matches the programmer's thought processes, the more readable it will be (this is also the reason why boilerplate code is bad: it is not part of the thought process, but still necessary to make the code function). In this case, the thought is "increment variable x by 5", not "let x be the value of x plus 5".

There are other cases where a shorter notation is bad for readability, for example when you use a ternary operator where an if statement would be more appropriate.

tdammers
  • 52,406
  • 14
  • 106
  • 154
15

For some insight to why these operators are in the 'C-style' languages to begin with, there's this excerpt from K&R 1st Edition (1978), 34 years ago:

Quite apart from conciseness, assignment operators have the advantage that they correspond better to the way people think. We say "add 2 to i" or "increment i by 2," not "take i, add 2, then put the result back in i." Thus i += 2. In addition, for a complicated expression like

yyval[yypv[p3+p4] + yypv[p1+p2]] += 2

the assignment operator makes the code easier to understand, since the reader doesn't have to check painstakingly that two long expressions are indeed the same, or wonder why they're not. And an assignment operator may even help the compiler to produce more efficient code.

I think it's clear from this passage that Brian Kernighan and Dennis Ritchie (K&R), believed that compound assignment operators helped with code readability.

It's been a long time since K&R wrote that, and a lot of the 'best practices' about how people should write code has changed or evolved since then. But this programmers.stackexchange question is the first time I can recall someone voicing a complaint about the readability of compound assignments, so I wonder if many programmers find them to be a problem? Then again, as I type this the question has 95 upvotes, so maybe people do find them jarring when reading code.

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Michael Burr
  • 161
  • 5
9

Besides readability, they actually do different things: += doesn't have to evaluate its left operand twice.

For instance, expr = expr + 5 would evaluate expr twice (assuming expr is impure).

Pubby
  • 3,290
  • 1
  • 21
  • 26
  • For all but the strangest compilers, it does not matter. Most compilers are smart enough to generate the same binary for `expr = expr + 5` and `expr += 5` – vsz Feb 09 '12 at 07:28
  • 7
    @vsz Not if `expr` has side effects. – Pubby Feb 09 '12 at 07:36
  • 7
    @vsz: In C, if `expr` has side effects, then `expr = expr + 5` *must* invoke those side effects twice. – Keith Thompson Feb 09 '12 at 09:18
  • @Pubby: if it has side effects, there is no issue about "convenience" and "readability", which was the point of the original question. – vsz Feb 09 '12 at 11:14
  • 2
    Better be careful about those "side effects" statements. Reads and writes to `volatile` are side effects, and `x=x+5` and `x+=5` have the same side effects when `x` is `volatile` – MSalters Feb 09 '12 at 12:13
6

It's concise.

It's much shorter to type. It involves fewer operators. It has less surface area and less opportunity for confusion.

It uses a more specific operator.

This is a contrived example, and I'm not sure if actual compilers implement this. x += y actually uses one argument and one operator and modifies x in place. x = x + y could have an intermediate representation of x = z where z is x + y. The latter uses two operators, addition and assignment, and a temporary variable. The single operator makes it super clear that the value side can't be anything other than y and doesn't need to be interpreted. And there could theoretically be some fancy CPU that has a plus-equals operator that runs faster than a plus operator and an assignment operator in series.

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
Mark Canlas
  • 3,986
  • 1
  • 29
  • 36
  • 2
    Theoretically schmeoretically. The `ADD` instructions on many CPUs have variants that operate directly on registers or memory using other registers, memory or constants as the second addend. Not all combinations are available (e.g. add memory to memory), but there are enough to be useful. At any rate, any compiler with a decent optimizer will know to generate the same code for `x = x + y` as it would for `x += y`. – Blrfl Feb 09 '12 at 11:43
  • Hence "contrived". – Mark Canlas Feb 09 '12 at 15:37
6

Besides the obvious merits which other people described very well, when you have very long names it is more compact.

  MyVeryVeryVeryVeryVeryLongName += 1;

or

  MyVeryVeryVeryVeryVeryLongName =  MyVeryVeryVeryVeryVeryLongName + 1;
5

It is a nice idiom. Whether it is faster or not depends on the language. In C, it is faster because it translates to an instruction to increase the variable by the right hand side. Modern languages, including Python, Ruby, C, C++ and Java all support the op= syntax. It's compact, and you get used to it quickly. Since you will see it a whole lot in other peoples' code (OPC), you may as well get used to it and use it. Here is what happens in a couple of other languages.

In Python, typing x += 5 still causes the creation of a new integer object for the result (although it may be drawn from a pool) and the orphaning of the integer object that x previously referred to.

In Java, it causes an implicit cast to occur. Try typing

int x = 4;
x = x + 5.2  // This causes a compiler error
x += 5.2     // This is not an error; an implicit cast is done.
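The Python point about new integer objects can be observed through object identity. A CPython-specific sketch (500 is chosen to sidestep the small-int cache):

```python
x = 500                 # large enough to avoid CPython's small-int cache
before = id(x)
x += 5                  # ints are immutable: x is rebound to a new object
after = id(x)

print(before == after)  # False
```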
Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
ncmathsadist
  • 510
  • 6
  • 8
  • Modern _imperative_ languages. In (pure) functional languages `x+=5` has as little meaning as `x=x+5`; for instance in Haskell the latter does _not_ cause `x` to be incremented by 5 – instead it incurs an infinite recursion loop. Who'd want a shorthand for that? – leftaroundabout Feb 09 '12 at 20:06
  • It's not necessarily faster at all with modern compilers : even in C. – RichieHH May 19 '15 at 21:47
  • The speed of `+=` is not so much language dependent as it is *processor* dependent. For instance, X86 is a two address architecture, it only supports `+=` natively. A statement like `a = b + c;` must be compiled as `a = b; a += c;` because there is simply no instruction that can put the result of the addition anywhere else than into the place of one of the summands. The Power architecture, by contrast, is a three address architecture that does not have special commands for `+=`. On this architecture, the statements `a += b;` and `a = a + b;` always compile to the same code. – cmaster - reinstate monica Jul 19 '15 at 14:26
  • If one results in different object code to the other, your compiler is extremely low quality. – Toby Speight Jul 09 '23 at 13:42
4

Operators such as += are very useful when you're using a variable as an accumulator, i.e. a running total:

x += 2;
x += 5;
x -= 3;

Is a lot easier to read than:

x = x + 2;
x = x + 5;
x = x - 3;

In the first case, conceptually, you're modifying the value in x. In the second case, you're computing a new value and assigning it to x each time. And while you'd probably never write code that's quite that simple, the idea remains the same... the focus is on what you're doing to an existing value instead of creating some new value.

Caleb
  • 38,959
  • 8
  • 94
  • 152
  • For me it is exatcly the opposite - first variant is like dirt on the screen and second is clean and readable. – Mikhail V Nov 16 '16 at 06:10
  • 2
    Different strokes for different folks, @MikhailV, but if you're new to programming you may change your view after a while. – Caleb Nov 16 '16 at 07:34
  • I am not so new to programming, I just think you've simply got used to += notation over a long time and therefore you can read it. Which does not initially make it any good looking syntax, so a + operator surrounded by spaces is objectively cleaner then += and friends which are all barely distinguishable. Not that your answer is incorrect, but one should not too much rely on own habit making consumptions how "easy it is to read". – Mikhail V Nov 16 '16 at 17:31
  • My feeling is that it's more about the conceptual difference between accumulating and storing new values than about making the expression more compact. Most imperative programming languages have similar operators, so I seem not to be the only one who finds it useful. But like I said, different stroke for different folks. Don't use it if you don't like it. If you want to continue the discussion, perhaps [chat] would be more appropriate. – Caleb Nov 16 '16 at 19:19
1

Consider this

(some_object[index])->some_other_object[more] += 5

Do you really want to write

(some_object[index])->some_other_object[more] = (some_object[index])->some_other_object[more] + 5
S.Lott
  • 45,264
  • 6
  • 90
  • 154
1

Say it once and only once: in x = x + 1, I say 'x' twice.

But do not ever write 'a = b += 1', or we will have to kill 10 kittens, 27 mice, a dog and a hamster.


You should avoid changing the value of a variable at all, since immutability makes it easier to prove the code correct (see functional programming). However, if you do, then it is better to say things only once.

ctrl-alt-delor
  • 570
  • 4
  • 9
1

The other answers target the more common cases, but there is another reason: In some programming languages, it can be overloaded; e.g. Scala.


Small Scala lesson:

var j = 5 // Creates a variable
j += 4    // Compiles

val i = 5 // Creates a constant
i += 4    // Doesn’t compile

If a class only defines the + operator, x += y is indeed a shortcut for x = x + y.

If a class overloads +=, however, they are not:

var a = ""
a += "This works. a now points to a new String."

val b = ""
b += "This doesn’t compile, as b cannot be reassigned."

val c = StringBuffer() #implements +=
c += "This works, as StringBuffer implements “+=(c: String)”."

Additionally, the operators + and += are two separate operators (and not only these: +a, ++a, a++, a+b and a += b are different operators as well); in languages where operator overloading is available this might create interesting situations. Just as described above: if you overload the + operator to perform addition, bear in mind that += will have to be overloaded as well.
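Python offers the same split: a class can overload + (__add__) and += (__iadd__) independently. A sketch with an invented Bag class:

```python
class Bag:
    # Invented example: + returns a new Bag, += mutates the receiver in place.
    def __init__(self, items=()):
        self.items = list(items)

    def __add__(self, other):       # used by a + b
        return Bag(self.items + other.items)

    def __iadd__(self, other):      # used by a += b
        self.items.extend(other.items)
        return self

a = Bag([1])
alias = a
a = a + Bag([2])    # new object: alias still sees [1]
a += Bag([3])       # in place: a is mutated, no new object created

print(a.items, alias.items)   # [1, 2, 3] [1]
```

If __iadd__ is not defined, Python falls back to __add__ and rebinds, so defining only + gives the "shortcut" behaviour described above.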

Peter Mortensen
  • 1,050
  • 2
  • 12
  • 14
flying sheep
  • 139
  • 6
  • "If a class only defines the + operator, x+=y is indeed a shortcut of x=x+y." But surely it also works the other way around? In C++, it's very common to first define `+=` and then define `a+b` as "make a copy of `a`, put `b` on it using `+=`, return the copy of `a`". – leftaroundabout Feb 11 '12 at 13:42