7

Java garbage collection takes care of dead objects on the heap, but sometimes freezes the world. In C++ I have to call delete to dispose of a created object at the end of its life cycle.

This delete seems like a very low price to pay for a non-freezing environment. Placing all relevant delete keywords is a mechanical task. One could write a script that walks through the code and places a delete once no remaining branch uses a given object.

So, what are the pros and cons of Java's built-in model vs. C++'s do-it-yourself model of garbage collection?


I don't want to start a C++ vs Java flame war. My question is different.
This whole GC thing - does it boil down to "just be neat, don't forget to delete objects you have created - and you will not need any dedicated GC"? Or is it more like "disposing of objects in C++ is really tricky - I spend 20% of my time on it and yet memory leaks are commonplace"?

Nicol Bolas
sixtytrees
    I find it difficult to believe that your magic script could exist. Apart from anything wouldn't it have to contain a solution to the halting problem. Or alternatively (and more likely) only work for incredibly simple programs – Richard Tingle Jun 16 '16 at 19:07
  • 1
    Also modern java doesn't freeze except under extreme condition where vast quantities of garbage are created – Richard Tingle Jun 16 '16 at 19:09
  • This isn't really a halting problem. This is rather a "busy beaver" problem. – sixtytrees Jun 16 '16 at 19:11
  • I have never had garbage collection make a material impact on performance of my code in either java or c#. Am I in the minority? – Thomas Carlisle Jun 16 '16 at 23:06
  • 1
    @ThomasCarlisle well it can - and occasionally does - impact performance. But there are lots of parameters to tweak in these cases, and sometimes the solution may be to switch to a different gc altogether. It all depends on the amount of available resources and the typical load. – Hulk Jun 17 '16 at 00:24
  • 1
    @ThomasCarlisle There's a world of difference between having an impact and that impact being noticed. The latter is especially unlikely if your resources far exceed the requirements, or you don't actually have something to compare against. – Deduplicator Jun 17 '16 at 11:30
  • 4
    "*Placing all relevant delete keywords is a mechanical task*" -- That's why there are tools to detect memory leaks. Because it is so simple and not error-prone at all. – JensG Jun 17 '16 at 13:35
  • It's worth pointing out that allocating memory dynamically with `new` and `delete` involves similar levels of bookkeeping and complex algorithms as a garbage collector. You can't just naively hand out chunks of memory or it'll end up full of holes. – Doval Jun 17 '16 at 15:41
  • I'm not saying that garbage collection doesn't have an impact. I am just pointing out that I have not had to spend time troubleshooting or working around GC "freezes", and have never had it happen that garbage collection was a significant bottleneck to meeting processing time objectives. There is a lot more going on in memory management than just making sure every object is disposed to prevent leaks. Whether you code in c++ and mange memory yourself or use a GC language and distance yourself from those details to focus more on the business logic, it takes compute cycles to manage memory. – Thomas Carlisle Jun 17 '16 at 19:57
  • 1
    @ThomasCarlisle: On mobile platforms, especially in games, GC matters a lot. – Thomas Jun 20 '16 at 16:30
  • @Thomas And in contexts like that it's often worth the effort to use a language that explicitly manages memory, so that you can better address that. For the large percentage of applications that don't have such concerns, they can feel comfortable using a managed memory language. – Servy Jun 20 '16 at 19:24
  • 2
    @Doval, if you're using RAII correctly, which is pretty simple, there's almost no bookkeeping at all. `new` goes in the constructor of your class which manages the memory, `delete` goes in the destructor. From there, it's all automatic (storage). N.B. that this works for all types of resources, not just memory, unlike garbage collection. Mutexes are taken in a constructor, released in a destructor. Files are opened in a constructor, closed in a destructor. – Rob K Jun 20 '16 at 20:07
  • @RobK By bookkeeping I wasn't referring to code, I was talking about the memory allocator, i.e. the work `new` and `delete` have to do behind the scenes to keep allocation and deallocation efficient. – Doval Jun 20 '16 at 20:10
  • @sixtytrees - "This isn't really a halting problem. This is rather a "busy beaver" problem." - the two are equivalent. A solution to the halting problem can be turned into a solution to the busy beaver problem by enumerating all n-state turing machines, filtering those that do not halt, then running the ones that do halt to find the one with the largest output. – Jules Jun 21 '16 at 07:36
  • And a solution to the busy beaver problem can be turned into a solution to the halting problem by running machines until they either (1) halt, (2) repeat a state with identical tape content or (3) use more tape than the busy beaver output for turing machines with the same number of states. In either of cases 2 or 3 we can conclude that the machine does not halt. If (1) or (2) does not happen, we can conclude that (3) must eventually happen as all other possible tape configurations are used up. – Jules Jun 21 '16 at 07:43

8 Answers

18

The C++ object lifecycle

If you create local objects, you don't need to delete them: the compiler generates code that destroys them automatically when they go out of scope.

If you use pointers and create objects on the free store, then you have to take care of deleting each object when it's no longer needed (as you have described). Unfortunately, in complex software this can be much more challenging than it looks (e.g. what if an exception is raised and the expected delete is never reached?).

Fortunately, in modern C++ (i.e. C++11 and later), you have smart pointers, such as shared_ptr. They reference-count the created object very efficiently, a little like a garbage collector would in Java. As soon as the object is no longer referenced, the last active shared_ptr deletes the object for you. Automatically. Like a garbage collector, but one object at a time and without delay (OK: you need some extra care and weak_ptr to cope with circular references).

Conclusion: nowadays you can write C++ code without having to worry about memory deallocation, code that is as leak-free as with a GC, but without the freeze effect.

The Java object lifecycle

The nice thing is that you don't have to worry about the object lifecycle. You just create objects and Java takes care of the rest. A modern GC will identify and destroy objects that are no longer needed (even if there are circular references between dead objects).

Unfortunately, with this comfort comes a loss of control: you have no real control over when an object is actually deleted. Semantically, deletion/destruction coincides with garbage collection.

This is perfectly fine if you look at objects only in terms of memory. Except for the freezes, but those are not inevitable (people are working on this). I'm no Java expert, but I think the delayed destruction makes it harder to identify leaks in Java caused by references accidentally kept to objects that are no longer needed (i.e. you can't really monitor the deletion of objects).

But what if an object has to control resources other than memory, for example an open file, a semaphore, or a system service? Your class must provide a method to release these resources, and you'll have the responsibility of making sure this method is called whenever the resources are no longer needed, on every possible branching path through your code, including in case of exceptions. The challenge is very similar to explicit deletion in C++.

Conclusion: the GC solves the memory management issue, but it doesn't address the management of other system resources. The absence of "just-in-time" deletion can make resource management very challenging.

Deletion, garbage collection, and RAII

When you can control the deletion of an object, and a destructor is invoked at deletion, you can take advantage of RAII. This approach views memory as just one special case of resource allocation and ties resource management safely to the object life cycle, thus ensuring tightly controlled usage of resources.

Christophe
  • 3
    The beauty of (modern) garbage collectors is that you don't really need to think about cyclic references. Such groups of Objects that are unreachable except from each other get detected and collected. That is a huge advantage over simple reference counting/smart pointers. – Hulk Jun 17 '16 at 00:37
  • 6
    +1 The "exceptions" part cannot be stressed enough. The presence of exceptions make the hard and pointless task of manual memory management virtually impossible, and thus manual memory management isn't used in C++. Use RAII. Don't use `new` outside of constructors/smart pointers, and never use `delete` outside of a destructor. – Felix Dombek Jun 17 '16 at 00:39
  • @Hulk You have a point here ! Although most of my arguments are still valid, GC made a lot of progress. And circular references are indeed difficult to handle with reference counting smart pointers alone. So I edited my answer accordingly, in order to keep a better balance. I also added a reference to an article about possible strategies to mitigate the freeze effect. – Christophe Jun 17 '16 at 06:07
  • 3
    +1 The management of resources other than memory does indeed require some extra effort for GC languages because RAII does not work if there is no fine-grained control over when (or if) deletion/finalisation of an object happpens. Special constructs are available in most of these languages (see java's [try-with-resources](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html), for example), which can, in combination with compiler warnings, help to get these things right. – Hulk Jun 17 '16 at 07:59
  • 1
    `Like garbage collector, but one object at a time and without delay (Ok: you need some extra care and weak_ptr to cope with circular references).` Reference counting can cascade though. E.g. The last reference to `A` goes away, but `A` also has the last reference to `B`, who has the last reference to `C`... `And you'll have the responsibility to make sure that this method is called when the resources are no longer needed. In every possible branching path through your code, ensuring it is also invoked in case of exceptions.` Java and C# have special block statements for this. – Doval Jun 17 '16 at 15:39
  • How can `weak_ptr` help with circular references? – curiousguy Jun 19 '16 at 20:09
  • @curiousguy See https://visualstudiomagazine.com/articles/2012/10/19/circular-references.aspx and http://stackoverflow.com/a/4984581/3723423 for example – Christophe Jun 19 '16 at 21:52
  • @Christophe I see: with a non trivial layer built using `weak_ptr`, you can manage circular references. – curiousguy Jun 20 '16 at 05:42
  • @Doval - indeed, one of the primary performance gains cited for GC is that disposing large structures with multiple internal references can happen out-of-band (potentially even in another thread) for GC whereas it typically happens during execution of the operation that caused the disposal and on the same thread for reference-counted implementations. In the extreme, e.g. in short-lived utility programs, GC can often *completely ignore* the requirement to delete such structures, which can be a huge gain.... – Jules Jun 21 '16 at 07:23
  • ... Consider large GUI applications written in C++. It's typical for such applications to spend a noticeable amount of time destroying resources *while the application is in the process of exiting*. In GC-based languages, this doesn't happen. If a process exits, it just ignores the requirement to delete any memory allocated, and can therefore do so faster. – Jules Jun 21 '16 at 07:29
  • @Jules many leading video games would match the definition of "large gui application written in C++" but I don't think they show slowdown neither during the game nor at exit. Do you have any objective study/measurements that would support your last claim ? – Christophe Jun 21 '16 at 17:00
5

This delete seems like a very low price to pay for a non-freezing environment. Placing all relevant delete keywords is a mechanical task. One could write a script that walks through the code and places a delete once no remaining branch uses a given object.

If you can write a script like that, congratulations. You're a better developer than I am. By far.

The only way to actually avoid memory leaks in practice is either very strict coding standards, with very strict rules about who owns an object and when it can and must be released, or tools like smart pointers that count references to an object and delete it when the last reference is gone.

Nicol Bolas
gnasher729
  • 4
    Yes, doing manual resource-management is always error-prone, luckily it's only neccessary in Java (for non-memory resources), as C++ has RAII. – Deduplicator Jun 17 '16 at 11:40
  • The only way you can actually avoid ANY resource leaks in practical cases is very strict coding standards with very strict rules who is the owner of an object... memory isn't the only resource, and in most cases, not the most critical one either. – curiousguy Jun 20 '16 at 05:41
  • 2
    If you consider RAII a 'very strict coding standard'. I consider it a 'pit of success', trivially easy to use. – Rob K Jun 20 '16 at 19:59
5

If you write correct C++ code with RAII you usually don't write any new or delete at all. The only new you write goes inside smart pointers, so you really never have to use delete.

Nikko
  • 1
    You shouldn't be using `new` at all, even with shared pointers -- you should be using `std::make_shared` instead. – Jules Jun 21 '16 at 06:55
  • 1
    or better, `make_unique`. It is really quite rare that you actually need shared ownership. – Marc Jun 22 '16 at 08:08
  • 1
    I think that is exactly what Nikko meant: The internal implementation of smart pointers call new/delete for you, so you virtually never write 'new' or 'delete' yourself (but it's still there hidden inside the template implementation of shared_ptr). – Adrian Maire Jun 20 '23 at 15:16
4

Making programmers' lives easier and preventing memory leaks is an important advantage of garbage collection, but it's not the only one. Another is preventing memory fragmentation. In C++, once you allocate an object with the new keyword, it stays at a fixed position in memory. This means that, as the application runs, you end up with gaps of free memory between allocated objects. Allocating memory in C++ must therefore be a more complicated process, as the allocator needs to find unallocated blocks of the given size that fit between the gaps.

Garbage collection takes care of this by taking all objects that aren't dead and shifting them in memory so that they form a contiguous block. If you find that garbage collection takes some time, that's probably because of this compaction, not the memory deallocation itself. The benefit is that memory allocation becomes almost as straightforward as shifting a pointer forward.

So in C++ deleting objects is fast but creating them can be slow, while in Java creating objects takes almost no time at all, but you need to do some housekeeping once in a while.

kamilk
  • 4
    Yes, allocating from the free store is slower in C++ than Java. Luckily, it's far less frequent, and you can easily use special-purpose allocators at your own discretion where you have any unusual patterns. Also, all resources are equal in C++, Java has the special cases. – Deduplicator Jun 17 '16 at 11:36
  • 3
    In 20 years of coding in C++, I've never seen memory fragmentation become a problem. Modern OSes with virtual memory management on processors with multiple cache levels have largely eliminated this as an issue. – Rob K Jun 20 '16 at 18:41
  • @RobK - "Modern OSes with virtual memory management on processors with multiple cache levels have largely eliminated [fragmentation] as an issue" - no they haven't. You might not notice it, but it still happens, it still causes (1) wasted memory and (2) less efficient use of caches, and the only viable solutions are either copying GC or *very* careful design of manual memory management to ensure it doesn't happen. – Jules Jun 21 '16 at 06:51
3

Java's main promises were

  1. Understandable C like syntax
  2. Write once, run anywhere
  3. We make your work easier - we even take care of garbage.

It seems like Java guarantees that garbage will be disposed of (though not necessarily in an efficient way). If you use C/C++ you have both freedom and responsibility: you can do better than Java's GC, or you can do much worse (skip delete altogether and have memory leak issues).

If you need code that "meets certain quality standards" and want to optimize the "price/quality ratio", use Java. If you are ready to invest extra resources (your experts' time) to improve the performance of a mission-critical application - use C.

Robert Harvey
Ju Shua
  • Well, all the promises can be can be seen as broken, easily. – Deduplicator Jun 17 '16 at 11:38
  • Except that the **only** garbage Java will collect is dynamically allocated memory. It does nothing about any other dynamically allocated resource. – Rob K Jun 20 '16 at 19:44
  • @RobK - that's not strictly true. The use of object finalizers *can* handle deallocation of other resources. It's widely discouraged because in most cases you don't want that (unlike memory, most other resource types are much more constrained or even unique and therefore precise deallocation is critical), but it can be done. Java also has the try-with-resources statement that can be used to automate management of other resources (providing similar benefits to RAII). – Jules Jun 21 '16 at 07:09
  • @Jules: One key difference is that in C++ objects self-destruct. In Java Closeable objects cannot close themselves. A Java developer writing "new Something(...)" or (worse) Something s = SomeFactory.create(...) must check to see if Something is Closeable. Changing a class from not Closeable to Closeable is the worst kind of breaking change, and so practically can never be done. Not so bad at the class level, but a serious problem when one is defining a Java interface. – kevin cline Dec 14 '18 at 02:04
2

The big difference that garbage collection makes isn't that you don't have to explicitly delete objects. The much bigger difference is that you don't have to copy objects.

This has effects that become pervasive in designing programs and interfaces in general. Let me give just one tiny example to show how far-reaching this is.

In Java, when you pop something from a stack, the value being popped is returned, so you get code like this:

WhateverType value = myStack.pop();

In Java, this is exception safe, because all we're really doing is copying a reference to an object, which is guaranteed to happen without an exception. The same is not true in C++, though. In C++, returning a value means (or at least can mean) copying that value, and with some types that copy could throw an exception. If the exception were thrown after the item had been removed from the stack, but before the copy reached the receiver, the item would be leaked. To prevent that, C++'s std::stack uses a somewhat clumsier approach in which retrieving the top item and removing it are two separate operations:

WhateverType value = myStack.top();
myStack.pop();

If the first statement throws an exception, the second won't be executed, so if an exception is thrown in copying, the item remains on the stack as if nothing had happened at all.

The obvious problem is that this is simply clumsy and (to people who haven't used it) unexpected.

The same is true in many other parts of C++: especially in generic code, exception safety pervades much of the design - and this is due in large part to the fact that most operations at least potentially involve copying objects (which might throw), where Java would just create new references to existing objects (which can't throw, so we don't have to worry about exceptions).

As for a simple script that inserts delete where needed: if you can statically determine when to delete items based on the structure of the source code, the code probably shouldn't have been using new and delete in the first place.

Let me give you an example of a program for which this almost certainly wouldn't be possible: a system for placing, tracking, billing (etc.) phone calls. When you dial your phone, it creates a "call" object. The call object keeps track of who you called, how long you talk to them, etc., to add appropriate records to the billing logs. The call object monitors the hardware status, so when you hang up, it destroys itself (using the widely discussed delete this;). Only, it's not really as trivial as "when you hang up". For example, you might initiate a conference call, connect two people, and hang up--but the call continues between those two parties even after you hang up (but the billing may change).

Jerry Coffin
  • "If the exception is thrown after the item has been removed from the stack, but before the copy gets to the receiver, the item has leaked" do you have any references for this? Because this sounds extremely weird although I'm no c++ expert. – Esben Skov Pedersen Jun 20 '16 at 19:11
  • @EsbenSkovPedersen: [GoTW #8](http://www.gotw.ca/gotw/008.htm) would be a reasonable starting point. If you have access to it, *Exceptional C++* has quite a bit more. Note that both expect at least *some* pre-existing knowledge of C++. – Jerry Coffin Jun 20 '16 at 19:30
  • That seems straight forward enough. It is actually this sentence which confused me "In C++, returning a value means (or at least can mean) copying that value, and with some types that could throw an exception" Is this copy on the heap or stack? – Esben Skov Pedersen Jun 20 '16 at 19:42
  • You absolutely **don't** have to copy objects in C++, but you can if you want to, unlike the garbage collected languages (Java, C#) which make it a PITA to copy an object when you want to. 90% of the objects I create should be destroyed and their resources freed when they go out of scope. To force all objects into dynamic storage because 10% of them need to be seem foolish at best. – Rob K Jun 20 '16 at 19:53
  • @RobK: In theory you're right: copying in C++ isn't really necessary (but in practice...it's hard enough to avoid that almost nobody ever really bothers, though it is pretty routine to work at minimizing copying of large objects and such). – Jerry Coffin Jun 20 '16 at 20:12
  • @EsbenSkovPedersen: On the stack (though it's pretty common for the objects on the stack to own resources on the heap). Also note that compilers pretty routinely elide most of the actual copying involved, but when you're designing code, you generally have to assume that at least some copying could happen. – Jerry Coffin Jun 20 '16 at 20:14
  • 2
    This isn't really a difference caused by the use of garbage collection, though, but by Java's simplified "all objects are references" philosophy. Consider C# as a counter-example: it *is* a garbage-collected language, but also has value objects ("structs" in the local terminology, which are different from C++ structs) that have copying semantics. C# avoids the problem by (1) having a clear separation between reference and value types and (2) always copying value types using byte-for-byte copying, not user code, thus preventing exceptions during copying. – Jules Jun 21 '16 at 07:03
2

Something that I don't think has been mentioned here is that there are efficiencies that come from garbage collection. In the most commonly used Java collectors, the main place objects are allocated is an area reserved for a copying collector. When things start, this space is empty. As objects are created, they are allocated next to each other in the big open space until one can't fit in the remaining contiguous space. The GC then kicks in and looks for any objects in this space that are still live. It copies the live objects to another area and packs them together (i.e. no fragmentation). The old space is then considered clean. The VM then continues allocating objects tightly together and repeats this process as needed.

There are two benefits to this. The first is that no time is spent deleting unused objects. Once the live objects are copied, the slate is considered clean and the dead objects are simply forgotten. In many applications most objects don't live very long, so the cost of copying the live set is cheap compared to the savings gained by not having to deal with the dead set.

The second benefit is that when a new object is allocated, there's no need to search for a contiguous free area: the VM always knows where the next object will be placed (caveat: this is simplified and ignores concurrency).

This kind of collection and allocation is very fast; from an overall throughput perspective it's hard to beat in many scenarios. The problem is that some objects live longer than you want to keep copying them around, which ultimately means the collector may need to pause for a significant amount of time every once in a while, and when that will happen can be unpredictable. Depending on the length of the pause and the kind of application, this may or may not be a problem. There is at least one pauseless collector. I expect there is some tradeoff of lower efficiency to achieve the pauseless behaviour, but one of the people who founded that company (Gil Tene) is an uber-expert on GC, and his presentations are a great source of information about it.

JimmyJames
1

Or is it more like "disposing of objects in C++ is really tricky - I spend 20% of my time on it and yet memory leaks are commonplace"?

In my personal experience with C++ and even C, memory leaks have never been a huge struggle to avoid. With a sane testing procedure and Valgrind, for example, any physical leak caused by a call to new/malloc without a corresponding delete/free is usually quickly detected and fixed. To be fair, some large C or old-school C++ codebases might well have obscure edge cases that physically leak a few bytes here and there, because the failure to delete/free on that path flew under the radar of testing.

Yet as far as practical observations go, the leakiest applications I encounter (as in ones that consume more and more memory the longer you run them, even though the amount of data they're working with is not growing) are typically not written in C or C++. I don't find things like the Linux kernel or Unreal Engine, or even the native code used to implement Java, among the leaky software I encounter.

The most prominent kind of leaky software I tend to encounter is things like Flash applets, such as Flash games, even though they use garbage collection. And that is not a fair comparison from which to deduce anything, since many Flash applications are written by budding developers who likely lack sound engineering principles and testing procedures (and likewise I'm certain there are skilled professionals out there working with GC who do not struggle with leaky software), but I would have a lot to say to anyone who thinks GC prevents leaky software from being written.

Dangling Pointers

Now, coming from my particular domain and experience, and as someone mostly using C and C++ (and I expect the benefits of GC vary depending on our experiences and needs), the most immediate thing GC solves for me is not practical memory leaks but dangling pointer access, and that can literally be a lifesaver in mission-critical scenarios.

Unfortunately in many of the cases where GC solves what would otherwise be a dangling pointer access, it replaces the same sort of programmer mistake with a logical memory leak.

If you imagine that Flash game written by a budding coder, he might store references to game elements in multiple data structures, making them share ownership of these game resources. Unfortunately, let's say he makes a mistake and forgets to remove the game elements from one of those data structures upon advancing to the next stage, preventing them from being freed until the whole game shuts down. However, the game still appears to work fine because the elements aren't being drawn and don't affect user interaction. Nevertheless, the game starts using more and more memory while the frame rate slides towards a slide show, as hidden processing keeps looping through this hidden collection of elements (which has now grown explosively in size). This is the sort of issue I encounter frequently in such Flash games.

  • I have encountered people saying this does not count as a "memory leak" because the memory is still freed when the application closes, and that it might instead be called a 'space leak' or something to that effect. While such a distinction can be useful for identifying and talking about problems, I don't find it so useful in this context if it suggests the problem is less serious than a "memory leak", given the practical goal of ensuring our software does not hog ridiculous amounts of memory the longer we run it (unless we're talking about obscure operating systems that don't free a process's memory when it terminates). It is no comfort to users, upset that the software is steadily working its way towards unusability, to correct their use of terminology.

Now let's say the same budding developer wrote the game in C++. In that case there would typically be only one central data structure in the game that "owns" the memory, while others point to that memory. If he makes the same sort of mistake, chances are that, upon advancing to the next stage, the game will crash from accessing dangling pointers (or worse, do something other than crash).

This is the most immediate kind of trade-off I tend to encounter in my domain between GC and no GC. And I actually don't care much for GC in my domain, which isn't very mission-critical, because the biggest struggles I ever had with leaky software involved haphazard use of GC by a former team, causing the sort of leaks described above.

In my particular domain I actually prefer the software crashing or glitching out in many cases, because that is at least much easier to detect than trying to trace down why the software is mysteriously consuming explosive amounts of memory after running for half an hour while all of our unit and integration tests pass without complaint (not even from Valgrind, since the memory is freed by the GC at shutdown). Yet that's not a slam on GC on my part, or a claim that it's useless; it simply hasn't been any sort of silver bullet, not even close, in the teams I worked with against leaky software (to the contrary, I had the opposite experience, with the one codebase utilizing GC being the leakiest I ever encountered). To be fair, many members of that team didn't even know what weak references were, so they were sharing ownership of everything left and right and frequently making the sort of mistake I described above with the budding game developer.

Shared Ownership and Psychology

The problem I find with garbage collection that can make it so prone to "memory leaks" (and I'll insist on calling them that, since a "space leak" behaves exactly the same way from the user's perspective) in the hands of those who do not use it with care relates to "human tendencies" to some degree in my experience. The problem with that team and the leakiest codebase I ever encountered was that they seemed to be under the impression that GC would allow them to stop thinking about who owns resources.

In our case we had so many objects referencing each other. Models would reference materials along with the material library and shader system. Materials would reference textures along with the texture library and certain shaders. Cameras would store references to all sorts of scene entities that should be excluded from rendering. The list seemed to go on indefinitely. That meant just about any hefty resource in the system was owned, and had its lifetime extended, in 10+ other places in the application state at once, and that was very, very prone to a kind of human error which translates into a leak (and not a minor one; I'm talking gigabytes in minutes with serious usability issues). Conceptually none of these resources needed shared ownership; they all conceptually had one owner, but the use of GC here tempted developers to share ownership all over the place instead of properly thinking about the distinction between, say, strong, weak, and phantom references.

If we stop thinking about who owns what memory, and happily just store lifetime-extending references to objects all over the place without thinking about this, then the software will not crash as a result of dangling pointers but it will almost certainly, under such a careless mindset, start leaking memory like crazy in ways that are very difficult to trace down and will elude tests.

If there's one practical benefit to the dangling pointer in my domain, it is that it causes very nasty glitches and crashes. And that tends to at least give the developers the incentive, if they want to ship something reliable, to start thinking about resource management and doing the proper things needed to remove all additional references/pointers to an object which is no longer conceptually needed.

Application Resource Management

Proper resource management is the name of the game if we're talking about avoiding leaks in long-lived applications with persistent state being stored where leakiness would pose serious frame rate and usability issues. And correctly managing resources here is no less difficult with or without GC. The work is no less manual either way to remove the appropriate references to objects no longer needed whether they are pointers or lifetime-extending references.

That's the challenge in my domain, not forgetting to delete what we new (unless we're talking amateur hour with shoddy testing, practices, and tools). And it requires thought and care whether we're using GC or not.

Multithreading

The one other area where I find GC very useful, if it could be used very cautiously in my domain, is simplifying resource management in multithreaded contexts. If we are careful not to store lifetime-extending references to a resource in more than one place in the application state, then the lifetime-extending nature of GC references can be extremely useful as a way for a thread to keep a resource it is accessing alive for just the short duration it needs to finish processing it.

I do think very careful use of GC this way could yield very correct software that isn't leaky, while simultaneously simplifying multithreading.

There are ways around this absent GC, though. In my case we unify the software's scene entity representation, and threads temporarily extend the lifetimes of scene resources for brief durations in a rather generalized fashion prior to a cleanup phase. This might smell a bit like GC, but the difference is that there is no "shared ownership" involved, only a uniform scene-processing design in which threads defer the destruction of those resources. Still, for such multithreading cases it would be much simpler to just rely on GC, if it could be used very carefully by conscientious developers who are careful to use weak references in the relevant persistent areas.

C++

Finally:

In C++ I have to call delete to dispose a created object at the end of it's life cycle.

In Modern C++, this is generally not something you should be doing manually. It's not even so much about forgetting to do it. When you involve exception handling into the picture, then even if you wrote a corresponding delete below some call to new, something could throw in the middle and never reach the delete call if you don't rely on automated destructor calls inserted by the compiler to do this for you.

With C++ you practically need to avoid such manual resource cleanup (and that includes avoiding manual calls to unlock a mutex outside of a destructor, not just memory deallocation), unless you're working in, say, an embedded context with exceptions off and special libraries deliberately programmed not to throw. Exception handling pretty much demands it, so for the most part all resource cleanup should be automated through destructors.