70

Let's consider a fictional program that builds a linked list on the heap, and at the end of the program there is a loop that frees all the nodes and then exits. For this case, let's say the linked list is just 500K of memory, and no special memory management is required.

  • Is that a waste of time, because the OS will do that anyway?
  • Will there be a different behavior later?
  • Is that different according to the OS version?

I'm mainly interested in UNIX-based systems, but any information will be appreciated. I had my first lesson in an OS course today, and now I'm wondering about this.

Edit: Since a lot of people here are concerned about side effects and general 'good programming practice': you are right! I agree 100% with your statements. But my question is purely hypothetical; I want to know how the OS manages this. So please leave out things like 'freeing all memory helps you find other bugs'.
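For concreteness, here is a minimal sketch of the kind of program I mean (the names are only illustrative):

#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

int main(void)
{
    struct node *head = NULL;

    /* Build the list on the heap (roughly 500K in total). */
    for (int i = 0; i < (int)(500 * 1024 / sizeof(struct node)); i++) {
        struct node *n = malloc(sizeof *n);
        if (n == NULL)
            return 1;
        n->value = i;
        n->next = head;
        head = n;
    }

    /* ... use the list ... */

    /* The loop in question: is this a waste of time? */
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}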

Ramzi Kahil
  • 1,089
  • 1
  • 10
  • 20
  • 25
    I had a program once that re-sorted a linked list 7 times, and then tried to deallocate it. The sorting took 2 hours, but the deallocation took 4 hours because all the sorting had blown any locality of reference. I sped up my program 200% by leaking the memory :D – MSalters Mar 19 '12 at 15:42
  • 1
  • @MSalters: Assuming C++ or C, you could also have allocated from a (per-list) pool. That would allow you to deallocate all of the nodes in one go, regardless of how they might refer to one another (a sketch follows these comments). – Jon Purdy Mar 19 '12 at 18:58
  • 4
    Raymond Chen [wrote about this recently](http://blogs.msdn.com/b/oldnewthing/archive/2012/01/05/10253268.aspx). Pretty sure this is Windows specific, but still relevant. – Roman Mar 19 '12 at 19:50
  • 4
    @JonPurdy: I considered that, then decided that writing a pool allocator was a lot harder than calling `TerminateProcess`. Process memory already is a pool. – MSalters Mar 20 '12 at 08:31
  • @MSalters: Oh, absolutely. But if you *weren’t* killing the process and needed to deallocate such a list, a pool would be the way to go. – Jon Purdy Mar 20 '12 at 12:12
  • It depends on what the resources are. If it is just system memory, then yes, it is (*technically*) a waste of time. If you are cleaning up stuff like files or whatever, then the OS won't help you in all cases. In the file case, the OS will close your file handles, but it won't automatically ensure your files don't end up with corrupt data. Only your program's logic can do that. – Thomas Eding Jan 19 '15 at 23:02
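As a sketch of the per-list pool Jon Purdy mentions above (hypothetical names, a single fixed chunk size, and no per-node frees; the nodes' links no longer matter because the whole pool is released in one pass):

#include <stdlib.h>

typedef struct chunk {
    struct chunk *next;
    size_t used;
    char data[64 * 1024];
} chunk_t;

typedef struct pool {
    chunk_t *chunks;
} pool_t;

/* Carve n bytes out of the current chunk, starting a new chunk when full. */
void *pool_alloc(pool_t *p, size_t n)
{
    n = (n + 15) & ~(size_t)15;          /* keep allocations aligned */
    chunk_t *c = p->chunks;
    if (c == NULL || c->used + n > sizeof c->data) {
        c = malloc(sizeof *c);
        if (c == NULL)
            return NULL;
        c->used = 0;
        c->next = p->chunks;
        p->chunks = c;
    }
    void *mem = c->data + c->used;
    c->used += n;
    return mem;
}

/* Free every node in the list at once, regardless of how nodes refer to
 * one another: only the chunks are handed back to the allocator. */
void pool_destroy(pool_t *p)
{
    while (p->chunks != NULL) {
        chunk_t *next = p->chunks->next;
        free(p->chunks);
        p->chunks = next;
    }
}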

16 Answers

70

My main problem with your approach is that a leak detection tool (like Valgrind) will report it and you will start ignoring it. Then, some day a real leak may show up, and you'll never notice it because of all the noise.

Nemanja Trifunovic
  • 6,815
  • 1
  • 26
  • 34
  • I guess then it would make sense to free it all manually in debug mode, but skip the freeing in release mode. – Vilx- Mar 19 '12 at 15:31
  • 40
    @Vilx: Any difference in execution between debug and release mode can cause you extremely annoying problems, should something go wrong. – David Thornley Mar 19 '12 at 15:34
  • @DavidThornley - Also true, no argument from me there. I guess it's a tradeoff then - fast program vs safe program. – Vilx- Mar 19 '12 at 15:55
  • But Valgrind is called like this: `valgrind --leak-check=yes myprog arg1 arg2`, so Valgrind is the parent process of the tested program. In this case all the info is still in the process table, and Valgrind should clear it from there and can check it beforehand. – Ramzi Kahil Mar 19 '12 at 16:50
  • 1
    If there is a leak and you never notice it, then it's not a problem. The goal is not to create a perfect program, the goal is to create a program that the end user will like. – Andreas Bonini Mar 19 '12 at 19:58
  • 1
    @fast program vs safe program: does deallocation really take that much time? I'd rather think it takes a very small portion of a program's running time. – Giorgio Mar 21 '12 at 19:48
  • @Nemanja Trifunovic You should read [my answer](http://programmers.stackexchange.com/a/141014/25936) below. Turns out that Valgrind is sand-boxing the execution, and your general answer is wrong. Although it seemed that you are right at first. – Ramzi Kahil Mar 22 '12 at 23:25
  • @ThomasBonini There can be memory leaks that make the program noticeably worse to use for the user, yet you don't notice it as a developer for a long time. – nog642 Apr 20 '22 at 22:24
42

Once I had to implement an algorithm using deques which were allocated dynamically. I was also wondering whether I needed to deallocate all the allocated data at exit.

I decided to implement the deallocation anyway and found out that the program crashed during deallocation. By analysing the crash, I found an error in the implementation of the main data structure and algorithms (a memory leak).

The lessons I learnt were:

  1. Always implement deallocation for the data you allocate; this makes your code more reusable in case you want to embed it in a larger system.
  2. Deallocation can serve as an additional check that your allocated data is in a consistent state when it is released (see the sketch below).

Just my 2 cents.

EDIT

Small clarification: Of course all memory allocated to a process gets released when the process is terminated, so if the only requirement is to release the allocated memory, doing it explicitly is indeed a waste of time.
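As an illustration of lesson 2, the deallocation pass can double as a consistency check; a minimal sketch, assuming the program keeps a count of the nodes it allocated:

#include <assert.h>
#include <stdlib.h>

struct node {
    struct node *next;
    /* payload ... */
};

/* Walk the list and free every node, then verify that the number of
 * nodes freed matches the number the program believes it allocated.
 * A mismatch hints at a corrupted link, a lost node, or a leak. */
void list_destroy(struct node *head, size_t expected_count)
{
    size_t freed = 0;
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
        freed++;
    }
    assert(freed == expected_count);
}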

Giorgio
  • 19,486
  • 16
  • 84
  • 135
18

OK, I just got an email back from my instructor, with a very good and reasonable answer. Here it is:

Hi Ramzi,

Thanks for the link to the interesting messages thread which you started.

As per our discussion, malloc operates on the process' virtual memory. So, when the process dies, its virtual address space "disappears", and any physical memory mapped to it is freed. Hence, disregarding "good software engineering practices" and such, dynamic memory de-allocation just before exiting a process is indeed a waste of time.

(Needless to say, this is not the case when a single thread terminates but other threads of that process keep executing.)

So according to this:

  • It is a waste of time
  • It will have no later effects
  • It is independent of the OS version.
Ramzi Kahil
  • 1,089
  • 1
  • 10
  • 20
  • 7
    This is very bad advice, as it completely disregards the points made above: that intentionally leaking memory can make it harder to find unexpected memory leaks and correctness problems. – Mason Wheeler Apr 21 '12 at 12:58
  • 3
    Thanks for that, but as stated in the question, I was interested in the concept for an OS point of view. I agree with all the people that stated that this is bad practice. – Ramzi Kahil Apr 21 '12 at 21:28
  • 5
    @Martin If you are only interested in an OS point of view I think it is off topic. Since you got the answer that you were interested in, maybe you should have selected an answer that was more appropriate for the platform. – scarfridge Apr 23 '12 at 19:16
14

Always free your resources. Freeing all your resources is important because it's the only way of being able to effectively detect leaks. For the project I'm working on, I've even got wrappers around malloc/realloc/free to insert canaries, track statistics, etc., simply to detect problems sooner.
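(A minimal sketch of the kind of wrapper I mean; the names and layout are just illustrative, and the double-free check is only a debug heuristic:)

#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CANARY 0xDEADBEEFu

typedef union {
    struct {
        size_t size;
        unsigned canary;
    } h;
    max_align_t align;   /* keep the block returned to the user max-aligned */
} header_t;

/* Prefix each allocation with its size and a canary so the matching
 * free can detect header corruption (e.g. an underrun) early. */
void *debug_malloc(size_t n)
{
    header_t *p = malloc(sizeof *p + n);
    if (p == NULL)
        return NULL;
    p->h.size = n;
    p->h.canary = CANARY;
    return p + 1;
}

void debug_free(void *mem)
{
    if (mem == NULL)
        return;
    header_t *p = (header_t *)mem - 1;
    assert(p->h.canary == CANARY);   /* corrupted header or double free */
    p->h.canary = 0;                 /* best-effort double-free tripwire */
    free(p);
}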

However, except for leak detection, for resources that are freed at exit (not before) it is a waste of time. The OS must be able to free them anyway (e.g. in case the process crashes, or even just to free the RAM used for the process's code); and the OS should be able to free them faster than you can from user space, because the kernel works "wholesale" rather than "piecemeal", and there are fewer transitions between kernel and process involved.

Fortunately you don't have to choose one way or another. If you want to make sure your code is correct (and you do), but you also want the best possible performance, then do something like:

/* Define FAST_EXIT (e.g. in release builds) to skip the frees at exit. */
#ifndef FAST_EXIT
    free(stuff);
#endif
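Then a release build compiled with something like `cc -DFAST_EXIT ...` skips the cleanup entirely, while debug builds (and Valgrind runs) still exercise the full deallocation path.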
Brendan
  • 3,895
  • 21
  • 21
8

The cleanup routine may be useful if you do it periodically to regain space, or to prove that you are able to reclaim exactly as many nodes as you allocated, to verify that there are no memory leaks. But as housekeeping immediately before the process dies, it makes no sense, since the OS regains control of that entire arena of memory at that instant.

Kilian Foth
  • 107,706
  • 45
  • 295
  • 310
  • This sounds like a description of a garbage collection scheme and a list where individual nodes become irrelevant over time whereas the question clearly stated that the list was static and built once at startup and has no expiration policy. – binki Aug 21 '16 at 17:45
4

Will there be a different behavior later?

There is a critical question related to this one that you need to ask:

Will anyone ever use this code ever again for any reason?

If the answer could ever be yes, whether the code is used alone or as part of a larger system, you have created a monumental memory-leaking landmine for the next person to step on.

It is very hard to remember during your coursework that you are learning how to make software that other people are going to use, including future-you (rather than just solving the homework problem of the day). This means that you need to abide by certain basic programming behaviors, including: give back the resources when you are done with them.

Bob Cross
  • 539
  • 4
  • 10
  • I was about to answer the same thing. I once had a case where the code of an executable command-line tool was converted to be callable through functions in a library. And of course that tool relied on the OS to free resources, so the library leaked memory. – philfr Mar 23 '12 at 22:47
  • @philfr, that's exactly what I mean. What's worse is when you are the one who converts your old code to general purpose use and you end up with no one to blame but yourself.... I hate that. Stupid me. – Bob Cross Mar 24 '12 at 00:15
2

If you implement your own "linked list" data type, not writing a cleanup routine/destructor will give you more headaches in the long run than you might think.

Without cleanup code, reusing what you wrote will be much harder, since your type cannot be used without taking special care (e.g. never creating instances in a function or loop, but preferably exclusively in main).

Do yourself and everybody else a favor and just write your list_cleanup or ~List.
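A minimal sketch of such a routine, assuming a typical singly linked node type (it takes a pointer-to-pointer so the caller's handle is reset, which is what makes the type safe to reuse):

#include <stdlib.h>

struct node {
    struct node *next;
    /* payload ... */
};

void list_cleanup(struct node **head)
{
    while (*head != NULL) {
        struct node *next = (*head)->next;
        free(*head);
        *head = next;
    }
    /* *head is now NULL, so the caller cannot reuse a dangling pointer. */
}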

Benjamin Bannier
  • 1,212
  • 8
  • 15
1

On multi-user systems, the OS will always free such resources, because it would be a security risk if it didn't. Your memory contents might end up in someone else's process!

MSalters
  • 8,692
  • 1
  • 20
  • 32
  • 6
    In some languages, freeing resources merely means releasing the pointers to the memory areas... the old data will still be present in memory unless you scrub it first. – Robert Harvey Mar 19 '12 at 15:37
  • 4
    @RobertHarvey: True, but that doesn't matter if you then exit the process. The OS doesn't care which language you used (it's all assembly to the OS anyway) and will scrub the memory regardless, just to be sure. – MSalters Mar 19 '12 at 15:40
1

Cleanup routines can have bugs just like any other code. A sad sight is a program that does what you want except that it hangs or crashes at exit. General practice is to have code clean up after itself, but memory cleanup at process exit is a case where you don't have to.

Sonia
  • 231
  • 1
  • 3
1

Almost always, yes. Free shared-memory semaphores and UDP server resources yourself (the OS doesn't know how to notify connected UDP servers that you're done).
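For example, a POSIX named semaphore is a kernel-persistent resource: it outlives the process unless it is explicitly unlinked. A minimal sketch (the semaphore name is just an example):

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    /* Create (or open) a named semaphore that other processes can share. */
    sem_t *sem = sem_open("/example_sem", O_CREAT, 0644, 1);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }

    /* ... use the semaphore ... */

    sem_close(sem);              /* releases this process's handle */
    sem_unlink("/example_sem");  /* without this, the semaphore persists
                                    in the system after the process exits */
    return 0;
}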

Joshua
  • 1,438
  • 11
  • 11
1

I always clean up resources on normal exit. It's a safety check that my resource allocation/deallocation calls are done right. This check has alerted me to hidden bugs on more than one occasion. It's like balancing the books of a business at the end of the day.

There may be edge cases such as the one @MSalters described in his comment, but even that would generally start me looking at customized memory management, rather than just letting the OS clean up and hoping for the best.

Charles E. Grant
  • 16,612
  • 1
  • 46
  • 73
0

If you are 100% sure no other process is referencing your linked list and the nodes therein, OK.

100% is a pretty high number, though. Was your list referenced by some persistence code that's being pooled? Does a UI component still have a handle on it? Anything else?

Memory leaks generally don't happen on purpose.

FrustratedWithFormsDesigner
  • 46,105
  • 7
  • 126
  • 176
Matthew Flynn
  • 13,345
  • 2
  • 38
  • 57
  • 2
    Memory isn't shared by multiple *processes*. Well, it can, but you have to go out of your way to even allocate it. –  Mar 19 '12 at 15:25
  • Won't fork() share my memory until copy-on-write? – JBRWilkinson Mar 19 '12 at 16:39
  • 4
    @JBRWilkinson Copy-On-Write is an optimization, and it's only valid because it has no semantic impact. With or without COW, parent and child process do not affect each other's memory after `fork()`. –  Mar 19 '12 at 17:44
0

The problem is that today it's a linked-list node, and tomorrow, it's a resource the OS can't clean up by itself. Is it fast and safe to leave the OS to clean up heap memory? Yes. But does that make it a good idea? Not necessarily.

DeadMG
  • 36,794
  • 8
  • 70
  • 139
-1

Some C++ GUI libraries leak resources like this. This is not a value judgement, but leaking resources and letting the OS reclaim them is a somewhat common occurrence in "real code."

It's always good practice to clean up, though; you never know when you'll want to integrate that tool with something else, and then the work is already done (instead of having to go back and remember all the spots where you intentionally leaked).

In one case we actually found a bug because the deallocation was crashing: some other memory bug had overwritten the header that `delete` uses. So, freeing all your resources can sometimes help you find bugs in other areas (e.g. an overrun in the previous block).

anon
  • 1,474
  • 8
  • 8
-1

One thing to think about:

If your code is in a DLL, then you should definitely clean up; if you assume that process exit will free the resources, your DLL will not be reloadable.

sylvanaar
  • 2,295
  • 1
  • 19
  • 26
  • 1
    That was not my question. And if you read the [answer](http://programmers.stackexchange.com/questions/140483/is-it-a-waste-of-time-to-free-resources-before-i-exit-a-process/141014#141014) I got from my instructor, you should see that it doesn't matter how it's stored. The only important thing is the fact that it's a process. – Ramzi Kahil Apr 04 '12 at 15:48
  • 1
    Hi Martin. I am just adding what I think is a useful bit of info based upon a problem we just ran into. – sylvanaar Apr 04 '12 at 15:52
  • Well then, explain a little more. Why is there a difference if the code is in a DLL or not? The OS is treating it as a process anyway, isn't it? – Ramzi Kahil Apr 04 '12 at 15:55
  • 1
    Yes, it is. However, if you later want to use that DLL in a process that needs to load and unload it, that will not be possible, since the DLL doesn't clean up its own resources. In our case it was a Tomcat servlet: we couldn't have a servlet go down and then come back up, since the DLL never cleaned up after itself. – sylvanaar Apr 04 '12 at 16:45
-2

Managing memory is the best practice; deciding whether to release it in a given context is an optimization!

Now the ball is in your court!

sarat
  • 1,121
  • 2
  • 11
  • 19