Both of the programming languages you mention (as well as many other programming languages) provide Automatic Memory Management. What this means is that the programming language is responsible for allocating and de-allocating memory, managing free memory, and so on.
So, that solves the problem for the first kind of resource you mentioned: memory. Before you run out of memory, the programming language will de-allocate some unreachable objects (assuming there are any), thus freeing memory again.
For other kinds of resources, there are essentially three different strategies which are employed, and in fact, many programming languages employ at least two of them.
The first strategy is library-based and relies on a programming language feature typically called finalizers or destructors. Finalizers are a piece of code that gets executed when an object is de-allocated. Usually, programming languages with automatic memory management will not allow you to call the OS kernel directly; rather, there will be some sort of proxy object which wraps and represents resources, such as `IO` objects representing file descriptors, `Socket` objects representing network sockets, and so on.
The library developers will make sure that any object representing a resource will have a finalizer which releases that resource. Therefore, whenever an object representing a resource gets de-allocated, the corresponding resource gets released.
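As a sketch of what such a resource wrapper with a finalizer might look like, here is a hypothetical `FileHandle` class in Java, using the `java.lang.ref.Cleaner` API (the class name and the `nativeClose` call are made up for illustration):

```java
import java.lang.ref.Cleaner;

// Hypothetical proxy object wrapping an OS file descriptor.
class FileHandle {
    private static final Cleaner cleaner = Cleaner.create();

    // The cleanup action must not capture the FileHandle itself,
    // otherwise the wrapper could never become unreachable.
    private record Descriptor(int fd) implements Runnable {
        public void run() {
            // Release the underlying resource, e.g. via a (made-up)
            // native call: nativeClose(fd);
            System.out.println("Releasing file descriptor " + fd);
        }
    }

    FileHandle(int fd) {
        // Run Descriptor#run some time after this object becomes unreachable.
        cleaner.register(this, new Descriptor(fd));
    }
}
```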
The main problem with this strategy is that most programming languages with automatic memory management make no guarantees about when memory will be de-allocated, or even whether it will be de-allocated at all. It is usually more efficient to "waste" a bit of memory and batch the de-allocation operations together at a point where the system is otherwise idle. Therefore, on a system with a lot of memory but only a small number of file descriptors, for example, you could run out of file descriptors before you run out of memory (and it is only running out of memory that would trigger a de-allocation, which would trigger execution of the finalizers, which would then release file descriptors). For that reason, this strategy is typically only employed as a fallback, and one of the two other strategies below is used as well.
However, there are some programming languages where memory is guaranteed to be de-allocated as soon as it is no longer used, e.g. Swift, whose Automatic Reference Counting releases an object as soon as the last reference to it goes away.
The second strategy is also library-based, and is to provide helper methods that make it easy to write code that correctly handles the situation described in your question. Typically, these helper methods require programming language support for first-class subroutines and higher-order subroutines, i.e. subroutines that can be passed as arguments and subroutines that can take subroutines as arguments. For example, in Ruby, there is the `IO::open` method, whose implementation looks a little bit like this (massively simplified):
```ruby
class IO
  def self.open(file_descriptor)
    file = new(file_descriptor)
    yield file   # call the supplied block with `file` as argument
  ensure         # regardless of whether or not an exception was raised
    file.close   # close the file descriptor
  end
end
```
And you would use it like this:
```ruby
IO.open(some_file_descriptor) do |f|
  f.puts("Hello")
  something_which_might_raise_an_exception
  f.puts("World")
end
```
Regardless of whether the `IO::open` method was exited because the block completed normally or because something in the block raised an exception, the `ensure` part of the method will be executed and thus the file descriptor will be closed.
You could do the same in Python or Java; in Java, it might look like this:
```java
import java.util.function.Consumer;

class IO {
    IO(int fileDescriptor) { /* wrap the OS file descriptor */ }
    void println(String line) { /* write to the file descriptor */ }
    void close() { /* release the file descriptor */ }

    public static void open(int fileDescriptor, Consumer<IO> action) {
        var file = new IO(fileDescriptor); // created before `try` so it is in scope in `finally`
        try {
            action.accept(file); // call the supplied lambda with `file` as argument
        } finally {
            file.close(); // runs regardless of whether an exception was thrown
        }
    }
}
```
And you would use it like this:
```java
IO.open(someFileDescriptor, f -> {
    f.println("Hello");
    somethingWhichMightThrowAnException();
    f.println("World");
});
```
However, the Python and Java designers decided not to include such helper methods in the standard library.
The third strategy is to add specialized language features that essentially do the same as the above. Python has the `with` statement, which works together with the Context Manager protocol; Java has the `try`-with-resources statement, which works together with the `AutoCloseable` interface; and C# has the `using` statement, which works together with the `IDisposable` and `IAsyncDisposable` interfaces.
In Python, using this looks a bit like this:

```python
with open("hello.txt", "w") as f:
    f.write("Hello")
    something_which_might_raise_an_exception()
    f.write("World")
```
Both of these latter strategies have the problem that there is nothing which forces the programmer to use the feature. For example, in Ruby, there is a second overload of `IO::open` which does not take a block but instead returns an `IO` object wrapping an open file descriptor. There is nothing stopping me from never calling `close` on that object. If and when it gets automatically de-allocated, its finalizer will release the file descriptor, but until then, the file descriptor is effectively leaked.
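The same pitfall is just as easy to reproduce in Java, since nothing in the language forces you to use the `try`-with-resources form (again a sketch using `java.nio.file.Files`):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class Leak {
    void leak() throws IOException {
        // This writer is never closed, so the underlying file descriptor
        // stays open until the object is eventually garbage-collected, if ever.
        BufferedWriter f = Files.newBufferedWriter(Path.of("leaky.txt"));
        f.write("Hello");
    }
}
```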
However, that is no different in C++: if I write my own `File` class and don't call `close` in the destructor, there's nothing in the language which stops me.
A completely different approach can be taken in programming languages with a powerful and expressive type system. In such languages, it is possible to express the lifetime rules of resources inside the type system (for example, with linear types, which guarantee that a value is consumed exactly once) and thus ensure that code which can leak resources gets rejected by the type checker. I believe Idris employs this strategy, for example.
Some languages have a separate Effect System alongside the type system; this, too, can be used to manage resources.
Last but not least, there are languages like Smalltalk and Common Lisp, where exceptions are resumable, i.e. they do not unwind the stack in the first place. You can fix the problem and continue at the place where the exception occurred.