Generally, immutable types created in languages that don't revolve around immutability tend to cost more developer time to create, and potentially to use, if they require some "builder" type of object to express desired changes (this doesn't mean the overall work will be more, but there is an upfront cost in these cases). And regardless of how easy the language makes it to create immutable types, they'll tend to require some processing and memory overhead for non-trivial data types.
Making Functions Devoid of Side Effects
If you are working in languages that don't revolve around immutability, then I think the pragmatic approach is not to seek to make every single data type immutable. A potentially far more productive mindset, one that gives you many of the same benefits, is to focus on maximizing the number of functions in your system that cause zero side effects.
As a simple example, if you have a function which causes a side effect like this:
// Make 'x' the absolute value of itself.
void make_abs(int& x);
Then we don't need an immutable integer data type that forbids operators like post-initialization assignment to make that function avoid side effects. We can simply do this:
// Returns the absolute value of 'x'.
int abs(int x);
Now the function doesn't mess with x or anything outside of its scope, and in this trivial case we might even have shaved some cycles by avoiding any overhead associated with the indirection/aliasing. At the very least, the second version shouldn't be more computationally expensive than the first.
Things That Are Expensive to Copy in Full
Of course most cases aren't this trivial if we want to avoid making a function cause side effects. A complex real-world use case might be more like this:
// Transforms the vertices of the specified mesh by
// the specified transformation matrix.
void transform(Mesh& mesh, Matrix4f matrix);
At which point the mesh might require a couple hundred megabytes of memory, with over a hundred thousand polygons, even more vertices and edges, multiple texture maps, morph targets, etc. It'd be really expensive to copy that whole mesh just to make this transform function free of side effects, like so:
// Returns a new version of the mesh whose vertices have been
// transformed by the specified transformation matrix.
Mesh transform(Mesh mesh, Matrix4f matrix);
And it's in these cases, where copying something in its entirety would normally be an epic overhead, that I've found it useful to turn Mesh into a persistent data structure and an immutable type, with the analogical "builder" to create modified versions of it, so that it can simply shallow copy and reference count the parts which aren't unique. It's all with the focus of being able to write mesh functions which are free of side effects.
Persistent Data Structures
And in these cases where copying everything is so incredibly expensive, I found the effort of designing an immutable Mesh to really pay off, even though it had a slightly steep upfront cost, because it didn't just simplify thread safety. It also simplified non-destructive editing (allowing the user to layer mesh operations without modifying their original copy), undo systems (the undo system can just store an immutable copy of the mesh prior to the changes made by an operation without blowing up memory use), and exception safety (if an exception occurs in the above function, it doesn't have to roll back and undo its side effects, since it never caused any to begin with).
I can confidently say that in these cases, the time required to make these hefty data structures immutable saved more time than it cost. I've compared the maintenance costs of these new designs against former ones which revolved around mutability and functions causing side effects, and the mutable designs cost far more time and were far more prone to human error, especially in areas that are really tempting for developers to neglect during crunch time, like exception safety.
So I do think immutable data types really pay off in these cases, but not everything has to be made immutable in order to make the majority of the functions in your system free of side effects. Many things are cheap enough to just copy in full. And many real-world applications will need to cause some side effects here and there (saving a file, at the very least), but typically there are far more functions which could be devoid of them.
The point of having some immutable data types, to me, is to make sure we can write the maximum number of functions free of side effects without incurring epic overhead in the form of deep-copying massive data structures in full when only small portions of them need to be modified. Persistent data structures then end up being an optimization detail: they let us write our functions free of side effects without paying an epic cost for doing so.
Immutable Overhead
Now conceptually, the mutable versions will always have an edge in efficiency; there is always some computational overhead associated with immutable data structures. But I found it a worthy exchange in the cases described above, and you can focus on keeping that overhead sufficiently minimal. I prefer the approach where correctness becomes easy and optimization becomes harder, rather than optimization being easy but correctness becoming harder. It's not nearly as demoralizing to have code that functions perfectly correctly and just needs some more tune-ups as it is to have code that doesn't function correctly in the first place, no matter how quickly it achieves its incorrect results.