
In Big O notation, is allocating an array of N elements O(1) or O(n)? For example, in C#, if I allocate an array like this:

int[] a = new int[10];

When I display this array, I have :

{0,0,0,0,0,0,0,0,0,0}
csblo
  • possible duplicate of [What is O(…) and how do I calculate it?](http://programmers.stackexchange.com/questions/132331/what-is-o-and-how-do-i-calculate-it) – gnat Nov 12 '14 at 10:05
  • It really depends on the allocation. For example, in C or similar languages the OS just needs to find enough memory. If the block needs to be contiguous, it is O(n) with n being the number of gaps in memory, regardless of the length of your array. If it can manage with non-contiguous memory, it is O(n) with n as the size of the memory you want to allocate. In the case of non-native code, it depends completely on the framework/VM. – 11684 Nov 12 '14 at 10:13
  • Your comment is useful; however, I didn't know that Big O notation depends on the language/framework or OS. – csblo Nov 12 '14 at 10:15
  • @Niels It depends on the *model of computation*: on whether `new[]` is considered a primitive operation in your model of computation, or, if it's not, on the assumed implementation of `new[]`. Adding two integers may take O(1) time in the RAM model, or O(d) on a Turing machine (where d = the length of the numbers). –  Nov 12 '14 at 10:22
  • @11684 While I agree in principle (see my previous comment), I can't get behind those C examples. Nothing any sensible person would call an array is non-contiguous. And speaking of O(gaps) complexity is very misleading: there are many allocator designs, many with quite different performance characteristics. Plus, O(gaps) isn't useful because the number of gaps is completely arbitrary and depends in complicated ways on the internals of the allocator and the global allocation behavior of the program. –  Nov 12 '14 at 10:26
  • Thanks for your comments; I now understand Big O notation a bit better. – csblo Nov 12 '14 at 10:36
  • @delnan Sorry, I could have stated more clearly that the point of my comment was that the complexity of allocation isn't useful, exactly for the reasons you give. I only wanted to illustrate that the question is very framework- and OS-dependent. Of course an array is contiguous _to the owning process_, but IIRC memory address remapping hardware exists to make it possible to allocate a non-contiguous block while seemingly giving a contiguous block to the requesting process. – 11684 Nov 12 '14 at 16:27
  • A non-contiguous block is of course a contradiction in terms. But I hope you know what I mean. – 11684 Nov 12 '14 at 18:56

1 Answer


Normally, the size of an array has no effect on the complexity of the allocation itself. AFAIK, an array is internally a pointer to the address where the array begins, plus hidden fields for the element size and element count. So allocation would be O(1).
At least this is the case in Object Pascal; I am not sure about C# or other languages.

But finding a chunk of memory that fits your array has some higher complexity, which depends on the size at some point but depends more on how the active memory manager works: Time complexity of memory allocation.

  • Note that `new[]` (in C# and also in C++) not only allocates but also initializes. –  Nov 12 '14 at 10:27
  • Indeed, I omitted this fact – csblo Nov 12 '14 at 10:31
  • If the system keeps an arena of pre-initialized memory around, then allocating would *normally* amount to incrementing a pointer, so the amortized cost would still be constant, not linear. – Kilian Foth Nov 12 '14 at 10:32
  • @KilianFoth Yes, it's certainly possible to get initialization at no extra cost during allocation (though it probably makes other operations, e.g. deallocation, more expensive). But in any case it is a facet that must be considered. –  Nov 12 '14 at 10:35