Dynamic arrays, for instance std::vector in C++, are typically implemented around the idea that when their size would exceed their capacity, the underlying array is reallocated to a larger one, usually twice the current capacity.
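For concreteness, this is roughly the scheme I mean (a minimal sketch of grow-by-doubling, not the actual std::vector implementation; the struct and function names are made up):

#include <stdlib.h>

/* Minimal grow-by-doubling dynamic array of longs (illustrative only). */
struct dynarray {
    long  *data;
    size_t size;      /* elements in use */
    size_t capacity;  /* elements allocated */
};

/* Append one element, doubling the capacity whenever it is exhausted. */
int dynarray_push(struct dynarray *a, long value)
{
    if (a->size == a->capacity) {
        size_t new_cap = a->capacity ? a->capacity * 2 : 1;
        long *tmp = realloc(a->data, new_cap * sizeof *tmp);
        if (!tmp)
            return -1;            /* keep the old buffer on failure */
        a->data = tmp;
        a->capacity = new_cap;
    }
    a->data[a->size++] = value;
    return 0;
}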
My question is: would this even be necessary, given that, at least on Linux, pages are allocated to a process only on demand?
For instance, with a simple program like this:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long size = (long)(64 * 4096 * 64) * 8;   /* 128M bytes */
    long *arr = malloc(size);

    if (!arr)
        return 1;

    /* touch only the first size/32 longs, i.e. size/4 bytes of the buffer */
    for (long i = 0; i < (size / 32); i++) {
        arr[i] = i;
        printf("%ld\n", arr[i]);
    }
    printf("--->%p\n", (void *)arr);
    while (1)
        ;
}
If size were equal to 64 * 4096 * 64 (16 MB), we can see via /proc/[PID]/maps that the heap section is only about 4 MB (despite the malloc being for 16 MB), and with the size given in the program it is about 32 MB (despite the malloc being for 128 MB). In both cases that is roughly the size/4 bytes the loop actually touches (size/32 longs of 8 bytes each), illustrating on-demand paging.
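Incidentally, the same effect can be observed from inside the process with mincore(), which reports which pages of a mapping are resident. A rough sketch (using mmap directly, since mincore requires a page-aligned address; the helper name is mine):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/* Count how many pages of [addr, addr+len) are currently resident in RAM. */
static size_t resident_pages(void *addr, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t npages = (len + page - 1) / page;
    unsigned char *vec = malloc(npages);
    size_t count = 0;

    if (!vec || mincore(addr, len, vec) != 0) {
        free(vec);
        return 0;
    }
    for (size_t i = 0; i < npages; i++)
        count += vec[i] & 1;
    free(vec);
    return count;
}

int main(void)
{
    size_t size = 128UL * 1024 * 1024;          /* 128 MB allocation */
    long *arr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (arr == MAP_FAILED)
        return 1;

    printf("resident before touching: %zu pages\n", resident_pages(arr, size));

    /* Touch only the first quarter of the buffer, like the loop above. */
    for (size_t i = 0; i < size / 32; i++)
        arr[i] = (long)i;

    printf("resident after touching:  %zu pages\n", resident_pages(arr, size));
    return 0;
}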
Considering that this approach, reserving one large allocation up front and letting demand paging back pages only as they are touched instead of repeatedly reallocating and doubling, does not seem to be widely used, why is that? One explanation I came up with is that it assumes the system has on-demand paging in the first place, which might not be the case and would hurt portability, but what else am I missing?
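To be explicit, by "this approach" I mean something along these lines (a rough sketch assuming a 64-bit address space and an overcommitting Linux mmap; the names and the reservation size are made up):

#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical "never reallocate" dynamic array: reserve a huge virtual
 * range once and rely on demand paging to back only the pages we touch. */
#define RESERVE_BYTES (1UL << 36)   /* 64 GB of address space, not of RAM */

struct bigvec {
    long  *data;
    size_t size;
};

int bigvec_init(struct bigvec *v)
{
    v->data = mmap(NULL, RESERVE_BYTES, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    v->size = 0;
    return v->data == MAP_FAILED ? -1 : 0;
}

int bigvec_push(struct bigvec *v, long value)
{
    if ((v->size + 1) * sizeof(long) > RESERVE_BYTES)
        return -1;                  /* reservation exhausted */
    v->data[v->size++] = value;     /* page is faulted in on first write */
    return 0;
}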