Huge pages as described at :ref:`hugetlbpage` are typically
preallocated for application use. These huge pages are instantiated in a
task's address space at page fault time if the VMA indicates huge pages are
to be used. If no huge page exists at page fault time, the task is sent
a SIGBUS and often dies an unhappy death. Shortly after huge page support
was added, it was determined that it would be better to detect a shortage
of huge pages at mmap() time. The idea is that if there were not enough
huge pages to cover the mapping, the mmap() would fail. This was first
done with a simple check in the code at mmap() time to determine if there
were enough free huge pages to cover the mapping. Like most things in the
kernel, the code has evolved over time. However, the basic idea was to
'reserve' huge pages at mmap() time to ensure that huge pages would be
available for page faults in that mapping. The description below attempts to
describe how huge page reserve processing is done in the v4.10 kernel.
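
For example, with a ``MAP_HUGETLB`` mapping a shortage shows up as an
``mmap()`` failure (``ENOMEM``) instead of a ``SIGBUS`` at fault time. A
minimal user-space sketch; the 256 MB length is arbitrary and assumes the
default huge page size is 2 MB::

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/mman.h>

    #define LENGTH (256UL * 1024 * 1024)    /* 128 huge pages at 2 MB */

    int main(void)
    {
            /*
             * Huge pages are reserved here, at mmap() time. If the
             * reservation cannot be satisfied, mmap() fails with ENOMEM
             * rather than the mapping taking a SIGBUS at fault time.
             */
            void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                              -1, 0);

            if (addr == MAP_FAILED) {
                    fprintf(stderr, "mmap: %s\n", strerror(errno));
                    return 1;
            }
            memset(addr, 0, LENGTH);    /* faults draw from the reserve */
            munmap(addr, LENGTH);
            return 0;
    }
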
This description is primarily targeted at kernel developers who are modifying
hugetlbfs code.

The following data structures are central to huge page reserve processing.

``resv_huge_pages`` is a global (per-hstate) count of reserved huge pages.
Reserved huge pages are only available to the task which reserved them.
Therefore, the number of huge pages generally available is computed
as (``free_huge_pages - resv_huge_pages``).
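
These counts are visible from user space: the ``HugePages_Free`` and
``HugePages_Rsvd`` fields of ``/proc/meminfo`` report ``free_huge_pages``
and ``resv_huge_pages`` for the default huge page size, so the above
computation can be sketched as::

    #include <stdio.h>

    int main(void)
    {
            char line[128];
            long free_hp = -1, resv_hp = -1;
            FILE *f = fopen("/proc/meminfo", "r");

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f)) {
                    /* sscanf() leaves the values unchanged on non-matching lines */
                    sscanf(line, "HugePages_Free: %ld", &free_hp);
                    sscanf(line, "HugePages_Rsvd: %ld", &resv_hp);
            }
            fclose(f);
            if (free_hp < 0 || resv_hp < 0)
                    return 1;
            /* free_huge_pages - resv_huge_pages */
            printf("generally available: %ld\n", free_hp - resv_hp);
            return 0;
    }
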
A reserve map is described by the structure::

    struct resv_map {
            struct kref refs;
            spinlock_t lock;
            struct list_head regions;
            long adds_in_progress;
            struct list_head region_cache;
            long region_cache_count;
    };

There is one reserve map for each huge page mapping in the system.
The regions list within the resv_map describes the regions within
the mapping. A region is described as::

    struct file_region {
            struct list_head link;
            long from;
            long to;
    };

The 'from' and 'to' fields of the file region structure are huge page
indices into the mapping. Depending on the type of mapping, a region in the
resv_map may indicate reservations exist for the range, or reservations do
not exist.
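
To make the region representation concrete, the sketch below counts the huge
pages in a range [f, t) that are covered by regions on a list. It is loosely
modeled on the kernel's region handling (region_count() in mm/hugetlb.c), but
the plain ``next`` pointer list and the ``region_pages()`` helper are
simplifications for illustration, not kernel code::

    #include <stdio.h>

    /* Simplified stand-in for the kernel's struct file_region. */
    struct file_region {
            struct file_region *next;
            long from;      /* first huge page index covered */
            long to;        /* one past the last index covered */
    };

    /* Count pages in [f, t) covered by regions on the list. */
    static long region_pages(const struct file_region *r, long f, long t)
    {
            long count = 0;

            for (; r; r = r->next) {
                    long from = r->from > f ? r->from : f;
                    long to = r->to < t ? r->to : t;

                    if (from < to)
                            count += to - from;     /* overlap with [f, t) */
            }
            return count;
    }

    int main(void)
    {
            /* Regions [0, 2) and [5, 9) cover 2 + 4 = 6 pages of [0, 10). */
            struct file_region r2 = { NULL, 5, 9 };
            struct file_region r1 = { &r2, 0, 2 };

            printf("%ld pages covered\n", region_pages(&r1, 0, 10));
            return 0;
    }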