Which page replacement algorithm does Windows use?

I have a huge file that I memory-map. The file is larger than physical memory. Some pages in the file contain index information and are accessed much more frequently than the pages that contain data. I have a program written in C# that I run on Windows 8 to process this file, and it produces a large number of hard page faults (about 3,000 per second).
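For context, the mapping is set up roughly like this (a minimal sketch, not my exact code; the path is a placeholder):

```csharp
using System.IO;
using System.IO.MemoryMappedFiles;

class MappingSketch
{
    static void Main()
    {
        // Map the whole 16 GB file. Pages are brought in on demand, so
        // touching a page that is not resident costs a hard fault.
        using (var file = MemoryMappedFile.CreateFromFile(
                   @"C:\data\huge.dat",               // placeholder path
                   FileMode.Open,
                   null,                              // no map name
                   0,                                 // 0 = use the file's size
                   MemoryMappedFileAccess.Read))
        using (var view = file.CreateViewAccessor(
                   0, 0, MemoryMappedFileAccess.Read)) // view over the whole file
        {
            // ... index lookups and record reads go through `view` ...
        }
    }
}
```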

The index data is about 2 MB, the file is 16 GB, and physical memory is 8 GB. Every time I want to locate a data record in the file, I have to go through the index.
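The lookup pattern is essentially the following (a sketch; the index living at the start of the file and the 8-byte entries are assumptions for illustration):

```csharp
using System.IO.MemoryMappedFiles;

static class Lookup
{
    // Assume the ~2 MB index sits at the start of the file and each
    // 8-byte entry holds the file offset of a record.
    const long IndexStart = 0; // hypothetical layout

    // Index probes hit the same few hundred pages over and over,
    // while record reads land anywhere in the remaining ~16 GB.
    public static long FindRecordOffset(MemoryMappedViewAccessor view, long slot)
    {
        return view.ReadInt64(IndexStart + slot * sizeof(long));
    }

    public static void ReadRecord(MemoryMappedViewAccessor view, long offset, byte[] buffer)
    {
        view.ReadArray(offset, buffer, 0, buffer.Length); // may hard-fault
    }
}
```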

Does accessing the index information (frequently accessed pages) produce the same number of hard page faults as accessing the record data (rarely accessed pages)?
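For reference, the fault rate can be watched while the program runs by sampling the built-in performance counters (a minimal sketch; note that "Process\Page Faults/sec" counts soft faults as well as hard ones):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class FaultMonitor
{
    static void Main()
    {
        // System-wide hard faults (pages read from disk) and this
        // process's total fault rate (hard + soft).
        using (var hard = new PerformanceCounter("Memory", "Pages/sec"))
        using (var proc = new PerformanceCounter(
                   "Process", "Page Faults/sec",
                   Process.GetCurrentProcess().ProcessName))
        {
            while (true)
            {
                Thread.Sleep(1000); // rate counters need two samples
                Console.WriteLine("hard: {0:F0}/s  process: {1:F0}/s",
                                  hard.NextValue(), proc.NextValue());
            }
        }
    }
}
```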

Which is the Page Replacement Algorithm used on Windows?

EDIT: I found the following description:

The algorithm for selecting victim frames depends on the type of processor:

On single-processor 80x86 systems, a variation of the clock (second-chance) algorithm is used.

On Alpha and multiprocessor systems, clearing the reference bits may require invalidating entries in the TLB on other processors, which is an expensive operation. In this case Windows uses a variation of FIFO.
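For reference, the clock (second-chance) policy described above looks roughly like this (a simplified sketch; in reality the MMU hardware sets the reference bit, whereas here Access() simulates it):

```csharp
using System;
using System.Collections.Generic;

class ClockReplacer
{
    private readonly int[] frames;      // page id held by each frame, -1 = empty
    private readonly bool[] referenced; // simulated hardware "referenced" bit
    private readonly Dictionary<int, int> where = new Dictionary<int, int>();
    private int hand;                   // the clock hand

    public ClockReplacer(int frameCount)
    {
        frames = new int[frameCount];
        for (int i = 0; i < frameCount; i++) frames[i] = -1;
        referenced = new bool[frameCount];
    }

    // Returns true on a hit. On a miss, evicts the first page whose
    // reference bit is clear, giving referenced pages a "second chance".
    public bool Access(int page)
    {
        int f;
        if (where.TryGetValue(page, out f))
        {
            referenced[f] = true; // hit: mark the page as recently used
            return true;
        }

        while (true) // miss: sweep the hand until a victim is found
        {
            if (frames[hand] == -1 || !referenced[hand])
            {
                if (frames[hand] != -1) where.Remove(frames[hand]);
                frames[hand] = page;
                referenced[hand] = true;
                where[page] = hand;
                hand = (hand + 1) % frames.Length;
                return false;
            }
            referenced[hand] = false; // second chance: clear the bit, move on
            hand = (hand + 1) % frames.Length;
        }
    }
}
```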

So on multiprocessor systems Windows uses a variation of FIFO, which is unfortunate for my workload, because page access frequency is not taken into account when choosing a victim.

EDIT:

I'm trying to implement a fast on-disk hash table. I'm benchmarking it against SQLite by inserting key-value pairs: keys are 16 bytes long, values are 100 bytes long.
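The benchmark has roughly this shape (a sketch; the store interface and random data are hypothetical, only the record sizes match the description above):

```csharp
using System;
using System.Diagnostics;
using System.Security.Cryptography;

// Hypothetical interface that both stores (my hash table and the
// SQLite wrapper) implement for the benchmark.
interface IKeyValueStore
{
    void Put(byte[] key, byte[] value);
}

static class InsertBenchmark
{
    // Inserts `count` random pairs and returns the insert rate.
    public static double Run(IKeyValueStore store, int count)
    {
        var rng = RandomNumberGenerator.Create();
        var key = new byte[16];    // 16-byte keys, as described above
        var value = new byte[100]; // 100-byte values

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            rng.GetBytes(key);
            rng.GetBytes(value);
            store.Put(key, value);
        }
        sw.Stop();
        return count / sw.Elapsed.TotalSeconds; // inserts per second
    }
}
```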

This is what I get so far: