In any multiprogrammed virtual memory system, each executing process must be given some allocation of the main memory space for its pages. It has long been known that a fixed allocation to each process is likely to yield a high number of page faults. Therefore, a system must dynamically adapt the allocations for each process in response to reference behavior.
A number of existing policies address this problem: Global LRU, Working Set, Page Fault Frequency, and others. However, these policies were developed decades ago, when the hardware, workloads, and goals for a virtual memory system were substantially different. They fall short on modern systems and workloads in several ways: they don't account for interactive processes, they don't respond to priorities or proportional shares, and they don't apply to systems without long-term schedulers. These older policies are either inapplicable outright (WS) or yield undesirable, unpredictable, and uncontrollable behavior (gLRU, PFF).
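To make the classic policies concrete, here is a textbook-style sketch of the Page Fault Frequency (PFF) idea, not any particular kernel's implementation: each process's allocation grows when its faults arrive close together and shrinks when they are far apart. The thresholds and virtual-time units are illustrative assumptions.

```python
T_LOW = 10     # inter-fault interval below this: allocation too small
T_HIGH = 100   # inter-fault interval above this: allocation too large

class PFFAllocator:
    """Per-process frame allocation adjusted on each page fault (PFF sketch)."""

    def __init__(self, frames):
        self.frames = frames          # current allocation size, in frames
        self.last_fault_time = None   # virtual time of the previous fault

    def on_page_fault(self, now):
        """Grow or shrink the allocation based on the inter-fault interval."""
        if self.last_fault_time is not None:
            interval = now - self.last_fault_time
            if interval < T_LOW:
                self.frames += 1      # faulting too often: grow
            elif interval > T_HIGH and self.frames > 1:
                self.frames -= 1      # faulting rarely: shrink
        self.last_fault_time = now
        return self.frames
```

For example, faults at virtual times 0, 5, and 200 would grow a 4-frame allocation to 5 (interval 5 < T_LOW) and then shrink it back to 4 (interval 195 > T_HIGH). Note that this per-process rule reacts only to each process's own fault rate, which is one reason PFF can behave unpredictably under system-wide memory pressure.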
We are developing a new model of process behavior that accounts for the broader reference behavior of modern workloads. This model should handle priorities and proportional shares, sleeping processes, and larger-scale localities better than the existing policies, all without special cases and ad hoc patches. The overall goal is to provide explicit control over memory as a resource allocated among processes.
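One simple reading of "proportional shares" applied to memory, offered as an illustrative sketch rather than the model described above: each process holds some number of shares, and receives frames in proportion to its share of the total, with leftover frames going to the largest fractional remainders. The function name and inputs are assumptions for illustration.

```python
def proportional_allocation(shares, total_frames):
    """Map per-process shares to frame counts (largest-remainder rounding)."""
    total = sum(shares.values())
    raw = {p: s * total_frames / total for p, s in shares.items()}
    alloc = {p: int(r) for p, r in raw.items()}  # floor of each ideal share
    # Hand the leftover frames to the largest fractional remainders.
    leftover = total_frames - sum(alloc.values())
    for p in sorted(raw, key=lambda p: raw[p] - alloc[p], reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc
```

With shares {A: 5, B: 3, C: 2} and 10 frames, this yields 5, 3, and 2 frames respectively. The appeal of such a rule is that allocations become a controllable policy input rather than an emergent side effect of replacement decisions.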
We are actively pursuing this topic by experimenting with dynamic virtual memory management (DVMM) policies both in simulation and on real kernels. Our experimentation borrows from and builds upon work on whole-system trace collection and on in-kernel recency information gathering. We hope to be able to post code, scripts, and inputs for our experiments within a few months, as well as a paper (in development) on the model and its implications.