Conference Paper: A Concurrency Control Protocol That Selects Accessible Replicated Pages to Avoid Latch Collisions for B-Trees in Manycore Environments

Yoshihara, Tomohiro; Yokota, Haruo

In recent years, microprocessor vendors aiming for dramatic performance improvement have introduced manycore processors with over 100 cores on a single chip. To exploit this in database and storage systems, B-trees and their concurrency control must reduce the number of latch collisions and interactions among the cores. Concurrency control methods such as physiological partitioning (PLP), which assigns cores to partitions under value-range partitioning, have been studied. These methods perform effectively for nearly static, uniform workloads on multicore processors. However, their performance deteriorates significantly when skew or changing workloads force major restructuring of the B-tree. The manycore approach is especially likely to suffer from workload skew, given the lower power of each core, with an accompanying severe degradation in performance. This issue is critical for database and storage systems, which demand consistently high performance even under dynamic workloads. To address this problem, we propose an efficient new concurrency control method suitable for manycore processor platforms, called the selecting accessible replicated pages (SARP) B-tree concurrency control method. SARP achieves consistently high performance with robustness against workload skew by distributing the workload across the many cores of a manycore processor while reducing latch collisions and interactions among the cores. By applying parallel B-trees to shared-everything environments, SARP selects execution cores and access paths that distribute workloads widely across cores according to the processor's characteristics. Experimental results on a Linux server with an Intel Xeon Phi manycore processor demonstrated that the proposed system achieved 44 times the throughput of PLP in the maximum-skew case, while maintaining throughput at 66% of that for uniform workloads.
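As background, the core-to-partition assignment used by value-range partitioning schemes such as PLP can be sketched as follows. This is an illustrative assumption for exposition, not the paper's implementation; the function name and boundary layout are hypothetical:

```python
from bisect import bisect_right

def core_for_key(key, boundaries):
    """Return the index of the core whose key range contains `key`.

    `boundaries` holds the exclusive upper bounds of every partition
    except the last, in ascending order: with boundaries [100, 200],
    core 0 owns keys < 100, core 1 owns [100, 200), and core 2 owns
    the rest. Each core latches only the sub-tree for its own range,
    so cores do not collide on latches under a uniform workload.
    """
    return bisect_right(boundaries, key)
```

Under a skewed workload, however, most requests map to a single partition, and the one core owning it becomes the bottleneck; this is the degradation mode that SARP is designed to avoid.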
