Show simple item record

dc.contributor.author: Papon, Tarikul Islam
dc.contributor.author: Chen, Taishan
dc.contributor.author: Zhang, Shuo
dc.contributor.author: Athanassoulis, Manos
dc.date.accessioned: 2024-10-24T15:17:43Z
dc.date.available: 2024-10-24T15:17:43Z
dc.date.issued: 2024-05-30
dc.identifier.citation: Tarikul Islam Papon, Taishan Chen, Shuo Zhang, and Manos Athanassoulis. 2024. CAVE: Concurrency-Aware Graph Processing on SSDs. Proc. ACM Manag. Data 2, 3, Article 125 (June 2024), 26 pages. https://doi.org/10.1145/3654928
dc.identifier.uri: https://hdl.handle.net/2144/49415
dc.description.abstract: Large-scale graph analytics has become increasingly common in areas such as social networks, physical sciences, transportation networks, and recommendation systems. Since many such practical graphs do not fit in main memory, graph analytics performance depends on efficiently utilizing the underlying storage devices. Existing out-of-core graph processing systems employ sharding and subgraph partitioning to optimize for storage while relying on the efficient sequential access of traditional hard disks. However, today's storage is increasingly based on solid-state drives (SSDs), which exhibit high internal parallelism and efficient random accesses. Yet, state-of-the-art graph processing systems do not explicitly exploit those properties, resulting in subpar performance. In this paper, we develop CAVE, the first graph processing engine that optimally exploits underlying SSD-based storage by harnessing the available device parallelism, carefully selecting which I/Os to graph data can be issued concurrently. Thus, CAVE traverses multiple paths and processes multiple nodes and edges concurrently, achieving parallelization at a granular level. We identify two key ways to parallelize graph traversal algorithms based on the graph structure and the algorithm: intra-subgraph and inter-subgraph parallelization. The former identifies subgraphs containing vertices that can be accessed in parallel, while the latter identifies subgraphs that can be processed in their entirety in parallel. To showcase the benefit of our approach, we build within CAVE parallelized versions of five popular graph algorithms (Breadth-First Search, Depth-First Search, Weakly Connected Components, PageRank, and Random Walk) that exploit the full bandwidth of the underlying device. CAVE uses a blocked file format based on adjacency lists and employs a concurrent cache pool that is essential to the parallelization of graph algorithms.
By experimenting with different types of graphs on three SSD devices, we demonstrate that CAVE utilizes the available parallelism and scales to diverse real-world graph datasets. CAVE achieves up to one order of magnitude speedup compared to the popular out-of-core systems Mosaic and GridGraph, and up to three orders of magnitude speedup compared to GraphChi.
dc.language.iso: en_US
dc.publisher: ACM
dc.relation.ispartofseries: Proceedings of the ACM on Management of Data
dc.rights: © 2024 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License. This article has been published under a Read & Publish Transformative Open Access (OA) Agreement with ACM.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Hardware
dc.subject: Information Systems
dc.subject: Communication Hardware, Interfaces and Storage
dc.subject: External Storage
dc.subject: Data Management Systems
dc.subject: Database Design and Models
dc.subject: Graph-based Database Models
dc.subject: Information Storage Systems
dc.title: CAVE: concurrency-aware graph processing on SSDs
dc.type: Article
dc.identifier.doi: 10.1145/3654928
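The abstract's central idea, keeping many independent I/Os to graph data in flight so the SSD's internal parallelism is saturated, can be illustrated with a minimal sketch. This is not CAVE's actual code: the toy in-memory graph, the `fetch_adjacency` stand-in for a block read, and the `io_depth` parameter are all hypothetical, and a thread pool here merely mimics concurrent block reads during a level-synchronous BFS (intra-subgraph parallelism in the paper's terms).

```python
# Illustrative sketch only (not CAVE's implementation): a BFS whose
# per-level adjacency-list fetches are issued concurrently, the way an
# SSD can serve many independent random block reads at once.
from concurrent.futures import ThreadPoolExecutor

# Toy adjacency lists standing in for on-disk blocks of a graph file.
GRAPH = {
    0: [1, 2],
    1: [3],
    2: [3, 4],
    3: [5],
    4: [5],
    5: [],
}

def fetch_adjacency(vertex):
    """Stand-in for one random block read from the storage device."""
    return GRAPH[vertex]

def concurrent_bfs(source, io_depth=4):
    """Level-synchronous BFS; all reads for a frontier are in flight together."""
    visited = {source}
    frontier = [source]
    order = [source]
    with ThreadPoolExecutor(max_workers=io_depth) as pool:
        while frontier:
            # Issue one "I/O" per frontier vertex concurrently; pool.map
            # returns results in submission order, so the traversal is
            # deterministic even though the reads overlap.
            adjacency_lists = pool.map(fetch_adjacency, frontier)
            next_frontier = []
            for neighbors in adjacency_lists:
                for v in neighbors:
                    if v not in visited:
                        visited.add(v)
                        next_frontier.append(v)
                        order.append(v)
            frontier = next_frontier
    return order

print(concurrent_bfs(0))  # → [0, 1, 2, 3, 4, 5]
```

Inter-subgraph parallelization, the second technique named in the abstract, would instead run several such traversals over disjoint subgraphs at the same time, sharing the device's I/O budget across them.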


