da Costa Sales, Guilherme (2016). Communication Avoiding Sparse Matrix-Vector Multiplication & GPGPU. PFE - Project Graduation, ENSTA.
Available under License Creative Commons Attribution Share Alike.
Modern parallel computing relies on complex hardware architectures composed of multicore processors and massively parallel computation units such as "many-core" processors or GPGPU accelerator cards. In the context of solving linear systems, currently available algorithms cannot fully exploit these highly parallel architectures because of the gap between the time needed to perform arithmetic operations and the communication time needed to move data between different processors or different levels of the memory hierarchy. A proposed solution to this problem is the development of communication avoiding (CA) algorithms, such as CA Krylov subspace methods, which aim to minimize these data movements at the expense of performing redundant computations. A key component of these methods is the computation of the Krylov basis vectors Ax, A²x, ..., Aᵏx, known as the matrix powers kernel. This work studies and develops communication avoiding sparse matrix-vector multiplication (SpMV) algorithms that offer a promising approach to computing the matrix powers kernel in distributed and hybrid CPU-GPU computing environments.
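The matrix powers kernel described in the abstract can be illustrated as a sequence of sparse matrix-vector products. The sketch below (using SciPy's sparse matrices; the function name and structure are illustrative, not the thesis's actual implementation) computes the basis Ax, A²x, ..., Aᵏx naively, i.e. with one SpMV per power; a communication avoiding variant would instead partition A and replicate "ghost" rows so that several powers can be computed per communication phase.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def matrix_powers_kernel(A, x, k):
    """Compute the Krylov basis vectors [Ax, A^2 x, ..., A^k x]
    by repeated sparse matrix-vector products (SpMV).

    Illustrative sketch only: each iteration performs one SpMV,
    so a distributed version of this loop would require one
    communication phase per power. CA algorithms restructure this
    computation to avoid that per-step communication.
    """
    basis = []
    v = x
    for _ in range(k):
        v = A @ v          # one SpMV per Krylov basis vector
        basis.append(v)
    return basis

# Small demo on a random sparse matrix in CSR format.
rng = np.random.default_rng(0)
A = sparse_random(6, 6, density=0.5, random_state=0, format="csr")
x = rng.standard_normal(6)
V = matrix_powers_kernel(A, x, 3)  # [Ax, A^2 x, A^3 x]
```

Note that the naive loop is numerically fragile for large k (the vectors align with the dominant eigenvector), which is why CA Krylov methods typically use better-conditioned polynomial bases (e.g. Newton or Chebyshev) rather than the monomial basis shown here.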
|Item Type:||Thesis (PFE - Project Graduation)|
|Uncontrolled Keywords:||high-performance computing, parallel programming, numerical linear algebra, communication avoiding algorithms|
|Subjects:||Information and Communication Sciences and Technologies; Mathematics and Applications|
|Deposited By:||Guilherme Da Costa Sales|
|Deposited On:||19 Sep 2016 14:12|
|Last Modified:||19 Sep 2016 14:12|