The science of high performance algorithms for hierarchical matrices

dc.contributor.advisor: Biros, George
dc.contributor.advisor: Van de Geijn, Robert A.
dc.contributor.committeeMember: Batory, Don
dc.contributor.committeeMember: Dimakis, Georgios-Alex
dc.contributor.committeeMember: Martinsson, Per-Gunnar
dc.creator: Yu, Chen-Han, Ph. D.
dc.date.accessioned: 2018-09-17T16:06:42Z
dc.date.available: 2018-09-17T16:06:42Z
dc.date.created: 2018-08
dc.date.issued: 2018-08-15
dc.date.submitted: August 2018
dc.date.updated: 2018-09-17T16:06:43Z
dc.description.abstract: Many matrices in scientific computing, statistical inference, and machine learning exhibit sparse and low-rank structure. Typically, such structure is exposed by an appropriate permutation of the rows and columns and exploited by constructing a hierarchical approximation: the matrix is written as a sum of sparse and low-rank matrices, and this structure repeats recursively. Matrices that admit such an approximation are known as hierarchical matrices (H-matrices for short). H-matrix approximation methods are more general and scalable than using a sparse or low-rank approximation alone. Classical numerical linear algebra operations on H-matrices (multiplication, factorization, and eigenvalue decomposition) can be accelerated by many orders of magnitude. Although the literature on H-matrices for problems in computational physics (low dimensions) is vast, there is less work on generalizations and on problems arising in machine learning. There is also limited work on high-performance computing algorithms for purely algebraic H-matrix methods. This dissertation addresses these open problems by building hierarchical approximations for kernel matrices and generic symmetric positive definite (SPD) matrices. We propose a general tree-based framework (GOFMM) for appropriately permuting a matrix to expose its hierarchical structure. GOFMM supports both static and dynamic scheduling, shared-memory and distributed-memory architectures, and hardware accelerators. The supported algorithms include kernel methods and approximate matrix multiplication and factorization for large sparse and dense matrices.
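The recursive sparse-plus-low-rank structure the abstract describes can be sketched in a few lines of NumPy. The sketch below is illustrative only, not GOFMM's actual algorithm: it is an HODLR-style matrix-vector product in which diagonal blocks recurse and off-diagonal blocks are compressed by truncated SVD. The Gaussian kernel, bandwidth, leaf size, and rank are arbitrary choices for demonstration.

```python
import numpy as np

def hodlr_matvec(A, x, rank=8, leaf=32):
    """Approximate A @ x using a simple hierarchical (HODLR-style) scheme:
    diagonal blocks are treated recursively, off-diagonal blocks are
    replaced by rank-`rank` truncated-SVD approximations."""
    n = A.shape[0]
    if n <= leaf:                      # small block: dense multiply
        return A @ x
    m = n // 2
    y = np.zeros_like(x)
    # Diagonal blocks: recurse (the structure repeats at every level).
    y[:m] += hodlr_matvec(A[:m, :m], x[:m], rank, leaf)
    y[m:] += hodlr_matvec(A[m:, m:], x[m:], rank, leaf)
    # Off-diagonal blocks: apply a low-rank approximation U S V^T.
    for r0, r1, c0, c1 in [(0, m, m, n), (m, n, 0, m)]:
        U, s, Vt = np.linalg.svd(A[r0:r1, c0:c1], full_matrices=False)
        y[r0:r1] += U[:, :rank] @ (s[:rank] * (Vt[:rank] @ x[c0:c1]))
    return y

# A Gaussian kernel matrix on sorted 1-D points; sorting plays the role of
# the permutation that exposes the hierarchical (low-rank off-diagonal) structure.
pts = np.sort(np.random.default_rng(0).uniform(size=256))
K = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / 0.01)
x = np.ones(256)
err = np.linalg.norm(hodlr_matvec(K, x) - K @ x) / np.linalg.norm(K @ x)
```

Note that forming a full SVD of every off-diagonal block costs as much as a dense multiply; real H-matrix codes construct the low-rank factors cheaply (e.g., by sampling or skeletonization), which is where the orders-of-magnitude acceleration comes from. The sketch only demonstrates the recursive block structure.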
dc.description.department: Computer Sciences
dc.format.mimetype: application/pdf
dc.identifier: doi:10.15781/T2RR1Q58N
dc.identifier.uri: http://hdl.handle.net/2152/68459
dc.language.iso: en
dc.subject: Hierarchical matrices
dc.subject: Fast multipole methods
dc.subject: Kernel methods
dc.subject: Parallel algorithms
dc.subject: High-performance computing
dc.title: The science of high performance algorithms for hierarchical matrices
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Computer Sciences
thesis.degree.discipline: Computer Science
thesis.degree.grantor: The University of Texas at Austin
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Original bundle

Name: YU-DISSERTATION-2018.pdf
Size: 2.91 MB
Format: Adobe Portable Document Format

License bundle

Name: PROQUEST_LICENSE.txt
Size: 4.45 KB
Format: Plain Text
Name: LICENSE.txt
Size: 1.84 KB
Format: Plain Text