Efficient Sparse Neural Network Training

dc.contributor.advisor: Choi, Jee
dc.contributor.author: Shvedov, Konstantin
dc.date.accessioned: 2022-10-04T19:43:35Z
dc.date.available: 2022-10-04T19:43:35Z
dc.date.issued: 2022-10-04
dc.description.abstract: Developments in neural networks have led to advanced models that require large amounts of training time and resources. To reduce the environmental impact and decrease the training times of these models, acceleration techniques have been developed. One such method is neural network pruning, which removes insignificant weights and yields sparse models. This paper explores and improves a method of training sparse neural networks efficiently by processing only non-zero values, using optimized just-in-time (JIT) kernels from the LIBXSMM library while randomly pruning network layers at initialization. The algorithms explored in this paper serve as a proof of concept and show that training time can be improved beyond what the highly optimized PyTorch library is currently capable of. Through the work in this paper, the algorithms' processing times are sped up over 100-fold. Further, this work provides additional evidence that advanced pruning algorithms and other improvements can significantly reduce training times and resources.
dc.identifier.uri: https://hdl.handle.net/1794/27618
dc.language.iso: en_US
dc.publisher: University of Oregon
dc.rights: All Rights Reserved.
dc.subject: JIT
dc.subject: Matrices
dc.subject: Neural Networks
dc.subject: Pruning
dc.subject: Sparse
dc.title: Efficient Sparse Neural Network Training
dc.type: Electronic Thesis or Dissertation
thesis.degree.discipline: Department of Computer and Information Science
thesis.degree.grantor: University of Oregon
thesis.degree.level: masters
thesis.degree.name: M.S.
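
The abstract describes two techniques: randomly pruning network layers at initialization, and then training on only the non-zero values with JIT-compiled kernels. The following is a minimal sketch of the first step, assuming PyTorch's standard torch.nn.utils.prune API; it is not the thesis's implementation, the LIBXSMM-based kernels are not shown, and the layer sizes and 90% sparsity level are illustrative assumptions.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Illustrative sketch only: random pruning at initialization.
    # Layer sizes and the 90% sparsity level are assumptions,
    # not the thesis's settings.
    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )

    # Randomly zero out 90% of each Linear layer's weights before any training.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.random_unstructured(module, name="weight", amount=0.9)

    # PyTorch keeps the pruned weight as a dense masked tensor
    # (weight_orig * weight_mask). A sparse training pipeline like the one
    # the abstract describes would instead store only the non-zero entries,
    # e.g. in a compressed sparse format, and hand them to JIT-generated kernels.
    sparse_weight = model[0].weight.to_sparse()  # COO view of surviving non-zeros
    print(f"First layer non-zeros: "
          f"{sparse_weight.values().numel()} / {model[0].weight.numel()}")

Pruning at initialization, rather than after training, is what allows the speedup to be realized during training itself: the sparsity pattern is fixed before the first forward pass, so every iteration can skip the zeroed weights.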

Files

Original bundle
Name: Shvedov_oregon_0171N_13358.pdf
Size: 431.1 KB
Format: Adobe Portable Document Format