Robustness of Neural Networks for Discrete Input: An Adversarial Perspective

dc.contributor.advisor: Lowd, Daniel
dc.contributor.author: Ebrahimi, Javid
dc.date.accessioned: 2019-04-30T21:09:38Z
dc.date.available: 2019-04-30T21:09:38Z
dc.date.issued: 2019-04-30
dc.description.abstract: In the past few years, evaluation on adversarial examples has become a standard procedure for measuring the robustness of deep learning models. The literature on adversarial examples for neural networks has largely focused on image data, which are represented as points in a continuous space. However, a vast proportion of machine learning models operate on discrete input and thus demand similar rigor in understanding their vulnerabilities and robustness. We study the robustness of neural network architectures for textual and graph inputs through the lens of adversarial input perturbations. We cover methods for both attack and defense, focusing on 1) addressing the optimization challenges of creating adversarial perturbations for discrete data; 2) evaluating and contrasting white-box and black-box adversarial examples; and 3) proposing efficient methods to make models robust against adversarial attacks.
dc.identifier.uri: https://hdl.handle.net/1794/24535
dc.language.iso: en_US
dc.publisher: University of Oregon
dc.rights: All Rights Reserved.
dc.subject: Adversarial machine learning
dc.subject: Graph neural networks
dc.subject: Machine translation
dc.title: Robustness of Neural Networks for Discrete Input: An Adversarial Perspective
dc.type: Electronic Thesis or Dissertation
thesis.degree.discipline: Department of Computer and Information Science
thesis.degree.grantor: University of Oregon
thesis.degree.level: doctoral
thesis.degree.name: Ph.D.
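
The abstract's first focus, optimizing adversarial perturbations over discrete input, admits a compact illustration. A standard first-order approach (the basis of gradient-based flip attacks on text such as the author's HotFlip) scores every possible single-token substitution using the gradient of the loss with respect to the one-hot input: swapping token a at position i for token b changes the loss by approximately grad[i, b] - grad[i, a]. The sketch below implements that estimator in NumPy under stated assumptions; the loss_grad array stands in for a real backward pass, and all names are illustrative rather than taken from the dissertation's code.

import numpy as np

def best_flip(onehot, loss_grad):
    """First-order estimate of the single best token substitution.

    onehot:    (seq_len, vocab) one-hot encoding of the input tokens.
    loss_grad: (seq_len, vocab) gradient of the loss w.r.t. the one-hot
               input (one backward pass through the real model; random
               numbers stand in for it in the demo below).

    Swapping position i from token a to token b changes the loss by
    roughly loss_grad[i, b] - loss_grad[i, a]; return the swap with the
    largest estimated increase.
    """
    # Gradient at each position's currently active token.
    current = (loss_grad * onehot).sum(axis=1, keepdims=True)
    # Estimated loss change for every candidate substitution.
    gain = loss_grad - current
    gain[onehot.astype(bool)] = -np.inf  # rule out the identity swap
    pos, new_tok = np.unravel_index(np.argmax(gain), gain.shape)
    old_tok = int(onehot[pos].argmax())
    return int(pos), old_tok, int(new_tok), float(gain[pos, new_tok])

# Demo with toy data: 5 tokens, vocabulary of 10.
rng = np.random.default_rng(0)
tokens = rng.integers(10, size=5)
onehot = np.eye(10)[tokens]
loss_grad = rng.normal(size=(5, 10))
print(best_flip(onehot, loss_grad))

Because the estimate needs only one backward pass per input, it makes beam search over multi-token perturbations cheap; the black-box attacks contrasted in the thesis forgo the gradient and instead rank candidate substitutions by querying the model.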

Files

Original bundle
Name: Ebrahimi_oregon_0171A_12374.pdf
Size: 1.45 MB
Format: Adobe Portable Document Format