Date of Award
Master of Science
Electrical and Computer Engineering
In recent years, the vulnerability of neural networks to adversarial samples has gained wide attention from the machine learning and deep learning communities. The addition of small, imperceptible perturbations to input samples can cause neural network models to make incorrect predictions with high confidence. As the deployment of neural networks in safety-critical applications rises, this vulnerability of traditional Artificial Neural Networks (ANNs) to adversarial samples demands more robust alternative neural network models. The Spiking Neural Network (SNN) is a special class of ANN that mimics brain functionality by using spikes for information processing. The known advantages of SNNs include fast inference, low power consumption, and biologically plausible information processing. In this work, we experiment on the adversarial robustness of the SNN as compared to the traditional ANN, and investigate whether the SNN can be a candidate to solve the security problems faced by the ANN.
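The perturbation mechanism described above can be sketched with a toy example. The snippet below is a minimal, hypothetical illustration of a gradient-sign attack (in the spirit of FGSM) on a single logistic-regression unit standing in for a network; the weights, input, and epsilon are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Add an eps-bounded perturbation that increases the loss for label y.

    For a logistic unit p = sigmoid(w @ x + b) with cross-entropy loss,
    the gradient of the loss w.r.t. the input is (p - y) * w; moving the
    input in the sign of that gradient pushes the model away from y.
    """
    p = sigmoid(w @ x + b)           # model confidence for class 1
    grad_x = (p - y) * w             # d(loss)/dx for this toy model
    return x + eps * np.sign(grad_x)

# Invented weights and input, purely for demonstration.
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([1.0, -1.0, 2.0])       # clean input, true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
p_clean = sigmoid(w @ x + b)         # high confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)       # lower confidence after perturbation
```

Each component of the input moves by at most eps, so the perturbed sample stays visually close to the original, yet the model's confidence in the true label drops; in deeper networks the same principle can flip the prediction outright.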