Date of Award

5-1-2024

Degree Name

Master of Science

Department

Computer Science

First Advisor

Imteaj, Ahmed

Abstract

In this study, we explore the vulnerability of Variational Autoencoder Generative Adversarial Networks (VAEGANs) to membership inference attacks, focusing on the potential privacy risks associated with Generative AI. By employing both black-box and white-box attack methodologies, we aim to quantify the extent to which sensitive information can be discerned or reconstructed by an adversary. Our investigation reveals significant vulnerabilities within the VAEGAN framework, highlighting the pressing need for enhanced privacy protections in generative models. Through a series of experiments across diverse datasets, we assess the impact of these attacks on the integrity of the models, shedding light on the balance between data generation capabilities and privacy preservation. Our findings underscore the importance of developing robust mechanisms to safeguard against such privacy breaches, contributing to the broader discussion on securing Generative AI against emerging threats.
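The abstract names black-box membership inference only at a high level; as a loose, non-authoritative sketch of that setting, the snippet below flags a candidate record as a likely training member when it lies unusually close to samples drawn from the generative model. All names, the NumPy toy data, and the distance-to-nearest-synthetic-sample scoring rule are illustrative assumptions, not details taken from the thesis.

```python
# Illustrative sketch (not the thesis's method): a black-box membership
# inference attack that scores candidates by their distance to the nearest
# sample drawn from the target generative model.
import numpy as np

def membership_scores(generate, candidates, n_synthetic=10_000):
    """Return a score per candidate; higher means "more likely a training member".

    generate:   callable(n) -> array of shape (n, d); black-box sampler.
    candidates: array of shape (m, d); records whose membership we test.
    """
    synthetic = generate(n_synthetic)                              # (n, d)
    # Squared Euclidean distance from every candidate to every synthetic sample.
    d2 = ((candidates[:, None, :] - synthetic[None, :, :]) ** 2).sum(-1)
    return -d2.min(axis=1)                                         # closer => higher score

def infer_membership(scores, threshold):
    """Declare membership when the score exceeds a calibrated threshold."""
    return scores > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for a trained generator: it samples near the "training" mode.
    fake_generator = lambda n: 0.1 * rng.standard_normal((n, 8))
    members = 0.1 * rng.standard_normal((5, 8))        # in-distribution records
    non_members = 2.0 + 0.1 * rng.standard_normal((5, 8))   # out-of-distribution records
    cands = np.vstack([members, non_members])
    scores = membership_scores(fake_generator, cands, n_synthetic=2_000)
    print(infer_membership(scores, threshold=np.median(scores)))
```

A white-box variant would typically exploit internal signals instead, such as the encoder's reconstruction error or the discriminator's confidence on the candidate record, rather than only drawn samples.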

Access

This thesis is only available for download to the SIUC community. Current SIUC affiliates may also access this paper off campus by searching Dissertations & Theses @ Southern Illinois University Carbondale from ProQuest. Others should contact the interlibrary loan department of their local library or use ProQuest's Dissertation Express service.