Proposing a Data Privacy Impact Assessment (DPIA) Model for AI Projects under U.S. Privacy Regulations
Abstract
The rapid adoption of artificial intelligence (AI) across industries such as healthcare, finance, and technology has amplified concerns about data privacy and regulatory compliance. Current methodologies for conducting Data Privacy Impact Assessments (DPIAs) often fail to address the unique challenges posed by AI systems, including algorithmic bias, data diversity, and opacity. This paper proposes a tailored DPIA model designed to navigate the complexities of AI projects under U.S. privacy regulations, including the California Consumer Privacy Act (CCPA), the Health Insurance Portability and Accountability Act (HIPAA), and the Gramm-Leach-Bliley Act (GLBA). The model integrates key components such as risk identification, stakeholder engagement, transparency, fairness, and robust data protection measures. Hypothetical and real-world case studies from healthcare, finance, and technology demonstrate the framework's applicability and its effectiveness in addressing compliance and ethical concerns. Practical recommendations are provided for policymakers, organizations, and AI practitioners to foster responsible innovation. The paper concludes by identifying future research directions, emphasizing the need to adapt the framework to emerging AI technologies and global regulatory standards.
How to Cite This Article
Grace Annie Chintoh, Osinachi Deborah Segun-Falade, Chinekwu Somtochukwu Odionu, Amazing Hope Ekeh (2024). Proposing a Data Privacy Impact Assessment (DPIA) Model for AI Projects under U.S. Privacy Regulations. International Journal of Social Science Exceptional Research (IJSSER), 3(1), 95-102. DOI: https://doi.org/10.54660/IJSSER.2024.3.1.95-102