Author

Prince Boadi

Date of Award

Summer 8-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Cyber Defense (PhDCD)

First Advisor

Austin O'Brien

Second Advisor

Cherie Noteboom

Third Advisor

Varghese Mathew Vaidyan

Abstract

As artificial intelligence (AI) systems rapidly assume responsibility for processing large volumes of sensitive personal data, organizations struggle to identify which privacy safeguards deserve immediate, sustained investment. This study develops a practitioner-informed framework that ranks and organizes privacy controls for AI environments, thereby bridging the persistent gap between regulatory mandates and implementation realities. Using grounded theory and thematic analysis of 152 qualitative responses drawn from experts in technology, healthcare, government, defense, and education, we coded interview data in ATLAS.ti to surface three interdependent control layers—Strategic Governance Controls, Operational Controls, and Technical Controls—that map closely to the NIST Privacy Framework and expand the People, Process, Data, and Technology (2PDT) model. Strategic Governance Controls emphasize executive-level accountability, ethics committees, and privacy-by-design mandates embedded as early as Step Zero of the NIST Risk Management Framework; Operational Controls translate these mandates into day-to-day routines such as accounting of disclosures, contractor oversight, privacy training, incident response, and iterative Privacy Impact/Risk Assessments; Technical Controls supply the enabling mechanisms, including data-quality assurance, encryption, anonymization, access control, continuous monitoring, and data minimization and retention practices. Expert rankings assign the highest criticality to data quality and integrity, privacy-enhanced design, and continuous monitoring, underscoring a consensus that reliable data and built-in privacy architectures are foundational to trustworthy AI.
The resulting framework delivers actionable guidance for developers, auditors, and policymakers: it clarifies which controls must be prioritized, demonstrates how socio-technical coordination sustains compliance, and offers concrete touchpoints for auditing AI systems against evolving standards such as GDPR, HIPAA, CCPA, and emerging U.S. AI-governance directives. By aligning technical measures with strategic oversight and operational accountability, the study provides a defensible roadmap for privacy-by-design in AI, reduces implementation uncertainty, and strengthens stakeholder trust in data-driven innovation.
