Artificial Intelligence with Data Protection Act - Part 1

Irshad JACKARIA

AI in DPA2017 (PART 1) - In every corporate “Fintech & AI” training and consultancy I’ve done since Oct 2019 (oh yes, we happen to be the pioneers), one recurring question arises: Can we feed customer data into AI models without breaching privacy laws?

The answer lies in the Data Protection Act 2017. Based on my analysis as at Nov 2025, I can say that it is reasonably well-equipped to govern the adoption of AI in Mauritius, primarily because it is closely aligned with the EU's GDPR, which was designed to be technology-neutral. The Act never uses the specific term "AI," but several of its key provisions directly regulate the associated processes & risks.

Here are the specific parts of the DPA 2017 that, in my view, are most immediately relevant to AI implementation:
1. Automated Individual Decision-Making and Profiling (Section 38)
This is the most direct & powerful part of the Act for regulating AI.
• What it says: The Act gives data subjects the right not to be subject to a decision based solely on automated processing (like an AI algorithm) if that decision produces legal effects (like a loan rejection) or similarly significant effects (like being sifted out of a job application process).
• Relevance to AI: This means a Mauritian organization generally cannot use an AI model to make a critical decision about a person without any human intervention. If they do, the individual has the right to:
o Be informed about the automated decision.
o Request human intervention.
o Express their point of view and contest the decision.
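The human-intervention requirement above can be sketched in code. This is a minimal, hypothetical illustration (all names and fields are mine, not from the Act): a gate that refuses to finalise a solely automated decision with legal or similarly significant effects until a human has reviewed it.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str              # e.g. "loan_rejected"
    significant_effect: bool  # legal or similarly significant effect?
    human_reviewed: bool = False

def finalise(decision: Decision) -> str:
    """Route significant automated decisions to human review
    instead of applying them directly (spirit of Section 38)."""
    if decision.significant_effect and not decision.human_reviewed:
        # The data subject retains the right to human intervention,
        # to express their point of view, and to contest the outcome.
        return "pending_human_review"
    return "final"

# Example: an AI-driven loan rejection cannot be final on its own.
auto_reject = Decision("MU-001", "loan_rejected", significant_effect=True)
print(finalise(auto_reject))   # pending_human_review
auto_reject.human_reviewed = True
print(finalise(auto_reject))   # final
```

The point of the design is simply that the "final" state is unreachable for significant decisions without the human_reviewed flag being set by an actual reviewer.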

2. Data Protection Impact Assessments (DPIAs) (Section 34)
• What it says: A controller must carry out a DPIA before starting any processing that is "likely to result in a high risk" to the rights & freedoms of individuals. The Act specifically calls this out for "new technologies."
• Relevance to AI: Implementing a new AI system, especially for profiling, large-scale surveillance (e.g., AI-powered CCTV), or processing sensitive data (e.g., in insurance or healthcare), would almost certainly be considered high-risk. The DPIA would force the organization to proactively identify, assess, and mitigate the privacy risks (like algorithmic bias) before the system goes live.
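The "DPIA before processing" logic can be sketched as a simple pre-deployment check. This is a hypothetical illustration only (the trigger names and criteria are mine, drawn from the risk factors mentioned above, not a definitive reading of Section 34):

```python
# Illustrative high-risk triggers for an AI deployment; the Act
# specifically flags "new technologies" as a factor.
HIGH_RISK_TRIGGERS = {
    "new_technology",
    "profiling",
    "large_scale_surveillance",
    "sensitive_data",
}

def dpia_required(processing_features: set[str]) -> bool:
    """A DPIA must be carried out BEFORE processing begins if any
    high-risk trigger applies to the planned system."""
    return bool(processing_features & HIGH_RISK_TRIGGERS)

# Example: an AI-powered CCTV rollout vs. routine payroll processing.
print(dpia_required({"new_technology", "large_scale_surveillance"}))  # True
print(dpia_required({"routine_payroll"}))                             # False
```

In practice the controller would document this assessment, not just compute it, but the gating idea is the same: the check runs before go-live, not after.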

3. Strict Rules on "Special Categories of Personal Data" (Section 29)
• What it says: The Act prohibits the processing of sensitive data (e.g., racial or ethnic origin, political opinions, religious beliefs, health data, genetic or biometric data) unless under very specific and strict conditions.
• Relevance to AI: AI models can sometimes infer sensitive information even from non-sensitive data. For example, an algorithm might infer someone's health status or political leanings from their online activity. This provision means organizations must be extremely careful that their AI models are not, even accidentally, processing this kind of data without an explicit legal basis.

#AICompliance #DataProtection #Mauritius #DPAMauritius #Irshad #IrshadJackaria #Jackaria