Artificial Intelligence and the Data Protection Act - Part 2

Irshad JACKARIA

AI in DPA2017 (PART 2) - In PART 1, we saw how specific sections of the Mauritian Data Protection Act 2017 directly govern AI. However, the true regulatory power of the Act lies in its foundational principles (Section 21), which create significant, and often underestimated, compliance duties for any AI system an organisation implements.

Here are the principles that, for me & for now, pose the greatest challenges to traditional, "data-hungry" AI:

Lawfulness, Fairness, and Transparency (Section 21(1)(a)): An organization must be able to explain to data subjects, in a clear way, how the AI is using their data and how it makes decisions.

The Challenge: This goes far beyond the "black box" problem. The "right to be informed" (Section 32) implies an explanation that is intelligible to the average person, not just a 120-page paper for data scientists. Furthermore, the "Fairness" principle is a landmine. An AI model can be built on 100% lawfully collected data but still produce discriminatory or biased outcomes (e.g., in loan approvals). This would be a breach of the "fairness" principle.
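To make the "fairness" point concrete, here is a minimal, hypothetical sketch of a bias check an organisation might run on its AI's decisions. The data, group labels, and the four-fifths (0.8) threshold are illustrative assumptions (the threshold is a well-known US employment-selection heuristic, not anything the DPA prescribes):

```python
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of lowest to highest group approval rate.
    Values far below ~0.8 suggest the outcomes warrant a fairness review."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: (group, approved?)
decisions = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

ratio = disparate_impact(decisions)  # 0.25 / 0.75 ≈ 0.33 — a red flag
```

The point: every record here could have been collected 100% lawfully, and the model would still fail this check — lawfulness and fairness are separate tests.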

Purpose Limitation (Section 21(1)(b)): Data collected for one purpose (e.g., "improving customer service") cannot simply be fed into a new AI model for a completely different purpose (e.g., "profiling customers for a new product launch") without a new legal basis.

The Challenge: This is the "secondary use trap" that many companies fall into. They assume that data collected for billing (Purpose A) can be freely re-used to train a new AI marketing model (Purpose B). The DPA makes it clear that Purpose B is a new processing activity requiring its own lawful basis (like new, specific consent).
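One practical way to avoid the "secondary use trap" is to gate every processing activity on a recorded lawful basis per purpose. This is a hedged sketch only — the registry, subject IDs, and purpose names are hypothetical, and real consent management is far richer:

```python
# Hypothetical consent registry: data subject -> purposes they consented to.
CONSENTS = {
    "subject-001": {"billing"},
    "subject-002": {"billing", "marketing_model_training"},
}

def may_process(subject_id: str, purpose: str) -> bool:
    """True only if this subject has a recorded basis for this exact purpose.
    Purpose B never inherits the basis that covered Purpose A."""
    return purpose in CONSENTS.get(subject_id, set())

# Billing data (Purpose A) cannot silently feed an AI marketing model (Purpose B):
assert may_process("subject-001", "billing")
assert not may_process("subject-001", "marketing_model_training")
```

The design choice matters: the check fails *closed*, so an unknown subject or an unregistered purpose simply cannot be processed.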

Data Minimisation (Section 21(1)(c)): This principle directly conflicts with the "data-hungry" nature of many AI models. The Act insists that organizations must only process data that is "adequate, relevant and limited to what is necessary."

The Challenge: This is a direct confrontation with the "big data" ethos of "collect everything, just in case." The DPA forces a paradigm shift to Privacy by Design, demanding that companies justify every single data point they feed an AI. The question must change from "Can we use this data?" to "Is it absolutely necessary to use this data for this one specific, defined purpose?".
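Privacy by Design can be enforced mechanically at the pipeline boundary: maintain a whitelist of fields justified as necessary for each defined purpose, and strip everything else before the data ever reaches the model. A minimal sketch, with hypothetical field names and purposes:

```python
# Hypothetical whitelist: each purpose maps to the fields an organisation
# has explicitly justified as "adequate, relevant and limited to what is necessary".
NECESSARY_FIELDS = {
    "credit_scoring": {"income", "existing_debt", "repayment_history"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields justified for this one specific, defined purpose."""
    allowed = NECESSARY_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "income": 42000,
    "existing_debt": 5000,
    "repayment_history": "good",
    "religion": "irrelevant to the purpose",   # collected "just in case" — dropped
    "browsing_history": ["..."],               # likewise dropped
}

clean = minimise(record, "credit_scoring")  # only the three justified fields survive
```

This inverts the default: instead of asking which fields to remove, every field must earn its place on the whitelist.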

Accuracy (Section 21(1)(d)): This is yet another, often unspoken, challenge of Gen AI. The Act demands that personal data be "accurate and, where necessary, kept up to date."

The Challenge: AI models, especially generative ones, can "hallucinate" — inventing plausible but false information about individuals (even though I always say that it hallucinated because of you... and your prompt). If an AI generates an incorrect profile (e.g., a false summary of a customer's credit history) and that profile is used in a decision, it is a clear breach of the accuracy principle.
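One hedged safeguard pattern: treat any AI-generated statement about a person as *unverified by default*, and block it from driving decisions until a human has checked it against a source of record. The class and field names below are illustrative assumptions, not anything the Act or any library mandates:

```python
from dataclasses import dataclass

@dataclass
class GeneratedClaim:
    """A statement about a data subject produced by a generative model."""
    subject_id: str
    text: str
    verified: bool = False  # must be confirmed against a source of record

def usable_in_decision(claim: GeneratedClaim) -> bool:
    """AI output about a person may only feed a decision once verified."""
    return claim.verified

claim = GeneratedClaim("subject-001", "Customer defaulted on a loan in 2021")
assert not usable_in_decision(claim)  # blocked: unverified hallucination risk

claim.verified = True                 # a human checked the actual credit file
assert usable_in_decision(claim)
```

The key is the default: accuracy is something the organisation must establish, not something the model's fluent output is allowed to imply.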

#AICompliance #DataProtection #Mauritius #DPAMauritius #AIforBusiness #GDPRAlignment #AIandLaw #IrshadJackaria