Artificial Intelligence with Data Protection Act - Part 3

Irshad JACKARIA

AI in DPA2017 (PART 3) - In Parts 1 & 2, we explored the foundational principles and heavy compliance mechanisms facing organizations deploying AI. Now, as AI systems become more autonomous & opaque, the rights guaranteed to us under the DPA become our only shield.

1. The Right of Access vs. The "Black Box" (Section 36)

Everyone has the right to know if their data is being processed & to access that data in an "intelligible form" (Section 36(1)(b)).

The Mauritian Challenge: In traditional IT, you print a database record. In AI, data is transformed into complex mathematical weights within a neural network. If an AI predicts a customer is a "high churn risk," does the customer have a right only to the raw data they provided, or also to the AI-derived insight (the risk score)?

The Thought-Provoking Reality: The requirement for an "intelligible form" is the sticking point. If an organization cannot explain how the AI used specific data points to arrive at a decision due to the "black box" nature of the model, it may be failing to provide access in an intelligible manner. This challenges companies to invest in Explainable AI (XAI) not just for ethics, but to fulfil a basic legal right.
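To make this concrete, here is a minimal sketch of what an "intelligible form" could look like for a simple, inherently explainable model. The churn model, its feature names, and its weights are all hypothetical illustrations, not taken from any real system: the point is that a per-feature breakdown of the score is something a data subject can actually read.

```python
import math

# Hypothetical churn model: a logistic regression with illustrative
# weights and feature names (not from any real deployment).
WEIGHTS = {"months_inactive": 0.8, "support_tickets": 0.5, "tenure_years": -0.6}
BIAS = -1.0

def churn_score(customer: dict) -> float:
    """Overall churn-risk probability between 0 and 1."""
    z = BIAS + sum(WEIGHTS[f] * customer[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(customer: dict) -> dict:
    """The 'intelligible form': each data point's weighted contribution
    to the decision, readable by a human rather than buried in the model."""
    return {f: WEIGHTS[f] * customer[f] for f in WEIGHTS}

customer = {"months_inactive": 4, "support_tickets": 3, "tenure_years": 1}
print(f"risk score: {churn_score(customer):.2f}")
for feature, contribution in sorted(explain(customer).items(),
                                    key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.1f}")
```

For a deep neural network there is no such clean decomposition, which is exactly why post-hoc XAI techniques (feature attribution, surrogate models) become a compliance investment rather than a nice-to-have.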

2. The Right to Rectification vs. The "Retraining Dilemma" (Section 37)

Individuals have the right to have inaccurate personal data rectified without undue delay.

The Mauritian Challenge: If an AI makes a decision based on inaccurate historical data, correcting the database record for that one person is easy. But it does not fix the algorithmic bias the model learned from that bad data.

The Thought-Provoking Reality: True rectification in the AI era might require more than editing a field. If a model is demonstrably biased due to flawed training data, does DPA compliance require the organization to retrain or significantly modify the expensive AI model? And what about decisions already made based on the inaccurate data? The duty to rectify may extend to revisiting past automated decisions.
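A toy sketch makes the retraining dilemma visible. The data and the "model" below are deliberately trivial illustrations (a baseline risk score computed as an average), but they show the core problem: correcting the stored record changes nothing about a model that has already learned from the bad value.

```python
# Illustrative records only -- not real data. Record 1 is inaccurate.
records = [
    {"id": 1, "defaults": 5},   # wrong: should be 0
    {"id": 2, "defaults": 1},
    {"id": 3, "defaults": 2},
]

def train(data):
    """A trivially simple 'model': the average number of defaults,
    applied as a baseline risk score."""
    return sum(r["defaults"] for r in data) / len(data)

model = train(records)            # the model learns from the bad data

records[0]["defaults"] = 0        # Section 37: rectify the database record
print(model == train(records))    # False -- the trained model still reflects
                                  # the inaccurate value

model = train(records)            # true rectification: retrain on clean data
```

For a production-scale model the final line is the expensive part, and past decisions made with the biased score remain untouched either way.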

3. The Right to Object & Erasure vs. "Machine Unlearning" (Sections 39 & 40)

The "right to be forgotten" (Erasure) and the right to object based on legitimate interests are powerful tools for individuals.

The Mauritian Challenge: How does an AI "forget"? If your data was used to train a massive Large Language Model (LLM), your information is now embedded in the model's fundamental parameters. You cannot simply delete it the way you would remove a row from an Excel sheet.

The Thought-Provoking Reality: "Machine unlearning" is a monumental technical challenge that few companies have solved. If an organization cannot guarantee the erasure of data from its AI's memory, it may need to cease using that model entirely to comply with a valid erasure request.
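One real mitigation that has been proposed in the research literature is shard-and-retrain ("SISA"-style) unlearning: train one sub-model per data shard and combine their outputs, so an erasure request forces retraining of only the affected shard rather than the whole model. The sketch below uses toy data and a toy sub-model (a mean) purely to illustrate the mechanism:

```python
# Hypothetical sharded training data (illustrative names and values).
shards = [
    [("alice", 12.0), ("bob", 18.0)],
    [("carol", 30.0), ("dan", 20.0)],
]

def train(shard):
    """Toy sub-model: the mean of the shard's target values."""
    return sum(y for _, y in shard) / len(shard)

models = [train(s) for s in shards]

def predict():
    """Ensemble prediction: average of the per-shard sub-models."""
    return sum(models) / len(models)

# Section 40 erasure request from "alice": drop her record, then retrain
# ONLY shard 0 -- shard 1's sub-model is untouched.
shards[0] = [(name, y) for name, y in shards[0] if name != "alice"]
models[0] = train(shards[0])

print(predict())   # alice's data no longer influences any trained parameter
```

The design trade-off is cost versus guarantee: sharding caps the retraining bill per erasure request, but for a monolithic LLM trained in one pass no such cheap, provable "forget" operation exists yet.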

At Knowledge of the ART and Cover & Above, we don't just talk about these challenges; we are the first to align our AI seminars & projects fully with the realities of the DPA 2017.

#AICompliance #Mauritius #DPAMauritius #AIandLaw #IrshadJackaria