Kartik Hosanagar on Aadhaar and the AI Conundrum

Algorithms and the artificial intelligence that underlies them make a staggering number of everyday choices for us. In his book, A Human’s Guide to Machine Intelligence, Kartik Hosanagar draws on his own experiences designing algorithms professionally, as well as on examples from history, computer science, and psychology, to explore how algorithms work and why they occasionally go rogue, what drives our trust in them, and the many ramifications of algorithmic decision making.

He examines episodes like the fatal accidents of self-driving cars; Microsoft’s chatbot Tay, which was designed to converse on social media like a teenage girl, but instead turned sexist and racist; and even our own common, and often frustrating, experiences on services like Netflix and Amazon.

Here’s the author’s perspective on the application of AI to Aadhaar data:


Artificial Intelligence (AI) is ushering in innovations around the world, and India is no exception. Fashion retailer Myntra has rolled out AI-generated apparel designs as part of its Moda Rapido and Here and Now brands. Gurgaon-based GreyOrange Robotics is deploying robots to manage and automate warehouses. Companies like Flipkart are working to integrate voice into their shopping experiences. Modern AI thrives on data, and the granddaddy of all relevant datasets in India might well be Aadhaar, the world’s largest biometric identification system, with 1.2 billion records about citizens.

AI has many potential applications for such data, including detecting bureaucratic corruption in the way government funds are disbursed to citizens, as well as identifying tax fraud by citizens themselves. Fintech startups are exploring how to use data from Aadhaar and the broader India Stack to make credit-approval decisions and enable financial inclusion for individual citizens as well as SMEs.

While there are several potential benefits, there are undoubtedly many challenges with an initiative like Aadhaar. The most obvious one relates to data security and the privacy of citizens, especially now that the scope of Aadhaar has expanded well beyond the original objective of plugging leaks in welfare schemes to include many more aspects of citizens’ social and financial lives. Aadhaar has therefore been the subject of several rulings by the Supreme Court of India, and the many nuances of this debate have previously been discussed in this outlet. But there will also be a new kind of challenge as we mine the data and make more decisions using modern AI. It will be tempting to apply AI to many new areas, including tax compliance, real estate, credit approvals, and more.

In the U.S., there have been documented instances of AI bias. One well-known example was the use of algorithms to compute risk scores for defendants in the criminal justice system. These scores are used to guide judges and parole officers in making sentencing and parole decisions, respectively. An analysis in 2016 showed that the algorithms had a race bias: they were more likely to falsely predict future criminality in black defendants than in white defendants. Similarly, there have been examples of gender bias in resume-screening algorithms and race bias in loan-approval algorithms. There will be similar concerns in India, perhaps heightened by lax regulatory oversight by governments and poor compliance by firms. As AI systems make decisions about which loan applications to approve, will they be susceptible to humans’ gender, religious, and caste biases? Will algorithms used to catch criminal behaviour share these prejudices? Might resume-screening algorithms prefer certain castes and communities, just as some human interviewers do?
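To make the idea of “falsely predicting future criminality” concrete, here is a minimal sketch in Python of the kind of check such an analysis involves: comparing false-positive rates, i.e. how often people who did not go on to reoffend were nonetheless flagged as high risk, across two groups. The records and group labels below are invented purely for illustration; this is not the actual 2016 analysis or its data.

```python
from collections import defaultdict

# Each record: (group, predicted_high_risk, reoffended). All values are made up.
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
    ("group_b", False, False), ("group_b", False, True),
]

false_positives = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

# A large gap between the groups' rates is the kind of disparity an audit would flag.
for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate = {rate:.2f}")
```

On this toy data the check reports a much higher false-positive rate for one group than the other; an algorithmic audit of the sort discussed below would treat such a gap as a warning sign worth investigating.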

Given the many challenges posed by algorithmic decisions based on large-scale data about citizens, I do believe that India needs clear regulations on data privacy and on the automated decisions that corporations and governments can make based on such data. In the EU, GDPR gives consumers the right to access the data that companies store about them, to correct or delete such data, and even to limit its use for automated decisions. It bans decisions based solely on the use of “sensitive data,” including data regarding race, politics, religion, gender, health, and more. It also includes a right to explanation for fully automated decisions. Essentially, it mandates that users be able to demand explanations of the algorithmic decisions made for or about them, such as automated credit-approval decisions. Many proposals for privacy protection in the U.S. use GDPR as a template; the California Consumer Privacy Act (CCPA), for example, is often referred to as GDPR-Lite.

As the Aadhaar effort has succeeded in creating a large biometric identification programme, we will soon enter a new phase in which companies and governments try to build a layer of intelligence on top of the data to drive automated decisions. It is time to create some checks and balances. In my book A Human’s Guide to Machine Intelligence, I have proposed an algorithmic bill of rights to protect citizens when algorithms are used to make socially consequential decisions. The purpose of these rights is to offer consumer protection at a time when computer algorithms make so many decisions for or about us.

The key pillars behind this bill of rights are transparency, control, audits, and education. Transparency is about clarity in terms of inputs (what the algorithm knows about us), performance (how well it works), and outputs (what kinds of decisions it makes). Another important pillar is user control: algorithm designers should grant users some degree of control over how an algorithm makes decisions for them. It can be as simple as Facebook giving its users the power to flag a news post as potentially false; it can be as dramatic and significant as letting passengers intervene when they are not satisfied with the choices a driverless car appears to be making. I have also proposed that companies put an audit process in place that evaluates algorithms beyond their technical merits and also considers socially important factors such as the fairness of automated decisions. Lastly, we need more informed and engaged citizens and consumers of automated decision-making systems. Only by assuming this responsibility can citizens make full use of the other rights I have just outlined.

Taken together, this algorithmic bill of rights will help ensure that we can harness the efficiency and consistency of automated decisions without worrying that they will violate social norms and ethics.


Kartik Hosanagar is the John C. Hower Professor at the Wharton School of the University of Pennsylvania, where he studies technology and the digital economy. He is the author of A Human’s Guide to Machine Intelligence.

 
