By Kam Sandhu @KamBass

On 24th August, a nine-judge bench of the Supreme Court ruled that privacy is a fundamental human right, in a landmark case against the Indian government. Citizens and activists are fighting the dramatic expansion of the government’s biometric ID scheme (Unique Identification (UID), or Aadhaar), which threatens to become the most invasive mass surveillance project ever undertaken.

Since its launch in 2010, UID has seen over a billion Indian citizens enrolled onto what is now the world’s largest biometrics database. Initially promoted as free, voluntary and in aid of the ‘needy’ and ‘undocumented’, to ease their access to state help, the 12-digit identification scheme has become increasingly mandatory. Despite a previous Supreme Court ruling ordering that mandatory enrolment be restricted to six select distribution services, UID is now required to access rehabilitation, HIV therapy, children’s midday meals and over 50 other services. Additional details linked to the Aadhaar card include marriage registry entries, mobile phone numbers, tax filing records, scholarships and more.

Citizens have been left in a Catch-22: forced to hand over biometric details (including fingerprints, facial recognition data and iris scans) to an expanding system they cannot exit, fast creating a level of mass surveillance not seen before. The government, meanwhile, has neglected its responsibility for the data, ignoring security warnings amidst several serious leaks involving the data of millions of citizens.

Simultaneously, Prime Minister Narendra Modi has made light work of liberalising the data. A 2016 bill allowed corporate and individual use of Aadhaar as verification for any purpose, and 2017 brought ‘Aadhaar Pay’ – a cashless payment platform allowing peer-to-peer transactions using UID.

The access granted to private companies, and the increasing number of data points linked to UID, form the remaining part of the battle at the Supreme Court, where more than 20 other petitions from activists remain. Despite winning a historic round, there is a long way to go, and the outcome could set an important precedent for digital rights across the world.

The Entrance of FinTech

Aadhaar Pay signalled the government’s eagerness to move people onto digital payment platforms. The opportunity could be positive in a country where millions hold no bank account, but the covert methods used to effect this transition are a cause for concern. ‘It is complete coercion,’ says Usha Ramanathan, a human rights lawyer and leading voice against the ‘function creep’ of UID. Ramanathan described Aadhaar as the ‘marketing con job of the century’ in 2014, and has documented fervent expansion and data mining under the scheme.

This coercion was further revealed when Aadhaar Pay was promoted as a post-demonetisation strategy. Demonetisation saw the government remove, overnight, the two largest notes in Indian currency in November 2016 – amounting to 86% of cash in a country where more than 90% of transactions are made this way. According to Modi, it was a tough stance on tackling India’s black money problem, and he was credited as such. However, the narrative quickly changed to introducing platforms such as Aadhaar Pay. On August 31, the Reserve Bank of India announced that 99% of the notes had found their way back into the system, branding demonetisation a failure – if, of course, black money was really the target.


FinTech companies are hoping to challenge the traditional financial world order (think cashless societies, virtual banks, peer-to-peer transactions). They currently traverse an unregulated environment, as do the data brokers compiling data points on almost every consumer on the planet. From this they are able to create credit ratings of us using new kinds of information. For example, Facebook, which aims to enter the FinTech market, added peer-to-peer payments to FB Messenger, and builds assessments of users from semantics, images and social networks (Facebook recently patented the right to use social circles in its credit ratings). These financial assessments are also unregulated. Traditional financial institutions have rules around what data they can use and what information they can ask from you, while a company like Facebook is able to make these assessments from data points we have no idea about, through algorithms we don’t understand.

This leaves ample room for discriminatory ratings and allows companies to secretly silo individuals into certain categories (e.g. low net worth), opening the door to more predatory behaviour. Government programmes like Aadhaar can make these assessments even simpler for companies, and they are already eyeing up the possibilities. Speaking at India’s Economic Startup awards last month, V Vaidyanathan, Chairman of Capital First – a non-banking financial institution – said: ‘The penetration of Aadhaar coupled with the use of AI has created a system where it is now possible to evaluate a customer within seconds and give him or her a loan.’

This power reverses the kind of liberatory choice technology was meant to create, and the exploits of lawlessness are well trodden by the corporate tech crowd. ‘Think that one of the Facebook founders, Peter Thiel, set up PayPal to evade financial regulations and made millions as a result. So they have always been into financial regulations,’ explains sociologist Beverley Skeggs, who completed research on Facebook’s tracking capabilities earlier this year. The condition of opaque transparency, where our lives and data are mined by a covert elite, is ‘the biggest political issue of our times’, according to Skeggs. And it threatens to create greater inequality everywhere.

In a recent Bloomberg column, Cathy O’Neil, author of the algorithm exposé ‘Weapons of Math Destruction’, discussed big data and the health insurance industry. She prophesied that, with access to such data, the industry may insure only the healthy, at higher prices, while pricing out the ‘sick’ – rendering the industry useless to society and defeating ‘the risk-pooling purpose of insurance’. It would make plenty of money for CEOs and corporations, however.

Unfettered access to who we are and how we behave, for financial gain, by corporations tethered to the pursuit of the bottom line at all costs, presents new threats in the digital age. At the same time, technology is able to use its allure as an ultimate solution to further penetrate our lives. Our blind faith in these ideas must end, explains O’Neil:

‘Algorithms are opinions embedded in code. It’s really different from what you think most people think of algorithms. They think algorithms are objective and true and scientific. That’s a marketing trick. It’s also a marketing trick to intimidate you with algorithms, to make you trust and fear algorithms because you trust and fear mathematics. A lot can go wrong when we put blind faith in big data.’

While the offer of digital ID may, at first glance, elevate the status of the poor, its supposed liberation is steeped in the cultural marginalisation of the undocumented – pursued by governments, but boiled down to derision for not holding an identity in the eyes of the state. When the Aadhaar card was first introduced, some were confused at the relevance of an ID as a solution to problems of work, jobs and schools – the usual symbols of ’empowering the poor’. ‘This card cannot do anything,’ one resident told The Hindu. This is worth noting as the UN embarks on a biometric ID scheme, developed with Accenture and Microsoft, aimed at digitising 1.1bn undocumented people. The ID will allow refugees to access aid they qualify for, according to the UN – reminiscent of the original framing of Aadhaar. Would biometric ID have jolted the inertia of the refugee crisis? Unlikely, but Accenture, also one of the contractors on the UID technology, has claimed the moral high ground: ‘Digital ID is a basic human right,’ its Managing Director told Fortune.

Further, these ID schemes can brand a status on recipients forever. India is still a country dealing with its divisions and its caste system, and is currently under the power of a man seemingly indifferent to violence against Muslims at home and abroad. Many individuals have reason to maintain their anonymity. For example, sex workers seeking rehabilitation must surrender themselves to a system they cannot opt out of in order to access services. They ‘want to bury their identity, and what they are threatened with [is] tagging them with this identity in perpetuity,’ Ramanathan explains.

The developments in India are ‘one of the most significant revelations about the stealth by which companies enter into people’s personal lives,’ says Skeggs. ‘What is worse is they use our personal information to make a profit from us without our knowledge. How is this allowed to happen? This is more than digital rights, this is a basic human right – not to have our information sold without our knowledge.’ 

‘We’ll Colonise Ourselves’

If data is indeed the world’s new oil, FinTech, data companies and the state are competing for global dominance in the resource war, surreptitiously siphoning supply from their own populations.

Attorneys acting for the government in the Supreme Court case said privacy was a ‘Western’, ‘elitist’ value, which cannot be elevated in India as it is a ‘poor, developing’ country.

Meanwhile in the US, citizens are plundered through the ‘trade-off fallacy’. A 2015 report by Joseph Turow explained that marketers in America justified ‘their data-collection practices with the notion of trade-offs, depicting an informed public that understands the opportunities and costs of giving up its data and makes the positive decision to do so’, while in reality there is a knowledge failure among the public about how their data could be exploited. The privacy terms and conditions that marketers argue should be read by the public are legal documents not intended for the public to understand, according to the lawyers who write them. Rather than being informed, Turow concludes, most Americans are in a condition of ‘resignation’ over their data being mined.

These examples demonstrate how different arguments are used to extract data in differing climes, to the benefit of the same powers.

While benefits abound for the surveillance state in India, using US-origin biometric service providers – who operate as defence and intelligence contractors at home – has extended the dangers of data mining abroad.

Security for whom?

Ramanathan previously raised concerns about the companies contracted to provide UID technology. L1 Identity Solutions, for example, advertises its work with the US Central Intelligence Agency (CIA) on its site. It has also lobbied the US government as a private homeland security contractor, raising fears about data sharing. ‘All our information is going to be handed to them,’ Ramanathan told one interviewer.

All of these fears have received staggering confirmation. On 30th August, The Times of India reported that UIDAI (the UID Authority of India – administrator and ‘custodian’ of the data) gave companies like L1 Identity Solutions and Accenture ‘full access to unencrypted data with the permission to store it for seven years’, contrary to public statements that it was inaccessible. Further, a Wikileaks release on the same day as the Supreme Court’s privacy ruling suggests this information was funnelled to the US deep state long ago.

The CIA’s ExpressLane project, revealed by the documents, was designed to exfiltrate biometric data from liaison services such as the National Security Agency. The data is extracted under the cover of a software upgrade, with core components ‘based on products from Cross Match, a US company specialising in biometric software for the law enforcement and Intelligence Community.’

Cross Match became a certified supplier of biometric devices for the Aadhaar program in 2011.

Meanwhile in the US, increasing use of biometrics at borders raises the question of whether the Indian identity programme – advertised as helping and liberating the undocumented on the periphery of Indian society – may now be weaponised to identify them as migrants at Western borders. In January, The Intercept reported that Trump’s homeland security choices signalled increasing use of the technology for this reason. Silicon Valley, only too keen to oblige, jumped into the ‘biometric gold rush’ in a little-reported push for the technology’s use in immigration, demonstrating once again the willingness of corporation and state to work together covertly for their own ends – often security for the latter and finance for the former. January also saw tech workers protesting outside the headquarters of Palantir – a data analysis company founded by Silicon Valley giant and Trump supporter Peter Thiel – against the potential building of software to identify Muslims or ‘illegal immigrants’.

Beyond their capability: Mistakes Don’t Matter

India’s ID card is the first time biometrics have been used on such a scale. The technology was abandoned separately by the UK and the US before 2010, due to unreliability and intrusiveness. While the technology has advanced rapidly since then, there are still mistakes and, in some areas, huge failure rates. Complete reliance on this technology to provide the solution, as with algorithms, causes secondary problems, leaving humans as collateral damage at the mercy of machines. This is coupled, of course, with the determination of state and corporation to install their new project at whatever cost.

In India, this has resulted in numerous complaints of wrong data being assigned and of identities being registered by other people, leaving some excluded from the ID system altogether. Further, the bills brought in by Modi restrict individuals’ access to their own data – making corrections even more difficult. This is mirrored in digital financial assessments, where investigations have revealed that sometimes more than half of the details held are incorrect, yet the credit ratings are difficult to correct due to the secretive nature of the companies and their methods. Despite these problems, vested interests urge on the spread of biometrics and of these assessments, further degrading our power against a system that can wrongly categorise us and affect our standing in life and our access to services.

What next?

The win on privacy in India under the current conditions is a huge victory and will force a reassessment of Aadhaar, but there is a long way to go. India serves as an example of the speed, force and disregard with which state and corporation can act under the guise of the ‘technological solution’.

These abuses are taking place worldwide, which is why India’s Supreme Court challenge against its own state will have an impact on the advancement of digital rights elsewhere – and warrants all of our attention.

Our data – and, entwined in it, our identities and lives – has been subject to a dearth of regulation and protection as the companies benefitting have ballooned. While Ramanathan explains that the ‘market has been sold as the solution’, it is a market with no protection and no consumer power – rendering the transaction alien and forever to the disadvantage of the consumer. The ‘Trade Off Fallacy’ report highlights: ‘Note that the notion that people’s lack of knowledge leaves them vulnerable to giving up their information to advertisers and retailers has not moved regulators in the United States to action….They are also free to construct profiles of individuals with the data, to share the data, and to treat people differently in the marketplace based on conclusions drawn from those hidden surveillance activities.’

It is a situation that works for tech companies. A recent survey found that while the tech elite feel they are liberal, they dislike labour unions and regulation – which can be read as the rights of their workers and their consumers. In an unregulated market, these companies have written the justification and culture of the environment in their own interests, summed up in Zuckerberg’s reported 2010 outburst: ‘Privacy is dead on Facebook, get over it.’

We need a new assessment of digital rights worldwide, one that does not leave open the opportunity for differing levels of abuse depending on the economic position of your country. This will no doubt be an ongoing process, but it is vital to breaking the inertia to which we are quietly subservient.

Tough regulation that reasserts the power of the consumer and the human being against corporations – with Internet giants in our sights – seems key. This too could have secondary effects on other areas of lived experience, as the last few years have all but eroded language placing human rights and privacy at the centre. The creation of global minimum privacy and regulation standards would prohibit territory-by-territory exploitation.

Financial Times’ Martin Sandbu recently suggested revisiting anti-trust laws used to channel popular resistance to the ‘stranglehold of large oil, industrial and railway companies’ at the turn of the century, and using regulation to make Internet platforms ‘behave in the public interest.’

Rights activist Aral Balkan suggests effectively regulating company abuses while ‘funding/encouraging decentralized, free/open, interoperable [alternatives]’ to transition from the Internet of Things to the Internet of People. This transition is warranted – Turow explains emphatically, ‘this is about dignity.’

All eyes on India.

Look out for our upcoming interview with Usha Ramanathan.

You can read more about the Trading Faces Project here.