Digital IDs can make our lives easier. When done the wrong way, however, digital identity systems open the gates to mass surveillance of everyday user activity and the abuse of our most sensitive data. From the beginning, we have therefore called for a safe digital ID that protects users’ rights and cannot be forced on anyone. This only works with strong technical and legal protections for privacy, predictability, inclusiveness, accessibility and the voluntary nature of these systems. We built alliances, published analyses and called on lawmakers to make the right decisions – at public events, with joint letters and in person in the European Parliament.


The term Digital Public Infrastructure (DPI) encompasses digital identity and also includes digital payments and data exchange. Such systems provide a digital platform that’s created by or on behalf of the state and that is used by people and companies to exchange data, identities, money, goods and services in a regulated way that provides certain legal assurances.


Risky Systems

With the creation of easy-to-use, cheap, legally binding digital identity platforms, we risk losing anonymity in many daily interactions. We call this risk over-identification, and it is amplified when digital identity systems are opened to the private sector. When we are constantly asked for our most sensitive health, financial or identity information, there is also the risk of over-sharing personal information. As these are general-purpose systems, able to reach into all areas of life – from shopping and doctor visits to public transport and online activity – observing user behaviour can create a panoptical situation in which the operators of the system know everything everyone does, all the time.

No digital system will fit all people. We should not forget the elderly, the less digitally literate, the privacy-conscious or stateless people. Therefore, we need to ensure that the law protects everyone who chooses not to use the system. There must be a right to opt out of digital identity systems without suffering negative consequences like higher prices or refusal of services. The legal and administrative framework also has to ensure that everyone has a right to a Digital Public Infrastructure and that no costs are attached to it.

For any DPI ecosystem to deserve people’s trust, it needs strong protections against bad actors and data-hungry companies. Companies and public sector entities should have to register their use cases and be limited to requesting only the information that is necessary for those use cases. Bad actors must be expelled from the ecosystem and their registration revoked. In many situations, placing the full burden of saying no on the shoulders of users would not be legitimate. Users need to be in control of their full transaction history, with means to exercise their data subject rights, including lodging a complaint against a company and demanding the deletion of their data. Whenever a request for information is received, the user needs to know who is asking (symmetrical identification).

Biometrics are very often used to authenticate users of a digital identity system. Biometric data is particularly sensitive: unlike a password, it cannot be changed, and not everyone has all fingers, eyes or other characteristics. Therefore, biometrics should not be a precondition for using Digital Public Infrastructures. Uploading such data to the cloud requires the user’s prior explicit consent, and the data needs to be specially protected.


Our work on this issue started in 2017 with a legal analysis of the Austrian national digital identity system. With the COVID-19 pandemic, we were thrust into a huge debate about public digital systems dealing with the health emergency. The most prominent among them were the contact tracing systems and the EU Digital COVID Certificates (QR codes for vaccination, recovery and testing), both exemplary in their strong privacy safeguards, which earned them trust from large parts of society. From 2021 to 2024 we were deeply involved in the creation of the European Digital Identity System (eIDAS reform, EUID Wallet). Starting in 2023 we also worked on the Digital Euro and the Right to Cash legislation. In February 2024 our Executive Director was appointed Chair of the Governance Working Group of the DPI-safeguards project of the Tech Envoy of the United Nations. This project aims to create global safeguards for digital public infrastructures.


United Nations Global Safeguards Framework

Right after we concluded our work at the EU level, the UN started its project to create a global framework to make digital public infrastructures safe. We participate in this project through our Executive Director, who was appointed Chair of the Governance Working Group in February 2024. Such systems have become common around the world, and very often we see them abused against vulnerable parts of society or deployed with negative privacy consequences for the whole population. Since global institutions like the World Bank and foundations are promoting digital public infrastructures and also pushing for their interoperability, we hope that this project can highlight the risks these systems bring and strengthen the safeguards around them. The final report is expected by the end of 2024, and some sections about digital public infrastructures are expected to be added to the Global Digital Compact.


European System (eIDAS, EU Digital Identity Wallet)

The European Digital Identity Wallet is a powerful, general-purpose technology for the identification, authentication and attribute verification of natural and legal persons vis-à-vis public authorities and private companies, online and offline. In practice, every EU country will provide such a wallet as a smartphone app with which their citizens can prove their name, date of birth, family status, financial situation, educational degrees or COVID-19 vaccination status to others in a legally binding way. Large online platforms like Facebook, Amazon or Google will also be required to support this system. E-government services, banks, energy providers and public transport services will likewise be obliged to use it. By 2030, the European Commission aims to have 80% of EU citizens using the system. Industry interest in this reform is huge, as companies no longer need to expend considerable resources verifying their customers (know your customer, KYC) themselves.
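The attribute proofs described above rest on the technique of selective disclosure: an issuer signs commitments to individual attributes, and the wallet holder reveals only the ones a verifier actually needs. The sketch below illustrates the idea with salted hash commitments in Python. It is a simplified illustration of the general technique (in the spirit of SD-JWT), not the actual eIDAS wallet protocol, and it omits the issuer's signature over the digests; all function names are ours.

```python
import hashlib
import secrets

def commit_attributes(attributes: dict) -> tuple[dict, dict]:
    """Issuer side: commit to each attribute with a random salt.
    In a real system, the issuer would sign the digests."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    digests = {
        k: hashlib.sha256(f"{salts[k]}:{k}:{v}".encode()).hexdigest()
        for k, v in attributes.items()
    }
    return digests, salts

def disclose(attributes: dict, salts: dict, keys: list) -> dict:
    """Holder side: reveal only the requested attributes, with their salts."""
    return {k: (attributes[k], salts[k]) for k in keys}

def verify(digests: dict, disclosed: dict) -> bool:
    """Verifier side: recompute digests for the revealed attributes only.
    Undisclosed attributes stay hidden behind their salted hashes."""
    return all(
        hashlib.sha256(f"{salt}:{k}:{v}".encode()).hexdigest() == digests[k]
        for k, (v, salt) in disclosed.items()
    )
```

With this structure, a wallet could reveal only a date of birth to a shop while keeping the name hidden, and the verifier can still check the revealed value against the issuer's committed digests.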




ID Austria

In 2017, the legal basis for the ID Austria system was created in national law. Back then, we already pointed out some of the flaws in the system and called for change. Many of these demands were later incorporated into systems built on top of ID Austria, but the underlying architecture remains at odds with privacy by design. The current debate is mostly about ID Austria becoming a mandatory system by forcing certain groups in society to use it. No technical system should be a precondition for obtaining essential government services or information from employers. We have collected such cases and brought them to the media’s attention in order to create pressure for a right to choose for everyone. You can read our analysis of ID Austria (DE) and several of the adjacent systems in our blog and in our media appearances.


Key Demands to make Systems safe(r)

  1. Every citizen or resident of a country has a right to obtain a digital identity free of charge. Use of the DPI is voluntary, and horizontal obligations protect persons who do not use the system from being excluded, denied goods or services, or otherwise disadvantaged in the private or public sector.
  2. A user interacting on a DPI system always knows the identity of the other party before personal information is exchanged. Who is asking makes a difference. Any category of information requested from a user must be listed in a public registry of all DPI use cases. Users can file complaints, and companies can be excluded from the DPI ecosystem.
  3. No personal information is shared without the user’s consent. A user can choose to comply with a request for information fully, not at all or partially by only selectively disclosing parts of the information they have been asked for.
  4. A privacy-by-design architecture prevents the operating authority of the DPI to obtain information about concrete user behaviour, without that user’s consent. Daily interactions on the DPI are invisible for the government and connected companies.
  5. A user interacting via the DPI with other parties is protected from tracking and profiling by privacy-enhancing technologies like pairwise-pseudonymous identifiers, zero-knowledge proofs and unlinkability. A user cannot be identified by a single unique, persistent identifier.
  6. Users have a right to use freely chosen pseudonyms not linked to their real identity whenever there is no legal obligation to identify themselves.
  7. All DPI components must be open source and available for public scrutiny. Taxpayer-funded DPI must be available under a free software licence.
  8. A full list of transactions has to be available to the user of the DPI. This includes the identity of all parties the user interacted with, any information shared and means to request deletion.
  9. Biometric authentication must not be a requirement for using DPI. There has to be a way to obtain a digital identity and use DPI without handing over biometric information. Storage of biometric information on a central server requires the user’s prior explicit consent. Biometric information has to be specially protected.
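The pairwise-pseudonymous identifiers named in demand 5 can be illustrated in a few lines of Python: deriving a separate identifier per relying party from a wallet secret means each service sees a stable ID for the returning user, but two services cannot link their records by comparing IDs. This is a minimal sketch of the general technique using HMAC, not the actual mechanism of any deployed wallet; the function name is ours.

```python
import hashlib
import hmac

def pairwise_id(wallet_secret: bytes, relying_party: str) -> str:
    """Derive a stable pseudonym that is unique per relying party.
    The same user gets the same ID at one service on every visit,
    but the IDs at two different services are unlinkable without
    knowledge of the wallet secret."""
    return hmac.new(wallet_secret, relying_party.encode(), hashlib.sha256).hexdigest()
```

A user logging in to an energy provider and a transport service would thus present two unrelated identifiers, even though both are derived from the same wallet.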