Privacy practices in the Indian technology ecosystem
A 2020 survey of the makers of products and services
2020 may well have heralded the decade of privacy. From leaks in video conferencing software to doubts over COVID-19 apps, privacy concerns have been front and centre. But while most commentary appears when things go wrong, one angle goes largely undiscussed: the attitudes of the people and organizations who make the technology we use daily. Their work today shapes our experience of tomorrow, yet we know little of their perspective.
Between April and December 2020, the Privacy Mode team at Hasgeek (henceforth “we” and “us”) surveyed the makers of tech products and services in India. We sought to understand whether privacy is a guiding factor in product development, or an afterthought in response to regulation and public sentiment.
We started by conducting interviews with experts, followed by focus group discussions. The learnings from these sessions helped frame hypotheses and the survey questionnaire. See our note on framing and review, and the detailed findings and analysis.
The survey finds that 90% of respondents believe their organization treats privacy as a fundamental right and attempts to realize that intent while building products and services. At the baseline, across all organization sizes, at least three in four respondents in authority roles report that their roles require them to think about privacy and compliance issues.
However, converting that intent into practice remains an onerous challenge. Small and medium-sized organizations fare worse than large organizations, but all three struggle with gaps in specialized personnel, budgets, processes, and community resources.
Some industry sectors are governed by regulatory regimes such as HIPAA, CCPA and GDPR. Responses within these sectors did not deviate greatly from the general responses. This suggests that adding regulatory pressure on small and medium organizations does not automatically change their internal product-development dynamics to make them more privacy-friendly.
Over a fifth of respondents across all organization sizes say they lack a peer group – both within their organization and outside it – with whom to discuss privacy concerns and find solutions. This points to a larger problem: even those who want to implement privacy-respecting features in products or services do not have adequate support or use cases to guide them. As organizations get larger, their access to external peer groups shrinks, indicating the development of an insular culture.
As one startup founder we interviewed said:
“Companies are doing a poor job communicating this (the need for user privacy and security)... I don’t think they are malicious. They’re lousy in their implementation and security, but there is no mal-intent or ill intent to begin with, they are lousy sometimes.”
The good news is that awareness about privacy and recognition of its importance as a practice – and as an end-user benefit – exists across leadership in large, medium and small organizations. The not-so-good news is that intent by itself is not sufficient to move organizations towards the ideal of protecting user privacy and user data.
As the data in this research shows, specialized personnel, budgets, processes and community resources have to be available in abundance if India wants a smooth transition towards a data protection regime. This is where the gap is.
Here are our recommendations for different stakeholders to move towards the privacy-preserving ideal, one step at a time.
Small and medium organizations struggle to establish a business model with repeatable unit economics; this is a paramount concern in the early stages. While there is intent to embed privacy practices in the product-development cycle, these organizations lack the skills and budgets to do so.
As our data indicates, adding regulatory pressure does not improve outcomes. On the other hand, regulation can increase the compliance burden, thereby adversely affecting small and medium organizations and turning them into non-viable businesses.
This research recommends that the Joint Parliamentary Committee (JPC) and the Data Protection Authority (DPA) for the Personal Data Protection (PDP) bill provide exemptions in the bill’s provisions for organizations below a certain revenue or fundraise threshold, and that the government invest in developing community resources to make the law feasible.
Across all the segments, including large organizations, software engineering practices have not matured enough to be ready for PDP compliance from day one. There is also a lack of clarity at the operational level on the implementation of privacy processes such as consent management, data-security audits, data-erasure requests, anonymisation, and purpose limitation.
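To make this operational ambiguity concrete, the sketch below shows one minimal way consent management, purpose limitation, and data-erasure requests could be wired together in code. This is our own illustration under stated assumptions – the `Purpose`, `ConsentStore`, and `PurposeLimitedRepository` names are hypothetical, not practices observed in the survey:

```python
from dataclasses import dataclass, field
from enum import Enum


class Purpose(Enum):
    """Purposes a user can consent to; anything not listed is out of scope."""
    BILLING = "billing"
    ANALYTICS = "analytics"
    MARKETING = "marketing"


@dataclass
class ConsentStore:
    """Tracks which purposes each user has consented to."""
    _grants: dict = field(default_factory=dict)  # user_id -> set of Purpose

    def grant(self, user_id: str, purpose: Purpose) -> None:
        self._grants.setdefault(user_id, set()).add(purpose)

    def revoke(self, user_id: str, purpose: Purpose) -> None:
        self._grants.get(user_id, set()).discard(purpose)

    def erase(self, user_id: str) -> None:
        self._grants.pop(user_id, None)

    def allows(self, user_id: str, purpose: Purpose) -> bool:
        return purpose in self._grants.get(user_id, set())


class PurposeLimitedRepository:
    """Wraps storage so every read must declare a purpose (purpose limitation)."""

    def __init__(self, consent: ConsentStore) -> None:
        self.consent = consent
        self._records: dict = {}  # user_id -> personal data

    def write(self, user_id: str, record: dict) -> None:
        self._records[user_id] = record

    def read(self, user_id: str, purpose: Purpose) -> dict:
        if not self.consent.allows(user_id, purpose):
            raise PermissionError(f"{user_id} has not consented to {purpose.value}")
        return self._records[user_id]

    def erase(self, user_id: str) -> None:
        """Honour a data-erasure request: drop the record and all consent grants."""
        self._records.pop(user_id, None)
        self.consent.erase(user_id)


# Usage: a read for an unconsented purpose fails loudly.
store = ConsentStore()
repo = PurposeLimitedRepository(store)
store.grant("u1", Purpose.BILLING)
repo.write("u1", {"email": "u1@example.com"})
repo.read("u1", Purpose.BILLING)       # returns the record
# repo.read("u1", Purpose.MARKETING)   # would raise PermissionError
```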
Technical standards have to evolve – both from within and by knowledge-sharing at the community level – to address these issues. The JPC and DPA must provide at least two years’ time for organizations to become compliant with the standards prescribed in the PDP bill’s provisions.
Business organizations, especially early- and growth-stage startups, generally refrain from sharing internal knowledge and practices with other organizations. Insecurity about growth and organizational secrets, and pressure from investors to grow Intellectual Property (IP), have led to a culture of withholding knowledge, making it hard to develop capacity.
The structural capacity issues this research refers to – skill gaps and a lack of knowledge resources – can be resolved if leadership, legal, and data privacy officers in small and medium organizations become forthcoming about systematic best-practice and knowledge sharing. Journeyman documentation, which maps out these practices at operational and technical levels, will be more beneficial than broad directions.
While simplistic solutions may see widespread adoption because they substitute for missing capacity, they will hamper small and medium organizations’ ability to compete on a global scale, mainly because these organizations may have to invent and maintain different sets of practices to comply with standards such as GDPR and CCPA. This will increase the cost of compliance.
For investors, advisors and external stakeholders, as well as community organizations and founders of small and medium organizations, this research makes clear recommendations:
Build capacity by contributing to a shared knowledge pool beyond your organization, in the form of meetup groups and their knowledge archives.
Encourage thinking about privacy at the product and engineering levels, and allow your practitioners to gain public recognition for their work.
Take privacy-preserving ideas and practices from product-development cycles to external forums and policy makers, and encourage their adoption across the ecosystem.
Nadika Nadja is a researcher at Hasgeek. She has worked across advertising, journalism, TV & film production as a writer, editor and researcher.
Anand Venkatnarayanan has worked as a cybersecurity engineer for two decades. He is currently consulting editor for Privacy Mode and a non-executive director at Hasgeek.
Anish TP illustrated the report. Umesh PN, Zainab Bawa and Kiran Jonnalagadda edited. David Timethy was the project manager.
We would like to thank the following individuals from tech, legal, and policy backgrounds who provided feedback during different stages of the research. Their feedback helped the team fine-tune and bring rigour to the research process. While Hasgeek sought funding from organizations, the research itself was conducted – with full disclosure at all stages – independently and objectively, and the findings do not reflect any individual organization’s needs.
Both qualitative and quantitative methods were employed to conduct this research.
We finalized the broad hypotheses and research questions in April–May 2020, and set up interviews with experts in June–July 2020. Experts were shortlisted on the basis of business sector diversity, domain expertise and industry experience.
In the first round, six experts were interviewed.
Each interview was held via online conference call, and was recorded with the consent of participants. Interviews were transcribed for reference. A second round of expert interviews was held in October–November 2020 to develop a historical account of privacy, corporate governance and business imperatives, globally and in India; four interviews were conducted in this round.
Focus Group Discussions (FGD) were conducted to learn:
How the top-level hypotheses about resource allocation and operational priorities play out when building privacy-enhancing technologies.
The agency of individuals: their ability to influence and shape decisions about incorporating privacy features into products and about the governance of user data.
The participants’ personal views about privacy.
FGD groups were heterogeneous. Participants were shortlisted on the basis of experience (senior roles, tending towards management versus frontline execution), and with diversity in sector, gender, location and expertise. Six FGDs were conducted online, via video calls, with participants from paytech, fintech, SaaS, social networking, and health tech. FGDs were recorded and transcribed with the consent of the participants.
As the research progressed in June–August 2020, the high-level hypotheses were fine-tuned based on the learnings from qualitative research. The initial hypotheses from April–May 2020 required surveying middle management and junior employees to understand their skill and competence, and their ability to influence decisions for implementing privacy-enhancing technologies inside organizations. The qualitative interviews revealed that:
At the junior level, there is neither authority to influence change, nor competence, nor awareness. Their opinions do not reflect the organization’s intent or policy, and we will necessarily have to look at other ranks for information.
At the middle management level, there is no authority, but there can be competence and intent.
At the executive level, there is authority and some intent, but uncertain competency.
The findings from the qualitative research strongly suggested shifting the quantitative survey’s focus to executives and middle-to-senior management: to understand intent and processes (including hiring for privacy and compliance), and to ask about business imperatives, in order to determine how deeply privacy concerns were embedded in the organization’s overall culture. The quantitative survey was developed based on this renewed research direction.
“I deal with data scientists day in and day out, they think of themselves as algorithmic engineers, there are some latest and greatest algorithms and my job is to apply it to a given problem and they are interested in the business matrix around it. The degree to which value and judgement is applied is very limited, I’m talking about some of the senior most people. I’m not even talking about junior people, I track some of these things, the kind of advanced conversations that happen elsewhere rarely happens here. So this has to do with the larger training, the larger incentive structures and so on.”
—A founder of a machine learning startup, during an interview
Our initial survey consisted of approximately 20 questions, including a preliminary section that asked for information about the respondent – organization name, designation, nature of business – and a section about the organization’s privacy practices. We sought this information to classify respondents into those with authority and those without, to rank organizations based on their sizes, and to ensure we had a more diverse, representative sample. This section also asked respondents for personal information: their name, level of education, city, gender and caste group. However, personal questions were marked optional and respondents could skip them. See our note on why caste is relevant.
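As an illustration of this classification step, here is a sketch of how responses might be bucketed by authority and organization size. The column names, keyword list, and size cut-offs are our own hypothetical choices, not the ones used in the study:

```python
import pandas as pd

# Hypothetical survey export; column names are illustrative only.
responses = pd.DataFrame({
    "designation": ["CTO", "Engineer", "DPO", "Product Manager"],
    "employee_count": [12, 3500, 180, 45],
})

AUTHORITY_KEYWORDS = ("ceo", "cto", "founder", "dpo", "head", "director", "vp")

def has_authority(designation: str) -> bool:
    """Treat leadership and privacy-officer titles as authority roles."""
    return any(k in designation.lower() for k in AUTHORITY_KEYWORDS)

responses["authority"] = responses["designation"].map(has_authority)

# Bucket organizations by headcount; the boundaries here are invented.
responses["org_size"] = pd.cut(
    responses["employee_count"],
    bins=[0, 50, 500, float("inf")],
    labels=["small", "medium", "large"],
)

print(responses)
```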
We disclosed the survey’s sponsor upfront at all stages. Some initial respondents asked if survey data would be shared with the sponsor. To clarify this, a confidentiality and ethics statement was added, along with the research team’s contact details, in case respondents required further clarifications.
We made an effort to ensure the survey would take no longer than ten minutes to complete, to lower the risk of abandonment.
The research team revised the survey a total of four times, based on internal input and external feedback. We approached three experts – in fintech, law, and AI ethics respectively – to review the questionnaire. Based on their recommendations, we tweaked the questionnaire and added questions to capture more detailed demographic information from respondents. We then reached out to potential respondents – drawn from personal and professional networks – via email, as well as by sharing the survey on hasgeek.com and social media.
At all stages of the research process, we took steps to ensure that our own inherent biases did not reflect in the survey or influence the respondents in any way. Both the qualitative and quantitative parts of the research began with an ethical undertaking in which we disclosed the organizations and supporters involved in the survey, and provided information about the purpose of the research: what benefits or compensation respondents could expect, the guarantee of anonymity, and the publication of the findings.
Given that the objective of the research was to learn about privacy practices, the survey was designed to elicit information about actions taken or in effect in organizations, rather than the attitudes and beliefs of respondents. Discussion points in the qualitative interviews and FGDs were drafted to understand specific practices and to test hypotheses, but the hypotheses themselves were not shared with participants. The questionnaire for the quantitative part of the survey was designed to elicit actual practices, not desired states. Respondents remained largely unknown to the research team, and the survey was administered online, limiting the need for direct contact.
As mentioned in the sections on interviews, FGDs and survey design, respondents were drawn from both personal and professional networks, and the survey itself was shared on social media to get diverse participation. Participants for qualitative interviews and FGDs were shortlisted not just for their sectors and experience, but also to build a more representative sample from across the industry. Informed consent was obtained from all participants.
We analysed the survey results using an Intent, People, Process (IPP) framework, adapted from the Technology, Process, People and Culture framework of the Capability Maturity Model (CMM), which is commonly used – especially in tech organizations – to understand and assess an organization’s internal dynamics. In that model, improvements in capability are an outcome of improvements in these four areas.
Given that the survey was looking at how organizations approached privacy-respecting features of the products and services they were building – that is, the technology – the framework needed to be altered. We chose to analyse an organization’s intent – its stated objectives and policies towards privacy, and its budgetary allocation.
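As a sketch of how an IPP-style analysis might be operationalized on coded survey responses – our illustration only; the question identifiers, dimension mapping, and 0/1 coding below are hypothetical, not the study’s actual scheme:

```python
from statistics import mean

# Map survey questions to IPP dimensions; question IDs are invented.
IPP_DIMENSIONS = {
    "intent":  ["q_privacy_policy_stated", "q_budget_allocated"],
    "people":  ["q_dedicated_privacy_staff", "q_training_offered"],
    "process": ["q_consent_workflow", "q_data_erasure_process"],
}

def ipp_scores(answers: dict) -> dict:
    """Average the 0/1-coded answers within each dimension."""
    return {
        dim: mean(answers[q] for q in questions)
        for dim, questions in IPP_DIMENSIONS.items()
    }

# Example: one organization's coded responses (1 = practice present, 0 = absent).
org = {
    "q_privacy_policy_stated": 1, "q_budget_allocated": 0,
    "q_dedicated_privacy_staff": 0, "q_training_offered": 1,
    "q_consent_workflow": 1, "q_data_erasure_process": 0,
}
print(ipp_scores(org))  # {'intent': 0.5, 'people': 0.5, 'process': 0.5}
```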