Signal in Bangalore

Signal Foundation's President, Meredith Whittaker, and the Signal team talk about AI, encryption, and more.

About the public event

This public event is hosted in collaboration with the Signal Foundation, The Fifth Elephant and Bangalore International Centre (BIC). The event consists of a talk and a panel discussion.
This event is free to attend.

How do organizations like Signal sustain themselves in an environment of mass surveillance - talk by Meredith Whittaker

The tech ecosystem, as it is today, is shaped by concentrated power. The natural monopolies of the telecommunications sector were not, in fact, disrupted when networked computation was commercialized and the internet took shape in the late 1990s. Big Tech simply replicated this monopolistic form, acquiring data and infrastructural monopolies via the surveillance business model, which incentivized mass surveillance and profiling, and the growth-at-all-costs mindset that has got us where we are today.

This concentrated tech power resides in the hands of a handful of corporations, primarily Big Tech companies, which are based in the US and China. As they extend their reach and authority on the wings of the current AI hype, they are shaping an ecosystem that is increasingly hostile to new entrants and small players. Indeed, most startups or small tech endeavors must, in order to function, license infrastructures and use frameworks and libraries controlled and shaped by these large companies.

So how can organizations like Signal confront the mass surveillance of this industry, and sustain and grow in this environment? Join the President of the Signal Foundation and renowned AI scholar, Meredith Whittaker, along with members of the Signal team, to discuss the approaches Signal has adopted in pushing back against the conjoined threats of mass surveillance and the enhanced social control of the AI hype wave.

The talk will be followed by a Q&A with Meredith Whittaker and Joshua Lund (Senior Director), moderated by Kiran Jonnalagadda, CEO of Hasgeek.

Panel discussion - AI: Beyond the hype cycle

In the aftermath of ChatGPT-fueled AI hype, there is an equally charged conversation on how the public and governments should respond to present (and future) harms related to these technologies. It is a crowded space, with AI industry voices and existential risk (x-risk) doomers trying to shape the narrative on regulation alongside civil society advocates and government agencies.

With decades of combined experience critiquing and working within the tech industry, Meredith Whittaker, Amba Kak, Udbhav Tiwari, and Vidushi Marda will share their insights and perspectives on the current AI hype wave and the related policy landscape. The panel will particularly focus on the threats this poses to privacy, and the ways that the dominant narratives are getting AI wrong.

Vidushi Marda will moderate the panel discussion.

About the speakers

Meredith Whittaker is the President and a member of the Signal Foundation Board of Directors. She is also a scholar of AI and the tech industry responsible for it, and the co-founder and Chief Advisor to the AI Now Institute.

Amba Kak is a technology policy strategist and researcher with over a decade of experience working in multiple regions and roles across government, academia, the tech industry, and the non-profit sector. Amba is currently the Executive Director of the AI Now Institute, a New York-based policy research center focused on artificial intelligence. She is also on the Board of Directors of the Signal Foundation, and on the Program Committee of the Board of Directors of the Mozilla Foundation.

Vidushi Marda is an independent lawyer working on technology regulation, asymmetric power relations, and fundamental rights to advance social justice. She is the co-Executive Director of REAL ML, a non-profit organization that translates algorithmic accountability research into impactful interventions that serve the public interest.
Vidushi’s work has been cited by the Supreme Court of India in a historic ruling on the Right to Privacy, the United Kingdom House of Lords Select Committee on Artificial Intelligence, and the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, among others.

Kiran Jonnalagadda is a community builder passionate about technology and its societal impact. He co-founded Hasgeek in 2010, creating a space for technologists to thrive, share knowledge, and network.
Kiran has been actively involved in the Free and Open Source Software (FOSS) movement, and with the SaveTheInternet.in movement for net neutrality in India.

Udbhav Tiwari is the Head of Global Product Policy at Mozilla, where he focuses on cybersecurity, AI, and connectivity. He was previously with the public policy team at Google, a Non-Resident Scholar with the Carnegie Endowment for International Peace, India, and a program manager at the Centre for Internet and Society (CIS) in India. He is also a former member of the Advisory Council at the Digital Equity Accelerator run by the Aspen Institute.

Key takeaways for participants

  1. For startups and small tech endeavors: how to navigate a tech ecosystem that Big Tech continues to shape and control.
  2. Awareness of the harms related to AI technologies, and how the public and governments should respond to them.
  3. Knowledge of the AI policy landscape, and how it is being shaped on a global scale.
  4. Awareness of the dominant narratives around AI hype, and what they are getting wrong.

Who should participate

  1. Public interest technologists.
  2. Engineers and open source developers who care about privacy.
  3. Founding teams from start-ups and Series A companies who have questions about business models.
  4. Technology leaders who want to understand the current context of AI and privacy, and emerging trends.
  5. Policy professionals working on tech/AI policy.
  6. Human rights advocates interested in the intersection of technology and rights.

Contact information

Join the @fifthel Telegram group or follow @fifthel on Twitter. For inquiries, call Hasgeek at +91-7676332020.

AI: Beyond the Hype Cycle

By Anwesha Sen (@anwesha25). Submitted Oct 12, 2023.

On 3rd October 2023, a public event was hosted in collaboration with the Signal Foundation, The Fifth Elephant, and Bangalore International Centre (BIC). The event consisted of two sessions. The following is a summary of the second session, a discussion between Meredith Whittaker (President of the Signal Foundation), Udbhav Tiwari (Head of Global Product Policy at Mozilla), and Amba Kak (Executive Director of the AI Now Institute).

“AI”

AI is more of a marketing term than a term of art; over its 70-year history, it has been applied to a wide and disparate variety of technical approaches. Currently, what is being marketed as AI are data-intensive and compute-intensive probabilistic models. The data and compute are the concentrated resources at the heart of the surveillance business model, currently in the hands of a few large companies, who are the only ones that can afford to take these large-scale models from development through deployment. The term AI is a signifier that carries a lot of weight, attracts a lot of media attention, and moves a lot of philanthropic capital, whether or not it has a clear definition.

How does AI relate to your work?

Signal: AI is exacerbating the surveillance business model in the tech industry, within which Signal exists. It calls for more and more data on everyone, produces surveillance, and is fundamentally a surveillance technology. It makes determinations about people, communities, politics, etc., and those determinations themselves become data. Large AI systems call for more surveillance of people's lives so that data can be collected, enumerated, put into databases, and fed into these systems. Signal is the world's largest truly private messaging app and does not collect data. It exists as a counterforce against the surveillance economy and the surveillance capacities of these increasingly large AI systems.

Mozilla: Mozilla recently started an entity called mozilla.ai, a research lab that will explore openness in AI. The Mozilla Foundation has also worked under the broader umbrella of trustworthy AI, addressing bias, discrimination, and related harms.

AI Now Institute: In the policy space, one thing the AI Now Institute is doing is replacing the term “AI” with “automated decision systems” (ADS), as it moves attention away from magical, potentially intelligent systems to systems that make decisions about allocating resources, curating people's social and economic lives, and so on. The term has gained considerable currency in policy conversations, particularly in the US. Another aspect of AI policy is the claim that there is a blank regulatory slate for new AI technologies. But considering the various components of AI, such as data, compute, and certain kinds of algorithmic and statistical models, there are already policies regulating all of these, such as data privacy laws and competition law, which are therefore also AI policies. Hence, the claim that there are no regulations is false.

The AI Hype Spectrum: Killer Robots or the Best Thing Ever?

The term AI is wrapped up in a lot of mythologizing, where the marketing narrative is of something path-breaking that achieves human-like intelligence. This narrative sells a technology that can only be developed and deployed by a handful of corporations, largely based in the US and China. Governments and corporations want this technology on their side, to figure out how to use it to control their workers and populations, and to otherwise benefit themselves.

As for regulation, there is a trend where the people in charge of AI companies are moving from opposing regulation, on the grounds that it would stifle innovation, to favoring heavy regulation, on the grounds that the technology is dangerous. The simplified analysis behind this trend is that these companies want to build regulatory moats around themselves and make it easier for themselves to ship these products and services, even if the price is something as harsh as licensing. Traditionally, technology has been kept in check and turned into a communal resource through ideas like open source, where people create code that others can use, understand, and deploy. This model undermines the idea that these companies alone can build or operate generative AI, and they view it as a threat. While open source may not fix the problems with AI, it offers a different lens, one that is not complete corporate control.

The conversation around AI regulation is also highly securitized. One version of the argument is that promoting accountability, regulation, and antitrust interventions threatens national security by slowing down innovation and development; the other is that large language models (LLMs) may end up creating bioweapons. Both arguments inflate reality out of proportion and distract from the actual issue at hand: accountable and less concentrated AI systems are what national security actually requires.

Open Source AI

There is another spectrum when it comes to open source AI. On one end is the fear that open source AI would give bad actors easy access to a potentially dangerous technology; on the other, the hope that open source AI will democratize everything. The fundamental issue is that AI itself has not been properly defined, let alone open source AI, which leads to ambiguity about what open source AI really is. There is also a huge barrier to entry in creating AI models, since they require vast amounts of training data, compute, labor, and other resources, which is antithetical to “openness”.

However, what is currently seen as open AI, i.e. where one can view the model's code, has certain benefits: it allows for transparency, independent verification of the code, and auditing for various harms. It also enables others in the ecosystem to engage with the technology in ways that go beyond what companies would traditionally be comfortable with. The question of liability comes to the forefront here, and it is one that product liability regulation has not yet dealt with. For open source software, the license states whether or not the author is liable, but no equivalent exists when someone makes minor tweaks to an open AI model.

