The GDPR mandates that any data processor handling user data must provide two core functions:
- Access to the data they hold about a user: what do you know about me?
- Deletion of user data: what I don't want you to keep.
However, neither mandate is easy for a data processor to comply with, because both require robust data management practices from day one of operations. Making a company GDPR-compliant retrospectively is very hard, since architects in the early stages of a company typically focus on designing for scalability, maintainability, and data security rather than privacy.
We invite talks from practitioners who have gone through this data engineering journey and managed to balance the competing goals of privacy, utility, and scale within their organizations.
Who should participate:
- Data engineering architects
- Data Privacy Officers
- Data engineers
- Product managers
May I have my data please?
In a world where we collect as much user data as possible to personalize the user's journey, it's equally important to meet compliance requirements that let a user request both deletion (forget) and a dump (access) of their data. For us at Disney+ Hotstar, as a centralized data-platform team, this was a huge challenge. We owned the platform while various teams owned the data, so we had to give every team a distributed way to support data-subject requests while coordinating and orchestrating them all at the same time. In this talk, we discuss the approaches we considered and the architecture we built for handling data-subject requests, one that lets us scale as more systems (and the data they use) grow within the ecosystem.
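The pattern the abstract describes, a central platform orchestrating requests across independently owned data systems, can be sketched roughly as a handler registry with fan-out. This is a minimal illustration, not Hotstar's actual implementation; all names (`SubjectRequest`, `register`, `orchestrate`, "profile-service") are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict

class RequestType(Enum):
    ACCESS = "access"   # "what do you know about me?"
    DELETE = "delete"   # "forget me"

@dataclass
class SubjectRequest:
    user_id: str
    kind: RequestType

# Registry mapping each data-owning system to its handler.
# Each team plugs in its own implementation; the platform only orchestrates.
HANDLERS: Dict[str, Callable[[SubjectRequest], dict]] = {}

def register(system: str):
    """Decorator a team uses to enroll its system in the orchestrator."""
    def wrap(fn: Callable[[SubjectRequest], dict]):
        HANDLERS[system] = fn
        return fn
    return wrap

def orchestrate(req: SubjectRequest) -> dict:
    """Fan one data-subject request out to every registered system
    and collect per-system results for auditing."""
    return {system: handler(req) for system, handler in HANDLERS.items()}

# Example handler owned by a hypothetical team:
@register("profile-service")
def handle_profile(req: SubjectRequest) -> dict:
    if req.kind is RequestType.ACCESS:
        return {"status": "ok", "data": {"user_id": req.user_id}}
    return {"status": "deleted"}
```

In this shape, adding a new data system means registering one more handler rather than changing the orchestrator, which is what lets the platform scale as systems grow.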