As the saying goes, data is the new oil. But who owns the rights to its value – the users or the services on which the data is generated? There are many different views about who personal data should belong to, its value to companies, and how those companies can gain the trust of their users. 


These are just some of the issues explored by Daren Brabham (Senior Director Analyst), Florian Lichtwald (MD & Chief Business Officer, Zeotap), Thea Backlar (VP Product & Analytics, Ogury), Johan Vrancken (CRO, Nailbiter), Caroline Goulard (CEO, Dataveyes), and Antonio Anguiano (VP Product, Didomi) in our recent Yes We Trust Summit, a worldwide, 100% digital privacy event initiated by Didomi, and sponsored by Securys Limited & Zeotap, to help people understand and inspire trust in the internet age. 


You can watch a replay of the discussion here.  







What is personal data?


The European Commission defines personal data as any information, such as email addresses, phone numbers, and ZIP codes, that could enable an individual to be identified.


It’s a fairly vague definition, however. The footage captured by CCTV, for example, was not originally considered personal data, as it couldn’t actually be used to identify a person. But advances in facial recognition technology mean it now fits the definition. 


It’s important, then, that as technology evolves, companies and governments continually update their own definitions of what constitutes personal data. If they fail to do so, they’ll lose the trust of the public. Without trust, people will no longer share their data, and its value to a business will be lost. 


For example, Google recently announced that, like Apple’s Safari and Mozilla’s Firefox, its Chrome browser would stop supporting third-party cookies in 2022. In the absence of cookies, many companies will have to rely on data that users have consented to provide. This is fine if users trust that company. But if they don’t, it will become increasingly difficult to identify those users, track their behaviour, and monetise that data. 


Who owns our data?


Organisations profit from their ability to understand and use our personal data to engage with consumers, drive traffic, and boost sales. The idea that our personal data has financial value to an organisation raises the question of whether we should have ownership of that data and be compensated for it in some way. 


The debate around data ownership falls roughly into two camps – one with a more liberal and economic view, the other taking a more social, political, and legal approach. 


Many people believe it’s right that they should have a share of the money their data generates. But we typically overestimate the value of our data to a company. In 2020, the total value of a user to Facebook was around $30 – and that included media inventory. Ask a user how much they’d sell their data to Facebook for, however, and they’d likely name a far higher figure – a huge imbalance. 


Others see it as more of a collaboration: your social graph is a joint construction between you and Facebook, so it seems strange to insist that data created when you use a service should belong only to you. 


There’s an ethical consideration, too. The idea that people can just sell their own personal data overlooks the fact that it can serve a collective purpose. Some sectors, such as healthcare, transport, and life sciences, need to collect personal data from millions of people for research, to fuel innovation, and to anticipate and address many problems yet to come. 




How do we want our data to be used?


If you were to ask people if they’d be willing to share or sell their personal information, many would say no. But if you asked them if they expect personalisation, you’d get a different answer. 


When it comes to personalisation, most people expect organisations to collect their data. 


It’s something that users have grown used to. We’re comfortable looking at recommendations on Amazon because we know they’re probably relevant to us. Indeed, according to a recent study by McKinsey, 80% of people expect retailers to provide personalised experiences. 


Personalisation is about convenience. Consent is about trust. To instil that trust, companies therefore need to give people more transparency into how their data is used, and what it’s used for.


Why does consent matter?



It’s important for a company to ensure any data it collects stays tied to consent. This must be honoured across every kind of interaction with a user at any point in time. But it can be overlooked.


The relationship between the data a company collects and its consent infrastructure can often break down through usage. Bigger companies will typically have multiple touchpoints – an app, a website, a physical store – but if they start collecting consent at each of these touchpoints at different times, in different ways, possibly with different messaging, and for different purposes, they’re just left with a pile of data, and various types of consent. 


They’re not able to use the data in any way that will have a positive impact on the user. It doesn’t improve the product, the service, the experience – it’s basically unusable. 


Ultimately, a company needs to manage consent, orchestrating it across real-time channels, across campaigns, and across all interactions. Only then can they honestly say to a user that they’ve honoured their consent, and the trust that user has placed in them. 
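To make the orchestration problem concrete, here is a minimal sketch of what tying data to consent can look like in practice. The names and structure are hypothetical, purely for illustration (this is not Didomi’s actual API): each consent decision is recorded with its touchpoint, purposes, and timestamp, and a purpose is only usable if the user’s most recent decision, from any channel, granted it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One consent decision captured at a single touchpoint (hypothetical model)."""
    user_id: str
    touchpoint: str           # e.g. "app", "website", "store"
    purposes: frozenset[str]  # e.g. {"analytics", "personalisation"}
    granted: bool             # True = consent given, False = refused/withdrawn
    timestamp: datetime

def may_use(records: list[ConsentRecord], user_id: str, purpose: str) -> bool:
    """A purpose is usable only if the user's most recent decision
    covering that purpose - across all touchpoints - granted it."""
    relevant = [r for r in records
                if r.user_id == user_id and purpose in r.purposes]
    if not relevant:
        return False  # no consent on record: default to no
    latest = max(relevant, key=lambda r: r.timestamp)
    return latest.granted

# Consent given on the website, then withdrawn later in the app:
records = [
    ConsentRecord("u1", "website", frozenset({"analytics"}), True,
                  datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ConsentRecord("u1", "app", frozenset({"analytics"}), False,
                  datetime(2024, 6, 1, tzinfo=timezone.utc)),
]
```

Because the records are evaluated together rather than per touchpoint, the withdrawal in the app overrides the earlier grant on the website – exactly the cross-channel consistency the paragraph above calls for.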



How much do we actually know about how our data is used?


There’s a huge gap in the public’s knowledge of matters like compliance and consent. Most people in Europe don’t know what GDPR is, for example, even though it affects their lives on a day-to-day basis. They talk about how companies should be regulated more, yet they’re annoyed every time a website asks about cookies.


Most people just don’t understand how their data is used, and what that means for them. So establishing trust comes down to education. In fact, given that the issues of data ownership and privacy are only going to become more important, there’s an argument to be made that the subject should be taught in school. After all, people aren’t going to fully trust a company if they don’t fundamentally understand how it uses their data, and what they’re agreeing to. 


Most companies comply with regulations like GDPR, of course. They’ll have a 24-page privacy policy that no-one reads, and a cookie banner. But they can’t rely only on regulation to gain consumer trust. They need individual users to be better educated, to better understand what’s happening, and to make better decisions for themselves. 



Conclusion… the decision is ours 


The big companies, the biggest consumers of data, are changing the world. The services that Google and Facebook bring to society are incredible, giving us access to almost unlimited information, connecting us with friends and family around the globe. It’s a fantastic contribution to humanity. 


But it’s all powered by data. We have to decide whether we want to share that data, and allow these companies’ algorithms to make recommendations for us. It’s up to us whether we hand that choice to someone else. Equally, we should be able to say no, I want to drive. 


It all comes down to trust, and trust is about experience. All the data a website collects should serve to improve the experience for the user. With greater education, transparency, and consistency, everyone can enjoy that experience. 



Didomi believes in putting users in the driving seat of their data, creating consent and preference management solutions that allow companies to place consent at the core of their strategy.


We believe brands can leverage compliance to turn data transparency and privacy into a competitive business advantage. Indeed, these beliefs are why we are a founding sponsor of the Yes We Trust Summit.