GDPR: Keeping the Value Exchange Alive

GDPR Consent and User Data Mining

The informational world is an economy, and the content people have grown used to accessing for free does not exist without a viable value exchange. Your favorite BuzzFeed videos, for example, are produced on a budget that needs to be matched by revenue. If the revenue is not there, jobs get cut and revenue streams have to keep diversifying. For many publishers, adding audience data to their mix of products seems like the perfect way to build value in an already saturated market. Whether it’s offering commerce partners data on which products or website designs are trending with their audience, or even consulting on discount strategies, the digital media ecosystem no longer consists of just Content, Audience, and Ad Money; it now includes the sale of Data.

But how will this value exchange hold up in Europe (and possibly the rest of the world) as the GDPR asserts people’s right to control their own data? Will greater transparency strengthen or undermine the exchange?


Your data is more than the sum of its parts

Our use of the Web generates valuable data that companies use to improve their products. For example, when you decipher distorted numbers in an image to prove you aren’t a bot in one of Google’s CAPTCHA tests, you are probably helping Google recognize street addresses its cameras picked up but couldn’t read. This is one of the few ways in which the data exchanged is disinterested; in other words, its value doesn’t rest in revealing information about who you are.

But it’s a different story when data can paint a more complete picture of who you are: how you communicate, where you spend your time, your relationships, your life choices and risk appetite. It’s not just your privacy that’s invaded; your ability to avoid data-driven discrimination is also at stake.

The ability to corroborate information from big data and draw fairly accurate inferences about people has made previously innocuous data about you more sensitive than it used to be. Browser fingerprinting, for example, combines data on browser properties such as your browser version, the extensions and fonts you’ve installed, and the websites you’ve logged into in order to track you as an individual (you can check how unique your browser is with EFF’s tool); a minimal sketch of the idea follows below. This is why the ePrivacy Regulation (still undergoing approval) is tightening restrictions on the use of cookies by requiring explicit, rather than implied, consent.
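
To make the mechanics concrete, here is a minimal sketch of the idea in TypeScript, assuming it runs in a browser. It hashes just a handful of the signals fingerprinting scripts typically read; real trackers combine many more (canvas rendering, audio-processing quirks, font enumeration), which is what makes the combination so identifying.

```typescript
// Minimal browser-fingerprinting sketch. Each signal alone is innocuous,
// but their combination is often unique enough to identify an individual.
async function browserFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                      // browser + OS version
    navigator.language,                                       // locale
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display setup
    Intl.DateTimeFormat().resolvedOptions().timeZone,         // timezone
    String(navigator.hardwareConcurrency),                    // CPU core count
  ].join('|');

  // Hash the combined signals into a compact, stable identifier.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

// No cookie is set, yet the same browser yields the same ID on every visit.
browserFingerprint().then((id) => console.log(id));
```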


Data is intertwined with your life offline

To illustrate how data use is transforming in this information age, consider China’s plans to launch a social credit system by 2020 that would rate citizens’ reputations based not just on their spending habits but on other subjectively assessed metrics like their qualifications, social network, online behavior and political agreeability. In essence, the aim is to gamify good behavior with rewards such as priority in school admissions and employment, and punishments like restricted access to licenses, social services and even travel. If users don’t have control over how their data is collected, their digital trail can become a tangible liability.

While China’s proposed social credit system may seem dystopian to some, algorithmic profiling isn’t anything new. In the US, it already drives credit scores, hiring processes and even the evaluation of a defendant’s likelihood to reoffend or to show up for their court appointments. Last year, a Wisconsin judge decided on a six-year prison term for a man who fled the police, in part because a software-based assessment had identified him as an individual “who is a high risk to the community.” Some argue that data-assisted judgement allows more comprehensive information to enter consideration and may therefore be fairer. However, with proprietary software generating such analyses, defendants have no way of questioning the algorithms. And as you may remember from our previous post, machine learning can be fraught with data integrity threats.

The impact of your data trail doesn’t stop there. Information gleaned from your and your friends’ interactions on the Web can be manipulated to influence your life on a deeper level than most would imagine. To get a sense of just how intimate data can be, look to how Facebook conducted psychological experiments on almost 700,000 users back in 2014 to determine whether it could alter their emotional states. By manipulating exposure to positive and negative “emotional content” from others in their personal network, users could be subconsciously made to adjust the mood of their own posts accordingly. While Facebook’s research didn’t amount to data-based discrimination, it showed the potential for it. Imagine the delight of any marketer or salesman in obtaining a dataset of emotionally pliable users.

As you can see, the potential cost of user data in the informational world is largely hidden from most users. The value exchange, often conceived as a simple trade of ad impressions for site content, is therefore not well understood.


We have more at stake in disclosing data, and accountability should rise in proportion

So with the 28 member states of the EU undertaking to incorporate the GDPR, a clear stance is being taken on how the relationship between users (data subjects) and data controllers/processors should be defined: it starts with consent. Consent that is “freely given, specific, informed and unambiguous.”

Personal data generated from the use of digital products allows for constant adaptation to each user’s preferences. This makes data collection not just a vital, growing business asset but, from the users’ perspective, a means of value enhancement. However, when businesses choose not to share what they use data for, consumers have no guide for setting expectations on what counts as a fair value exchange.

According to a survey published in the Harvard Business Review, different data types vary in perceived value. Users may expect higher returns for their data depending on how sensitive the data is, as well as on whether its use primarily benefits the company or the user. For example, data may be perceived as less sensitive when it’s voluntarily shared than when it’s implicitly generated from product use. Data use primarily benefits users in the context of product improvements, and primarily benefits the company when the data is sold to third parties.

Matching value for return will therefore be a major ground for competition in this data economy, and as the research suggests, explicit consent and brand trust will become the key to “ongoing and even expanded access” to personal data.


Consent keeps the value exchange viable

We have a lot more at stake than we used to when we share our data, and the GDPR’s consent mandate brings us closer to reconciling that with our need for data-driven technology and its potential to make our lives better. When the value exchange no longer proves fair to us, we should also have the means to withdraw our participation. With consent required for data collection and use, users will better grasp the value of the data they provide, what they can expect in return, and the responsibility they assume for their own data trail. At the same time, by restructuring their data management in anticipation of the new regulations, businesses build trust and uphold best practices by taking a risk-based approach to evaluating data needs, retaining only what they are ready to offer returns on. A sketch of what such consent might look like in code follows below.
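
As a closing illustration, here is a hypothetical TypeScript sketch of how purpose-scoped, withdrawable consent could be modeled. The type and field names are illustrative assumptions, not taken from the GDPR text or from any particular consent-management library.

```typescript
// Hypothetical model of GDPR-style consent: specific to a purpose,
// timestamped, and withdrawable at any time.
type Purpose = 'analytics' | 'personalization' | 'third_party_sale';

interface ConsentRecord {
  userId: string;
  purpose: Purpose;   // consent must be specific to each use of the data
  grantedAt: Date;    // evidence that consent preceded processing
  withdrawnAt?: Date; // withdrawing must be as easy as granting
}

class ConsentStore {
  private records: ConsentRecord[] = [];

  grant(userId: string, purpose: Purpose): void {
    this.records.push({ userId, purpose, grantedAt: new Date() });
  }

  withdraw(userId: string, purpose: Purpose): void {
    for (const record of this.records) {
      if (record.userId === userId && record.purpose === purpose && !record.withdrawnAt) {
        record.withdrawnAt = new Date();
      }
    }
  }

  // Processing is allowed only under an active, purpose-specific grant.
  hasConsent(userId: string, purpose: Purpose): boolean {
    return this.records.some(
      (record) =>
        record.userId === userId &&
        record.purpose === purpose &&
        !record.withdrawnAt
    );
  }
}

// Usage: a business checks for consent before, say, selling data onward.
const store = new ConsentStore();
store.grant('user-42', 'analytics');
console.log(store.hasConsent('user-42', 'third_party_sale')); // false
```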