A podcast: Connecting the Dots, Exposing Threats and Navigating Cybersecurity

 

Episode 6: Asset or Risk? Is Your Business Adapting to Required Data Privacy Regulations?

 
 

“So, then it is identifying an individual. Knowing that that individual – you don't have to know their name. I always try to remember the fact that you may not know the name of your neighbor, but you can certainly identify them. Same kind of thing. You don't have to be able to attach this online information to a person's offline world in order for it to count as personal information.” – Nicole Killen, VP and Chief Privacy Officer, Neustar

Episode 6 explores the ever-changing world of data privacy regulations, as John and Paige take a deep dive into the considerations, legislation and risks associated with the compliance burden of data privacy and its integration into the deepest levels of your business.


Have feedback or a cybersecurity topic you would like us to dig into on this podcast? We would love to hear from you! Drop us a quick note at lockandshield@team.neustar.


Highlights:

  • The Deep History of Privacy Protection – The surprising beginning of data privacy, decades before the internet, and how the rise of the computer created the need to secure personal and private information.
  • Personally Identifiable or Identifying Information – Learn how identifiable information can lead to potential risks for businesses that misuse and mishandle collected personal data.
  • GDPR and CCPA Regulations – Discover the role global regulations play in the development of privacy protections right here in the U.S.
  • What You Need to Know About Transferring Data – Learn how the protection of data in one country still applies to that data regardless of the country to which it has been transferred.
  • The Future of Regulations in the U.S. and Globally – A brief look at where regulations are headed around the globe as well as right here in the U.S., and how prepared your enterprise needs to be to ensure you’re being privacy-friendly.

Discover how Neustar can help secure your organization online.

Contact Us

 
 

Episode Transcript

John McArthur: Welcome back to the Lock and Shield Podcast presented by Neustar, where we connect the dots for you, exposing threats and discussing the security issues both small businesses and large enterprises need to navigate. I'm your host John McArthur, Director of Security Intelligence with the Neustar Security Solutions Group. And back with us today is Paige Enoch, Product Manager for the UltraGeoPoint and UltraReputation products. Paige, how are you doing?

Paige Enoch: I'm great. Thank you. Thanks for having me.

JM: Great. Great to have you back. I should say Paige was pretty focused the last few weeks on product deliverables for the GeoPoint-Reputation products. So it is good to have her back for the discussion. Now today's discussion is a deep dive on some of the data privacy considerations we encounter as product managers of data feed solutions. As more and more data privacy regulation is enacted, such as GDPR or CCPA – and we'll talk about what those mean shortly – we often have requests for information around collection methods, or just general information about how we adhere to these new policies. And as Paige and I think about launching new products, we meet with our privacy team here at Neustar to confirm adherence to current regulations. So joining us today to help speak to these considerations is our own Nicole Killen, Neustar's Chief Privacy Officer. Now Nicole has over 15 years of experience in technology law, primarily focused on transactions and product counseling. Her focus shifted around 2010 when businesses not already subject to data privacy regulation – unlike healthcare and finance – started to come under scrutiny. And it became clear that a new kind of legal specialist would be needed. One focused on integrating data privacy into the deepest levels of all business units and functions. She is focused on leveling-up discussions about data privacy to understand how we got where we are. And how to move forward in a manner based on common sense. Nicole, thank you so much for being on the podcast. It's great to have you here.

Nicole Killen: Thanks for having me.

JM: Now, as we get started let's just start with the basics. Can you give us a quick overview around the history of digital privacy? With the rise of the internet, was it always an issue? Or is it more of a recent concept?

NK: Sure. I'm happy to start with digital data privacy and skip over all the developments in the concept of a general right to privacy – a right to privacy in information. There's a lot of potential content there. But digital privacy really became an issue in the sixties with the rise of the computer, of the mainframe. The computer didn't merely increase the amount of information that entities could collect. It changed how the data was organized, accessed and searched. Unlike manual systems such as telephone books, computers could be programmed to sort or recognize data on the basis of a particular index, attribute or characteristic. And it didn't have to be a single index. It could be a person's first and last name. It could be attributes of that person. And these things could be mixed and matched and searched through a database. And in the early seventies, it was actually the US Department of Health, Education and Welfare that really started the discussion of what are the principles of data privacy. Now, we all know about GDPR. We know how privacy-forward the EEA and the European Union are. But I like to remind people that the principles underlying all of this started in the US. And so in the seventies – in the context of educational records, really. And health records. And the fact that now we had all this information that could be searched all these different ways. This agency put out a report called the HEW Report in 1973. And that identified some basic principles that still guide us today in all of these laws. In the 1980s these principles were actually taken up internationally. And the Organisation for Economic Co-operation and Development, which sets international standards in this area, adopted what's commonly known today as the Fair Information Principles. And there's eight of them. And if you look at those eight, and then you look at GDPR, you look at CCPA, you look at the other states and countries that are coming online – they all come within those principles.

JM: So this isn't something that just sort of started in the last 5, 10 years with social networks and the use of cookies for targeted advertising. This is something that has a deep history. It sounds like really starting around health records, education records.

NK: Yes. Exactly. And in the United States, that kind of sectoral focus is where it has stayed up until recently. Meaning you have privacy principles in sectoral laws. Those governing health care – you of course have HIPAA. And you of course have COPPA with regards to children's online activity. You have financial regulations regarding your financial records or your credit records. And it kind of stays siloed like that. Whereas in the EU – especially with the European Economic Area, and then the EU developing – they have a more overarching form of privacy law. All sectors are kind of brought under it. And that's actually the norm in most countries. The US is kind of the odd man out on that. So California got the ball rolling. And it's moving on from there.

JM: So as we talk about moving on from this regulation being enacted, I guess one of the basic questions – and we talked about this in preparation. I was sort of looking for this black and white definition, if you will. Or answer to what is PII? What data elements make up something that's personally identifiable or identifying information? And you mentioned – when we talked about this – you said PII is more of a dated term. Out of date. And instead we need to be thinking about just general personal information, inclusive of both something that's identifiable or identifying.

NK: Yes.

JM: Can you sort of break that down? And why the change?

NK: Yeah, I am kind of strict on this internally, because I want people to be using consistent terms. If we're not speaking the same language then we're not coming to the same conclusions. PII is a very common term, but it actually has no consistent definition. It's been used since the early seventies. Again in these sector-specific laws. Sometimes without definition at all. Other times with a definition, but the definitions are different between the laws. But generally it's become a part of our vernacular as the type of personal information that people are generally more concerned about. What they generally think of as personal information. So that would be things like your name in combination with – because, you know, I think James Smith is the most common name in the US. There's a lot of common names. But that connected with a unique identifier. Such as your Social Security Number. Your driver's license number. Those unique numbers themselves are identifying information, because there's only one of them. And then information that enables a person to contact you directly. Your mailing address. Your direct mailing address. And these days your email address. Your social media handle. Your username for your mobile app communication device. Things that allow a person to directly contact you. Those are the things that people generally think about as PII, but as I said there is no common definition. And whether you're talking about PII or you're talking about the broader term PI – personal information or personal data, and I'll use those interchangeably as we talk about it – it's still covered in the same way by the same statutes. So I want people, when we're talking about this internally in compliance with the regulation, to not be thinking, oh, but I'm not using this type of information that the general public deems more attached to them. More important to them. I don't care if you're using PI. And that's what the statute is governing. So that's why I'm kind of strict about it. But I understand why people want to use the term PII. And why businesses want to use it in, say, their privacy notices that consumers read, because it is important to make clear to them if you are not collecting. Or you're not using. Or not using in a specific way the types of personal information that you know people are more concerned about. That they deem more important to them. So yes, you want to talk about it that way. But when we're talking about whether we're complying with the regulation or not, we're going to talk about PI.

PE: Thanks Nicole. That definitely helps add some clarity around those terms and exactly what we're talking about. If we're thinking very specifically about the UltraGeoPoint or UltraReputation products, through which we offer geolocation data and other network attributes around IP addresses, how can we think about PI in that context? For a security business, should we be considering those attributes PI?

NK: Absolutely. PI, personal information or personal data, is defined in the GDPR and the CCPA and the new laws that are coming online. But we can generally think of it as John said when he used the words identifying or identifiable. Identifying – pretty easy. You can just refer back to PII. Kind of deem it that. Identifiable is more difficult. That can be not just the information you have, but in connection with other information that you may not have. And that you may feel you really don't have a real ability to even get. But if the information you have can be linked with other information that then becomes identifying, that's identifiable information. And things like online identifiers – whether it be an IP address. Whether it be a cookie. Whether it be a mobile advertising ID. A device ID. That is deemed to be able to be identifiable. That if you had, say, an IP address, and the ISP was willing to give you their records, you could then identify that person. That's not reasonable. But also when you get down to it, when you have these elements, even if they themselves are not identifying, you're not usually throwing them all in a bucket. You're usually putting them in a database that is attached to some sort of other unique identifier. So then it is identifying an individual. Knowing that that individual – you don't have to know their name. I always try to remember the fact that you may not know the name of your neighbor, but you can certainly identify them. Same kind of thing. You don't have to be able to attach this online information to a person's offline world in order for it to count as personal information. You have this group of information that you know belongs to one individual, Individual X. That is personal information. Now whether Individual X can come along and make a request under the applicable law – whether they want to know what data you hold on them. Whether they want you to delete it. Whether they want you to stop processing it for some reason – is another story, because you of course have to be able to know that the person who contacted you is in fact Individual X. But that problem doesn't mean that it's not personal information, and doesn't have to be treated as such under the statute.
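
To make that grouping point concrete, here is a minimal Python sketch – all identifiers and values hypothetical – of how attributes that are harmless on their own become personal information once they are keyed to a single unique identifier:

# A sketch of Nicole's point: attributes that are not identifying on their own
# become personal information once they are keyed to one individual, even if
# you never learn that individual's name.

profile_db = {}  # keyed by a unique identifier, e.g. a device ID or cookie ID

def record_event(device_id: str, attribute: str, value: str) -> None:
    """Attach an observed attribute to the profile for one individual."""
    profile_db.setdefault(device_id, {})[attribute] = value

# None of these values identifies anyone by itself...
record_event("device-8f3a", "ip_address", "203.0.113.7")
record_event("device-8f3a", "timezone", "America/New_York")
record_event("device-8f3a", "browser", "Firefox 90")

# ...but the grouped record describes one individual ("Individual X"), so the
# whole profile is treated as personal information under GDPR/CCPA-style
# definitions, with or without a name attached.
print(profile_db["device-8f3a"])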

PE: Got you. So it sounds like IP address is certainly classified as PI.

NK: Absolutely.

PE: Though it would take additional datapoints to create a bigger picture. And that is further compounded by the fact that IP address behavior can change quite frequently, and may not truly be linked to an individual. So the IP certainly represents the connection point for the device, but that could be a Starbucks where the user is connecting to a hotspot. Also, IP address space is reassigned quite frequently. Various ISPs will cycle through IP addresses. And the linkage between an individual and an IP is generally not static.

NK: Exactly. Exactly. But that's why an IP address is generally used in connection with a grouping of activities. A series of activities. So the thought is that the one-offs that don't match – the "but I was at Starbucks" – would be connected with an identity or a profile that you're looking at, but wouldn't stand out as having any sort of pattern. Or be something that you would actually rely on.
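
Nicole's point about patterns versus one-offs can be sketched in a few lines of Python; the threshold below is purely illustrative, not drawn from any Neustar product:

from collections import Counter

def recurring_ips(observed_ips: list[str], min_count: int = 3) -> set[str]:
    """Keep only the IPs seen often enough to suggest a stable pattern."""
    counts = Counter(observed_ips)
    return {ip for ip, n in counts.items() if n >= min_count}

observations = [
    "198.51.100.4", "198.51.100.4", "198.51.100.4",  # recurring home connection
    "192.0.2.55",                                    # one-off coffee-shop hotspot
    "198.51.100.4",
]
# The one-off Starbucks IP drops out; only the recurring address remains.
print(recurring_ips(observations))  # {'198.51.100.4'}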

JM: Now, as we think about – and obviously we were talking about GDPR and CCPA. Sort of what does the current regulatory environment look like? We know those two regulations have passed. Maybe you could talk a little bit about what each of those sort of mean, what they're purporting to do. And I think the other question is what else – those are the two big ones we hear about, but what else is going on globally that we should be aware of from a privacy regulation standpoint?

NK: So the GDPR is kind of the baseline here. So with every other law, they'll talk about whether it is GDPR-like or not. And GDPR-like, in my mind, has some general principles. Again, going back to the seventies, and what the HEW Report talked about. You know, the concept that you need to be open. That you need to disclose to people the purposes for which you're collecting and using their information. And it'll vary between statutes whether you need to get consent, or if you just need to disclose that particular use. You have to then use that information only for the purposes that you disclosed or got consent for. And when those purposes are fulfilled, you need to delete that information. You no longer need it. You need to get rid of it. These are all kind of data minimization principles that every company has to take a fresh look at, especially with the rise in cloud storage. For a while there companies got really used to the fact that storage was cheap. We should keep everything for as long as we could possibly need it. They became data hoarders. We may need it at some point in some time in the future for some reason. And you can't do that, because you told the individual you were going to use it for this purpose. You've used it for that purpose. That purpose is done. You need to get rid of it. So that's kind of the first part. And the second part is, it gives people certain rights to control their information. That may be to find out in more specific detail – than is standard in your privacy notice – what information you have in your systems that relates to them. The right to delete that. Of course, both of these are subject to certain limitations and exceptions. And the right to tell you to stop using it – commonly known as processing it – for a particular purpose. Such as stop selling it, as the CCPA uses the term sale. Stop using it for purposes of behavioral or targeted advertising. That is a new purpose for which you can restrict use of information under the CPRA. Which, by the way, did replace the CCPA already; it's just not going to be fully effective until January 1, 2023.

JM: So there's already been a change?

NK: Yes, but that's because it's California, John.

JM: There you go.
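
A concrete illustration of the data-minimization principle Nicole described a moment ago: records carry the purpose they were collected for, and are purged once that purpose is fulfilled. This is a minimal sketch with hypothetical field names, not a compliance implementation:

from dataclasses import dataclass

@dataclass
class StoredRecord:
    subject_id: str
    data: dict
    purpose: str             # the purpose disclosed to, or consented by, the individual
    purpose_fulfilled: bool  # has that purpose been served?

def purge_fulfilled(records: list[StoredRecord]) -> list[StoredRecord]:
    """Drop records whose disclosed purpose has been served - no data hoarding."""
    return [r for r in records if not r.purpose_fulfilled]

store = [
    StoredRecord("subj-1", {"email": "a@example.com"}, "order fulfillment", True),
    StoredRecord("subj-2", {"email": "b@example.com"}, "active subscription", False),
]
store = purge_fulfilled(store)
print([r.subject_id for r in store])  # ['subj-2'] - the fulfilled-purpose record is gone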

NK: There are also the other state laws that did manage to get through last session: Virginia and Colorado. Virginia's will become effective the same date as the CPRA, January 1, 2023. In Colorado, it will become effective July 1, 2023. Both of those also follow the same kind of rules, that a person can opt out of the sale of their personal information. And the use of their personal information for behavioral advertising. Those laws also bring into play the ability to opt out of use of your quote "sensitive information" for any purpose. Of course, again, they're subject to exclusions. If you have that because you need it to provide the services that a consumer has asked for, of course you can keep it. If you need it because the law requires you to keep it – you've been subpoenaed or something – of course you can keep it. There's always exceptions, but we're just talking about general kind of rules here. So those laws have kind of broadened the details, but again the general principles remain the same. The individual should have a good level of control over their information. The third principle is basically security. Protect this stuff. As we've discussed before, John, you know, security is a big part of privacy. You can't have privacy without security. The other way around, though, is a different story – you can have security without having privacy. So to have privacy you need to have secure systems. And then the last one is be accountable. Make sure that you are keeping records and can show that you've done the things that you needed to do to comply with the first three. Keeping those principles in mind – principles that were set out in 1973 – will get you through all of this. I'm not saying that it's easy then to kind of carve out the rights of a visitor coming from one state versus another state. And the fact that one state may differ – the GDPR provides for a 30-day period to generally fulfill data subject requests, whereas the CCPA has 45 days for access and deletion, but 15 days for opt-out of sale. And I'm not saying that those things aren't going to change and be different. And those are major operational issues to have to jump over. But the principles that you need to keep in mind to feel comfortable that you are being privacy-friendly are all the same.
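
The differing response windows Nicole mentions are exactly the kind of thing request-intake tooling ends up encoding. Here is a minimal sketch using only the day counts cited in the conversation; statutory deadlines change, so verify them before relying on any of this:

from datetime import date, timedelta

# Day counts as discussed in this episode; confirm current law before use.
DEADLINES_DAYS = {
    ("GDPR", "access"): 30,
    ("GDPR", "deletion"): 30,
    ("CCPA", "access"): 45,
    ("CCPA", "deletion"): 45,
    ("CCPA", "opt_out_of_sale"): 15,
}

def due_date(statute: str, request_type: str, received: date) -> date:
    """Compute when a data subject request must be fulfilled."""
    return received + timedelta(days=DEADLINES_DAYS[(statute, request_type)])

print(due_date("CCPA", "opt_out_of_sale", date(2021, 9, 1)))  # 2021-09-16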

JM: Understood. [Music] Hi. It's John. I wanted to take a break from the podcast to dive a little deeper into the history of privacy. As we're discussing on today's podcast, the roots of privacy law go back decades before the creation of the internet that we know today. In fact, it can be traced to 1967, when the book Privacy and Freedom was published. Per a blog post on osano.com, the book's author, Alan Westin, is considered the father of modern privacy law. Before publication of the book, the concept of privacy centered on limiting the government's control over individuals' bodily autonomy, images and photographs, or setting guidelines around wiretapping. In Privacy and Freedom, Westin defined privacy as the claim of individuals, groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others. The blog post continues that Westin argued citizens should retain ultimate control over their personal data. Including how much of the personal information is disclosed and to whom, how it should be maintained, and how it should be disseminated. Therefore creating the groundwork for our current understanding of online privacy laws way before the internet was even a thing. In 1972, Westin followed up the success of Privacy and Freedom with his book Databanks in a Free Society. There he more deeply addressed the growing practice of computerizing personal records in government, commercial and non-profit organizations. Westin worked closely with senators on drafting the Federal Privacy Act of 1974, which was the first law to limit the gathering and use of personal information by the federal government. This set in place a legislative framework for how privacy issues would be treated by the United States government. And it also made Westin a popular consultant across the globe. He was in high demand with businesses, governments and advocacy groups. It's interesting to note that despite what seems a more recent focus on our data privacy, these issues have existed for decades. With modern privacy legislation having its roots back in 1967. Now back to the podcast.

PE: So in thinking through some of the details around GDPR and the implementation there, one of the major questions that we often get asked by prospects is around data in transit, and data at rest. What is the significance of how or where the data is stored? And how does GDPR define that?

NK: Right. So the GDPR – you know, the European Union, and many other countries that actually have what are considered GDPR-like laws. Like Brazil – Brazil's law has been active for a while, and enforcement just began at the beginning of this month. They all have the concept of data sovereignty. And by that, they want to make sure that they are protecting the individuals who have the protection under their laws. And note I'm not saying EU citizens, because it's more complicated than that. But the people the GDPR is intended to protect – its data subjects. They want to make sure that wherever their information goes in the world, it is subject to the same level of protections. And that brings in the concept of transfers. And transfers are what you would normally think of: data was in the EU, and it was transferred to a database in the US. Clearly a transfer. But a transfer is also commonly understood as a person in a different jurisdiction accessing the data. So if you're in the US, and you pull data or you just process data – you apply some analytics to data that is sitting in the EU – a transfer may be deemed to have occurred. So in those cases you need to make sure that the place that you're transferring it to gives data subjects the same protections. Now unfortunately the US and the protections federally that are offered are not quote "adequate" under the GDPR. And therefore we need a different transfer mechanism. And this is kind of how I got started in the business generally – in this particular focus in my legal career.

JM: It's quite a specialty. I will say that.

NK: Yes. It is. But it came along when there was this thing called Safe Harbor, when I started really practicing B-to-C – you know, business-to-consumer law. When I was really dealing with consumer issues. And I learned what Safe Harbor was. It came around in 2000; I started doing this about 2008 in the consumer sector. And what it was all about was: since the US is not adequate – and I like just saying that – there had to be a mechanism put in place. So the EU Commission – the commission that is kind of the executive agency side of the EU and, you know, handles the GDPR – negotiated with the US Department of Commerce a set of rules that would be followed. And if companies signed up to these rules with the Commerce Department, and had mechanisms for policing those and resolving conflicts under those, that would be an adequate transfer mechanism. So great. Everybody followed Safe Harbor. Then the European courts took a look – obviously because a suit was brought – but they took a look at Safe Harbor and said hmmm, "no, we don't think so". Not good enough. So in 2015, Safe Harbor fell. And pretty quickly a new one was negotiated: Privacy Shield. Great. We have another transfer mechanism. Well, sorry guys. Privacy Shield fell also. So we're going back to another mechanism. A mechanism that has been around for a long time, even before GDPR, called the Standard Contractual Clauses. And these are truly standard. And we actually have new ones – anybody who deals with them and doesn't know this needs to move all their agreements over to the new ones, and pretty quickly. But these are, again, just standard agreements: with every data processing agreement – DPA – that you sign with a company, if European data is involved, attached to it will be the Standard Contractual Clauses. And there are some parts you fill out about what exactly you're processing. How you're processing it. You have to set forth your security requirements. But these are set terms. So, unlike Safe Harbor and Privacy Shield, they're not terms that were negotiated between the EU Commission and the US side. These are terms that have been agreed to as adequate. And that's what we're kind of working with now while, again, the EU Commission and the US government work out another mechanism. And we'll see how long that one lasts.

JM: Given sort of the fluidity around some of these regulations, let me ask how are companies responding to the regulations? And have you seen the compliance burden differ as we look at smaller versus larger companies?

NK: Yes. The compliance burden differs in that – and it's not necessarily that larger companies have larger burdens. It really depends upon the sophistication of your operations. If you're in a business that relies on personal information – you collect a lot of it, you process it, you disseminate it to other people – you're going to have a really complicated burden. If you are a company that runs a website that provides content that people really like, and that you monetize and are able to support by showing advertisements, your burden will be less. Your compliance complications will be less. Not to say that those still aren't complicated, because whether dissemination of someone's IP address and cookie information throughout the ad-tech ecosystem is a quote "sale" or not under CCPA is still something that people generally disagree about. But if you're not touching a lot of personal information, especially offline personal information, the burden is going to be less. But you can't get in this business where you're collecting all this information, especially offline information, and using it in all this variety of ways, and not face this burden. It really is a barrier to entry. And if you've already done it, then I don't know what to say, because you may not be able to continue if you started out small and now you're subject to all of these requirements. But quite frankly, if you're collecting all that personal information, you should have been a little scared already. If a breach hit you, you would already have had massive trouble.

JM: Hey, there. I want to take a quick break from the podcast and talk to you about our UltraGeoPoint Database. Are you looking for an easy-to-integrate IP geo database? UltraGeoPoint provides powerful IP geolocation and proxy data to help you identify and block fraudulent transactions, deliver OTT and streaming media, ensure compliance, and mitigate security threats. The UltraGeoPoint data is made available in a few different ways: via RESTful API, on-premises virtual server, and flat file, to seamlessly integrate into your application stack. And this summer we are launching a Splunk technical add-on to allow Splunk users to leverage the insights of UltraGeoPoint in their security and traffic management use cases. To learn more, please visit home.neustar and navigate to the security solutions section.

JM: That makes perfect sense. It's not a question of how large a company is in dealing with its compliance. It's really about what are you collecting? And given what you're collecting, if you're not large enough to sustain, you know, the right sort of – the expertise on staff, it may not be the right business model for you.

NK: Exactly.

JM: Got it.

PE: So we've talked a bit about some of the recent policies in Virginia and Colorado. But could you give us an idea of what's on the horizon globally and within the US as far as future regulations?

NK: As I said, the CCPA has already been superseded by the CPRA; it's just not going to go into full effect yet. There were several states that at the end of the last legislative session had GDPR-like laws in play. Including Washington State, Maryland, West Virginia, Connecticut. They failed to pass those laws by the time that legislative session ended. They're expected to pick it up again. And try again. I'll note that for Washington State, this will be their third try at getting it done before that particular legislative session ends. And again, all of those laws follow the same fair information principles, but are slightly different. And treat things slightly differently. Which is one of the reasons why, I think, states are of course not looking unfavorably upon federal action in this area. And over the years, there have been attempts. There have been bills introduced that have even gone to committee. But as we know, the world has some big things going on these days. The United States has a lot to handle. And none of those have gone across the finish line. And there's a couple of things that people generally believe are the main holdups. The big contentious issues. One being whether violation would lead to a private right of action. Private right of action meaning that if a statute is violated, the individual whose information was not properly used under the statute has a right to sue in their own capacity. Rather than the statute only being enforced by a regulator, such as the Attorney General in a state – or, for a federal privacy law, generally the FTC. And the reason that a private right of action is contentious is that when an individual can sue, and there's a group of individuals who are similarly situated, you can have a class. And hence, a class action lawsuit. Obviously, a contentious issue. So that's the first thing the two sides of the aisle would have to be able to come to agreement on. The other issue is preemption. Does the federal law preempt any state law on the same issue? And there's different kinds of preemption. It could preempt on an actual conflict of the provisions. It could preempt any action by the state in this area. It could allow for the federal law to be a floor, and allow the states to provide stricter requirements. So that's another thing that the two sides of the aisle have to come to agreement on.

JM: Let me just ask about that real quickly. Who is the ultimate arbiter of whether federal law can preempt state law, or vice versa? Where does that take place?

NK: The federal law here would come under the Commerce Clause – obviously, we have interstate commerce when we're talking about this kind of stuff. So it is an area that the Constitution gives the federal government to regulate. And under the Supremacy Clause, for anything the federal government has the right to make laws for – as opposed to everything else, which is left to the states – those laws are supreme. They are the supreme law of the land. But the actual statute – the wording of the statute, the intent of the statute – will indicate the level of preemption that was intended by the federal government.

JM: Understood. And I guess then if there did continue to be conflict, this is when you would see something actually get raised to the level of the Supreme Court to settle.

NK: Yep.

JM: Understood.

PE: That sounds like a lot more regulation and legislation to come there.

NK: By the way, it would be a good start – I mean, there are uniform laws now being proposed by different kinds of organizations that deal in privacy or consumer laws. And having that kind of base for a federal law would certainly be welcome. And helpful. Because no matter who is making the law – down to even your local government – there's always going to be ambiguity. There are always going to be things where people differ in their construction of what the law says and how it was meant to be applied. That's always going to happen. So the fact that it's going to happen at the federal level, I don't think, is a huge deterrent to just trying to get something at the federal level.

PE: Got you. So switching our perspective a little bit to consumers. We talked a lot about the legislative perspective. It seems that there's a general awareness of the big name regulations. So GDPR, CCPA. But there does seem to be kind of a lack of education around what that actually means for a consumer. Could you speak to that a little bit?

NK: Sure. I mean, I think GDPR is better understood by Europeans than CCPA is understood by Californians. And the reason for this is that the GDPR really follows the common understanding about the need for privacy that Europeans have. And that they have based on history. You know, it's really focused on protecting their civil liberties. Their right to practice a religion and not be persecuted for it. Really humanitarian types of things. And that's kind of what it's focused on. Though it of course covers IP addresses and all of that stuff. And there is the ePrivacy law, called the Cookie Law, that we've been waiting for years to be redone. The new version of that was supposed to happen at the same time as GDPR, and that didn't happen. So it's not that they're not focused on the internet and advertising on the internet. But the common understanding of people on the street is consistent with GDPR. Now you go to the CCPA, and the discussions that have been had in the United States – and there of course is a camp of people who are concerned about their personal lives being used against them. Their religion. Their ethnic origin or sexual preferences, or sexual identity, being used against them. There is certainly that group. But there's a larger focus than in Europe on online advertising. And it's not just the fact that the shoes you looked at are somehow following you around the internet, and you feel violated in your home because of that. That's not really the focus. The focus is that there's information being collected about you based on your activity on the internet. Nobody is disseminating what sites you looked at, or anything that you would deem personal – you know, a health question or something to that extent. But they are building profiles on you. And those profiles allow for better targeted advertising. Advertising targeted more towards you. And that's the way that the publisher of the website – that has content that you go and look at every day, because you really like it, and they don't have a paywall – keeps bringing you that content. But unfortunately sometimes people only look at the fact that "you're making money off of something I own". Now ownership of this personal data is a totally different subject that we can get into on a philosophical level. But that's more of the focus in the US. And I think that kind of takes away from the larger discussion that has to be had: understanding how the internet works. How technology works in general. Not just the internet, but any connected device. Anything that you rely upon in your day-to-day activities that, if it just stopped, would leave you at best really annoyed.

JM: Fair enough. Along those lines, from a consumer's perspective again, what is – I'll say – the future of data privacy? I mean, is data governance on the path to making sort of these checks and balances? Or maybe understanding how data is used. Is that becoming more of a reality? Is that what's on the horizon here?

NK: I mean, data governance is extremely important for privacy. It's extremely important for being able to deal with these regulations, because without data governance – if you're ingesting all this personal information from different sources, different types, pseudonymized data, all this kind of stuff – you have to be tagging it. You have to be putting it in systems with certain access controls. Certain use restrictions. You have to be able to pull it, and if a person's asking for access, collect it and show the person: this is the data I have on you. You have to be able to delete it. You have to be able to stop processing it in certain ways. And without data governance making sure that all of that is organized, you could never do it. You know, the times of just dumping all of your data into a big database and then writing scripts every time you needed to find something, or possibly having to pull apart an entire database to get to one little kernel of it – you can't do that. It's not practical. And data governance makes sure that when a consumer either has consented, or is deemed to have consented because the proper notices were given, those rules flow through the data. And can be reasonably complied with, because you can't comply with data privacy regulations if you can't find and parse out the information in your systems. So that's where data governance comes in.
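
To ground what "tagging" and rules "flowing through the data" might look like, here is a minimal sketch of a governed store that can answer access, deletion and purpose-restriction requests. All class and method names are hypothetical, not any real governance product:

class GovernedStore:
    """Records tagged at ingest with their subject and allowed purposes."""

    def __init__(self) -> None:
        self._records: list[dict] = []

    def ingest(self, subject_id: str, data: dict, allowed_purposes: set[str]) -> None:
        """Tag data at ingest instead of dumping it in an untracked pile."""
        self._records.append(
            {"subject_id": subject_id, "data": data, "allowed_purposes": allowed_purposes}
        )

    def access_request(self, subject_id: str) -> list[dict]:
        """'Show me the data you hold on me.'"""
        return [r["data"] for r in self._records if r["subject_id"] == subject_id]

    def deletion_request(self, subject_id: str) -> None:
        """'Delete everything you hold on me.'"""
        self._records = [r for r in self._records if r["subject_id"] != subject_id]

    def restrict_purpose(self, subject_id: str, purpose: str) -> None:
        """'Stop processing my data for this purpose.'"""
        for r in self._records:
            if r["subject_id"] == subject_id:
                r["allowed_purposes"].discard(purpose)

store = GovernedStore()
store.ingest("subj-42", {"ip": "203.0.113.7"}, {"analytics", "advertising"})
store.restrict_purpose("subj-42", "advertising")   # opt-out of one purpose
store.deletion_request("subj-42")                  # full deletion request
print(store.access_request("subj-42"))             # [] - nothing left to show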

JM: And that makes perfect sense. We're just about out of time, but you've gone over quite a bit of information as we've talked about data privacy. Do you have any final thoughts before we end here – just about data privacy, and some general best practices?

NK: Sure. I think going back again to where we started. First principles. Just go back to first principles. I'm sorry to tell you that regulations are going to keep changing. There are going to be new regulations. Regulations are going to be different. And the regulators are rarely going to provide you with the kind of specific guidance for your situation that you would really want. And your own counsel is likely, in many circumstances, not going to be able to give you that guidance either. But that doesn't mean that there's nothing for you to do. You go back to the fair information principles. And you make sure you're disclosing everything that you're doing. Make sure you get consent where consent is required. Make sure that you have practices in place where you're only using it for those reasons. That you're practicing proper data hygiene: not keeping everything, and purging what you no longer need. Look at that information as, yes, an asset, but also as representing a risk to you now. Do you really need it? Respect users' rights to know and, in some circumstances, to have you delete their information or stop using it for some purposes. And make sure that you're keeping records so that you can be held accountable. Fall back on first principles. And you can always say that you're being privacy forward.

JM: Those seem like absolute common-sense ways to approach this. And I'm still astounded that the regulation passed in the early seventies still holds so much merit today.

NK: Oh, well that was just a report.

JM: That was just a report.

NK: But it was the United States.

JM: Obviously on the mark. Although several decades ago. But with that, we're now out of time. And Nicole, we say thank you so much for joining us and lending your expertise to this discussion. It was fascinating.

NK: It was great. Thanks.

JM: And Paige, it's great to have you back on the podcast after a few weeks. So thank you for being here.

PE: Thank you. Great to be here.

JM: And finally to our listeners, thank you for listening in. We look forward to catching up with you soon. Thank you.
