Data Minimization: A Pillar of Data Security, But More Than That Too

This is the third in a series of articles about EPIC's proposal for a data minimization requirement to limit commercial surveillance and protect consumer privacy. In our first post, my colleague Suzanne Bernstein explained that data minimization is a framework for limiting the collection, use, transfer, and retention of personal data and discussed how minimization fulfills the reasonable expectations of consumers concerning the use and protection of their personal information. In our second post, my colleague Sara Geoghegan discussed the harms that typically flow from secondary uses of personal data and how purpose limitations, a critical part of any data minimization framework, fit into the Federal Trade Commission's authority to regulate unfair commercial data practices.

Today's post highlights the role of data minimization in data security while also emphasizing that a robust minimization framework must do more than protect against breaches and unauthorized access to personal data.

Last summer, the FTC published a lengthy request for comment signaling that it planned to adopt new rules governing the commercial processing of personal data, something EPIC had previously urged the Commission to do. In November, EPIC submitted extensive comments to the FTC setting out the scale of today's data privacy crisis, discussing the Commission's legal authority to establish robust rules, and identifying specific harmful business practices that the Commission should regulate. Those comments cover a lot of ground (summary here), but they lead with EPIC's long-running call for a data minimization rule, specifically, a declaration by the FTC that:

It is an unfair trade practice to collect, use, transfer, or retain personal data beyond what is reasonably necessary and proportionate to the primary purpose for which it was collected, consistent with consumer expectations and the context in which the data was collected.

By defining the above as an unfair practice, the FTC can unlock the ability to impose significant fines on violators that collect, process, and transfer excessive personal data. Briefly stated, the Commission's authority to issue trade rules reaches business practices that are both (1) unfair or deceptive, and (2) prevalent. A business practice is considered "unfair" if it is likely to cause substantial injury that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or competition, and it is considered "deceptive" if it involves a representation or omission likely to mislead consumers. To establish that a harmful business practice is "prevalent," the Commission can rely on two types of evidence: (1) past cease-and-desist orders concerning that business practice, or (2) "any other information available to the Commission [that] indicates a widespread pattern of unfair or deceptive acts or practices."

Data Security Means Data Minimization

Along with commercial surveillance, data security is one of the two primary areas of concern highlighted in the FTC's initial proposal for the (aptly named) Trade Regulation Rule on Commercial Surveillance and Data Security. EPIC has frequently criticized the FTC's failure to protect the privacy of consumers over the past 20 years. But a relative bright spot has been the Commission's data security work, particularly the cases the FTC has pursued against businesses that fail to protect personal data from breach or unauthorized access. Since 2000, the Commission has brought more than 90 enforcement actions alleging unfair, deceptive, or otherwise unlawful data security practices by businesses entrusted with consumers' personal data. These include cases against major companies like Microsoft, Twitter, Rite Aid, CVS, Petco, DSW, Oracle, Snapchat, Fandango, Credit Karma, Wyndham, PayPal, Office Depot, Equifax, Zoom, and Amazon. The Commission has also used its targeted statutory authorities to establish data security requirements through the Safeguards Rule and the Children's Online Privacy Protection Rule and to flesh out reporting requirements for companies that fail to safeguard personal health information through the Health Breach Notification Rule.

This body of data security work is a critical foundation for the FTC's rulemaking. First, it helps the Commission establish the factual and legal basis for adopting trade rules on data security. It is hard to look at the steady drumbeat of FTC enforcement actions over the past twenty years and deny that harmful data security practices are prevalent in today's economy, or that consumers suffer a panoply of privacy and financial harms as a result. (You'll find much more on this subject beginning on page 181 of our comments from the fall.)

The FTC's work in this area also highlights some of the specific elements that should be incorporated into industry-wide rules on data security. As we noted in our comments, the FTC's data security cases plainly establish (1) that it is unfair for businesses to fail to maintain reasonable administrative, technical, and physical measures to protect consumers' personal data against unauthorized access, and (2) that it is deceptive for businesses to claim that they protect the personal data they collect and process if, in fact, they fail to implement the necessary safeguards. The Commission has also identified some of the common hallmarks of inadequate data security, such as failing to maintain vulnerability disclosure policies, failing to patch known software vulnerabilities, failing to segment database servers, storing Social Security numbers in unencrypted plain text, transmitting personal information in plain text, and failing to perform vulnerability and penetration testing as part of timely security reviews.

So what does all of this have to do with data minimization? Quite a bit, it turns out. The relationship between data security and data minimization is perhaps best summarized by the maxim "You don't have to protect what you don't collect." Every piece of personal data collected and stored by a business is inherently at risk of unauthorized access and use. Technical and physical safeguards are certainly essential to limiting that risk, but one proven strategy is for businesses to limit the data they collect and process in the first place. Practicing data minimization makes businesses less attractive targets for data thieves and hackers, limits the harm to consumers if and when breaches do occur, and entirely eliminates the risk of breach for data elements that are never collected to begin with. Indeed, the FTC's data security work has increasingly come to reflect this relationship between security and minimization. The Commission's recent orders against Chegg, Drizly, and CafePress all incorporate data minimization requirements, and the FTC's initial notice of proposed rulemaking on commercial surveillance and data security explicitly defines data security to include data minimization.

But Data Minimization Means More Than Data Security

EPIC is encouraged by the Commission's focus on data minimization, data security, and the link between the two. But as the FTC lays down commercial surveillance rules, it is important for the Commission to remember another component of data minimization: limits on how, why, and to what extent entities can process personal data that they are (at least nominally) entitled to use and retain. (The FTC and others often refer to these as purpose limitations; EPIC considers them a subset of data minimization, too.) Often consumers are harmed not by an unauthorized user gaining access to their personal data, but rather by secondary, out-of-context uses of personal data by the businesses that collected it. It is essential that the FTC's commercial surveillance rules address both of these scenarios.

To highlight the difference: Twitter has (so far) been subject to two FTC enforcement actions concerning its personal data practices. The first, in 2011, resulted from "serious lapses in the company's data security [that] allowed hackers to obtain unauthorized administrative control of Twitter, including both access to non-public user information and tweets that consumers had designated private, and the ability to send phony tweets from any account." The second, in 2022, resulted from Twitter "ask[ing] users to provide their phone numbers and email addresses to protect their accounts," then "profit[ing] by allowing advertisers to use this data to target specific users." (The FTC approached the latter as a deceptive trade practice because Twitter affirmatively misled its users, but even without that misdirection, we'd argue that quietly repurposing security information for advertising constitutes an unfair trade practice.)

A data minimization trade rule that only addresses the former type of violation (failing to protect personal data against unauthorized access) may fail to capture the latter type of violation (using personal data for secondary purposes that violate the expectations of the consumer). Such a rule would invite further data gamesmanship by Twitter and other businesses and could allow many of the most pervasive forms of data abuse, e.g., targeted advertising and the tracking it incentivizes, to continue largely unchecked. A narrow rule to this effect would be a disastrous missed opportunity for the Commission.

A Data Minimization Rule Should Do Both

The good news is that the FTC is on firm footing to address both types of data protection violations through a minimization rubric. First, the Commission's own enforcement precedents extend well beyond inadvertent data security failures to address many of the harmful personal data practices baked into companies' core business models. In just the past few years, the FTC has taken action against Microsoft for unlawfully retaining children's data, against Everalbum for its misuse of facial recognition technology, against Epic Games for unfair default privacy settings, against OpenX for wrongful collection and use of geolocation data, and against SpyFone for surreptitious data harvesting and use. The Commission should build on these and other cases to adopt a rule that establishes purpose limitations on the use of personal data and imposes liability for traditional data security violations.

Second, as noted, the Commission's commercial surveillance and data security rules need not be limited to the specific practices identified in prior FTC enforcement actions. If "any other information available to the Commission indicates a widespread pattern of unfair or deceptive acts or practices," those practices can be properly regulated by the FTC through a trade rule. This includes harmful secondary uses of personal data that the Commission may not yet have targeted through case-by-case enforcement. EPIC and dozens of peer organizations have compiled extensive evidence of harmful data practices that the FTC should draw on in scoping its rules, and every day brings new and alarming media reports about the U.S. data privacy crisis that further support the need for a robust commercial surveillance rule.

Finally, the Commission should recognize (as it has begun to do) that data security safeguards and purpose limitations fundamentally protect against the same types of harm to consumers: unwanted, unexpected, and out-of-context uses of personal data. The misuse of a person's personal information inherently causes harm by depriving them of control over their data, regardless of whether that misuse is the work of a hacker or of a business entrusted to safeguard the person's data. This principle is reflected in the Commission's recent enforcement of and guidance concerning the Health Breach Notification Rule, which explains that a "breach" includes "a company's disclosure of covered information without a person's authorization." The Commission should continue to clarify this alignment of privacy and data security protections as it promulgates commercial surveillance rules.

EPIC is encouraged by the FTC's attention to data security safeguards, but it is vital that the Commission seize on this rulemaking opportunity to set substantive limits on secondary uses of personal data. An effective data minimization framework must be structured to cover both.
