Plaintiffs Allege Security Promises Ring False

John and Jennifer Politi, purchasers of several Ring products, have filed a putative class action lawsuit against Ring, LLC, arising out of Ring’s alleged failure to implement industry-standard security features in its products.  The case has been consolidated with a similar case filed in the U.S. District Court for the Central District of California in December 2019.

The allegations in the class action complaint are certainly disturbing.  The Plaintiffs allege that they purchased various Ring products, including a video doorbell and outdoor and indoor video surveillance cameras.  They allege that Ring’s advertisements include statements that these products bring the purchaser “peace of mind.”  They also allege that Ring represents to its customers that privacy and security are “at the top of [Ring’s] priority list” and that Ring takes measures “to help secure Ring devices from unauthorized access.”

Posted in Internet of Things, Litigation

What Is A “Reasonable Link” Under CCPA?

On February 7, 2020, California Attorney General Xavier Becerra published modified regulations for the California Consumer Privacy Act after reviewing the public comments received on the initial draft regulations.  While the modified regulations provide some much-needed clarity, they also leave some notable gaps.  One of those gaps is the lack of clear guidance on what it means for a piece of data to meet the definition of “personal information” because it can be “reasonably linked” to a particular consumer or household.

The question is an important one.  The Act applies only to those entities that do business in California, collect consumers’ personal information, determine the purposes and means of processing that information, and meet one of three thresholds.  One of those thresholds is that the business “annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes, alone or in combination, the personal information of 50,000 or more consumers, households, or devices.”

Given the magnitude of internet activity, that threshold may not be as high as it initially appears.  Businesses routinely collect the IP addresses of visitors to their websites and can tell when those IP addresses are associated with a California user.  If those IP addresses meet the definition of “personal information” and the business uses them for a commercial purpose, then, on average, only 140 Californians per day need to access the website for the business to meet the 50,000-consumer threshold.  Yet the business may collect further personal information, such as a name and shipping address, from a much more limited subset of those visitors.  For example, an e-commerce business may log hundreds of thousands of visits to its website from unique California IP addresses, but complete very few sales to California consumers.  Consequently, whether the Act applies to that business may turn on whether the IP address information meets the definition of “personal information” under the Act.     
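
The arithmetic behind that estimate is straightforward.  The short calculation below is only an illustration, not legal guidance, and the daily visitor count used in the example is hypothetical.

    import math

    CCPA_THRESHOLD = 50_000   # consumers, households, or devices per year
    DAYS_PER_YEAR = 365

    # Average daily unique California visitors needed to reach the threshold within a year
    daily_visitors_needed = math.ceil(CCPA_THRESHOLD / DAYS_PER_YEAR)
    print(daily_visitors_needed)  # 137 -- roughly the 140-per-day figure cited above

    # Hypothetical example: a site logging 400 unique California IP addresses per day
    annual_california_ips = 400 * DAYS_PER_YEAR
    print(annual_california_ips >= CCPA_THRESHOLD)  # True -- the 50,000 threshold is met on IP data alone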

Posted in CCPA, Privacy, Regulations

Is Privacy Profitable?

It is evident that a company must invest in its privacy practices to meet legal requirements if it wants to avoid investigation costs and potential civil penalties.  But can investment in privacy, data security, and data management bring benefits to the organization beyond those of bare legal compliance?  A recent Data Privacy Benchmark Study by Cisco suggests that it can.  According to the study, the organizations surveyed realized healthy returns on their privacy spend.  And interestingly, organizations with more robust privacy programs generally got a better return on further investment.  The survey is admittedly subjective and imprecise.  For example, it simply asked survey participants to estimate the value of the return they received on their investment in privacy.  Nevertheless, the survey at least gives valuable insight into the areas where organizations believe investment in privacy and data management brings broader benefits.

Operational Efficiencies

Investments in privacy and data management can bring operational efficiencies to an organization.  As a company grows, its data management practices must grow with it.  For example, a small organization may be able to get along just fine with an ad hoc approach to data management that is not formalized, documented, or systematic.  As the business and its data inventory grow, however, such informal systems can become unwieldy and wildly inefficient.  Yet inertia or a failure to prioritize can lead to neglecting investment in privacy and data management.  Therefore, renewed focus and investment in a company’s data management practices can lead to less duplication, improved workflows, and cost reductions.  A well-planned approach is also more scalable, so that the organization can continue to reap the benefits of increased efficiency even as it continues to grow. 

Preventing & Mitigating Data Security Incidents

Investments in privacy and data management can also help companies avoid the costs associated with data breaches and other data security incidents.  Of course, investments in new technologies can help an organization keep its data secure.  But investment should go beyond technology as well.  Investments in training programs can ensure that all employees know the content and importance of the company’s privacy practices.  Training can also help employees avoid becoming victims of social engineering attacks that may compromise company data systems.  By investing in training and technologies that will help to prevent data security incidents, companies can save the costs of breach notification, customer ill will, litigation, investigations, and fines.

Additionally, companies with robust privacy and data security practices can more quickly and efficiently respond to and recover from data security incidents should they occur.  An updated, comprehensive, and rehearsed incident recovery plan can help a company avoid extensive revenue loss by quickly getting critical systems back online after a data security incident.  This is truly a case where an ounce of prevention is worth a pound of cure, and continuing investment now can save a company countless dollars later.

Increased Sales

Privacy is becoming a key touchpoint with consumers.  This is evident in Apple’s recent push to tout the privacy features of its latest iPhone.  This benefit, however, is not limited to companies that look to market privacy overtly.  Both consumers and the law increasingly demand that companies be transparent about their privacy practices.  No company wants to disclose privacy practices that show it is woefully behind its competitors or standard practices.  A commitment to privacy, on the other hand, is likely to result in better sales, brand recognition, and customer loyalty.

Companies that act as vendors or service providers can also benefit substantially from investments in privacy.  Clients of these companies do not want to risk their own reputations by engaging vendors or service providers with questionable privacy practices.  Due diligence with respect to privacy and data security is increasingly becoming a key part of vendor management.  These companies, therefore, must ensure that their privacy practices meet or exceed industry standards, or else they risk losing key contracts and relationships with their clients. 

Here, investment in privacy certifications can play a key role.  Certifications such as EU-US and Swiss-US Privacy Shield, APEC Cross-Border Privacy Rules (CBPR), and ISO/IEC 27001 or ISO/IEC 27701 can serve as important proxies for signaling an organization’s commitment to privacy.  Investment in gaining and maintaining such certifications can reduce transaction costs by giving potential customers an easily and quickly recognizable sign that a company’s privacy and data management practices are in line with industry standards and best practices.      

Increased Investment

Investing in privacy and data management can make an organization more attractive for investment.  Well-informed investors may scrutinize a public company’s privacy practices when deciding whether to invest.  The Securities and Exchange Commission has issued interpretive guidance on disclosure of cybersecurity risks and incidents, recognizing that these subjects can materially affect investment decisions.  Senators have introduced a bill that would require publicly traded companies to disclose cybersecurity expertise at the board level.  In such an environment, a public company that lags behind on its investments in privacy and data security risks leaving investor money on the table.

Similarly, companies in the mergers and acquisitions market should view investment in privacy and data security as essential to maximizing the company’s value.  Acquiring companies are putting increased emphasis on the privacy practices of target companies in due diligence.  After all, no one wants to purchase a company that is at risk of becoming a financial burden due to costs associated with prior data breaches or sloppy data management practices.  In addition, the more developed a company’s data management practices are, the more cleanly the acquiring company can integrate them into its own systems and operations.  Simply put, organizations that have invested the time and money to ensure their privacy practices are solid and up to date make more enticing targets than those that have not.

Successful businesses are those that properly determine where they should deploy their limited funds to get the best return on investment.  Recent trends show that investment in privacy and data security is an important part of that conversation.

Posted in Data Security, Privacy

In Search Of A Federal Data Privacy Law

In the absence of a comprehensive federal data privacy and data security law, states continue to fill the gap. The California Consumer Privacy Act took effect on January 1, 2020, and several other states have similar laws under consideration. Nevertheless, in search of a federal solution, two data privacy bills, one from each side of the aisle, are spurring debate in the Senate. Senator Maria Cantwell (D-WA) and several of her Democratic colleagues have introduced the Consumer Online Privacy Rights Act (COPRA), while Senator Roger Wicker (R-MS) has unveiled the United States Consumer Data Privacy Act (CDPA). Both bills share many similarities, but the differences between them are significant as well.

What Entities Are Covered?

COPRA would apply to any entity that is subject to the Federal Trade Commission Act and processes or transfers covered data. CDPA would cover those entities as well, along with common carriers and non-profit organizations. Both bills have exceptions for small businesses, and both define a small business as one that, over the preceding 3 years, on average, had annual gross revenues of $25,000,000 or less, processed the covered data of less than 100,000 individuals or devices, or derived less than 50 percent of its revenue from transferring covered data. While COPRA excludes these small businesses completely, CDPA excludes them only from the access, correction, deletion, and portability provisions, along with the data minimization requirements.

What Data Is Covered?

Both bills make a distinction between “covered data” and “sensitive covered data.” Both bills similarly define covered data generally as information that identifies or is linked or reasonably linkable to an individual or consumer device. The COPRA definition, however, also specifically includes “derived data,” which it defines as data that is derived from other information sources about an individual, household, or device. Both bills exclude deidentified data, employee data, and publicly available information from the definition of “covered data.” CDPA also excludes aggregated data from its definition of covered data.

Posted in Data Security, Legislation, Privacy

Google Partners with Ascension To Store and Analyze Millions of Patient Health Records

Google has confirmed that it is working with Ascension, one of the nation’s largest health systems, in a project that will involve the health data of millions of Americans.  Google and Ascension have partnered in a project to store and analyze patient data with the intended goal of using Google’s artificial intelligence tools to enhance patient care and medical decision making.  As a result of this partnership, it has been estimated that over 100 Google employees may have access to sensitive patient data such as name, birth date, diagnoses, and treatments.  Such access by Google to millions of patients’ health data has resulted in some concern over how the data will be protected, including a recently announced inquiry into the relationship by the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”).  OCR has stated that it “would like to learn more information about this mass collection of individuals’ medical records with respect to the implication for patient privacy under HIPAA.”  Ascension has said that the project with Google has complied with the law and followed the healthcare organization’s “strict requirements for data handling.”

We will continue to follow this important story.  Several other tech companies are also trying to gain a bigger share of America’s health care market, and those efforts will have to be balanced against patient data privacy and security concerns.

Posted in Data Security

New York AG Files Lawsuit Against Dunkin’ Donuts For Attacks On Customer Accounts

On September 26, 2019, New York Attorney General Letitia James filed a lawsuit against Dunkin’ Brands, Inc., the franchisor of Dunkin’ Donuts (“Dunkin’”).

The lawsuit involves security issues surrounding Dunkin’s stored value cards, which customers can use to purchase Dunkin’ food and merchandise.  Customers can create an online account through Dunkin’s website or mobile app, and then manage their card through that account.  Customers can store credit card information in their account to “reload” their cards.

The lawsuit alleges that beginning in early 2015, Dunkin’ customer accounts were targets of credential stuffing attacks (i.e., repeated attempts to gain access to an account through the use of username and password combinations that were previously stolen in an unrelated data breach).  If successful in logging in to a customer account, the attackers could access the customer’s name, email address, profile ID, and the card numbers and PINs for all Dunkin’ stored value cards associated with the customer’s account.  By August 2015, over 19,000 customer accounts had allegedly been compromised.

The lawsuit alleges that Dunkin’ was aware of these attacks as early as May 2015, but failed to take any remedial action for several years.  The developer of the Dunkin’ mobile app noticed higher than expected traffic, consistent with a credential stuffing attack, and alerted Dunkin’ in June 2015.  But, according to the lawsuit, Dunkin’ did not investigate the issue, implement additional security measures, or take steps to identify customer accounts that might have been compromised.

Then, in the fall of 2018, attackers gained access to more than 300,000 customer accounts through credential stuffing attacks.  Approximately 175,000 of those customer accounts had at least one stored value card associated with them.  According to the lawsuit, while Dunkin’ notified the affected customers in November 2018, that notification implied only that unauthorized third parties may have attempted to log in to the customer accounts, when in fact those accounts had actually been accessed by an unauthorized party.

The lawsuit asserts causes of action under New York law for repeated and persistent fraudulent business conduct, deceptive business practices, and false advertising.  It also alleges violation of New York’s data breach notification law.  The lawsuit alleges that Dunkin’ violated those laws by misrepresenting to its customers the steps Dunkin’ took to safeguard customer accounts, failing to properly investigate and provide notification of the breaches, and misrepresenting the nature of the attacks.

The case illustrates the importance of acting quickly to remediate and investigate suspected data breaches and thoroughly documenting the resulting analysis and course of action.  For example, Dunkin’ stated that the accounts breached in 2015 did not contain any customer payment card data, and therefore, customer notification was not necessary.  Comprehensive documentation of the steps Dunkin’ took to make this determination could provide powerful evidence that it did not violate the law.  With regard to the 2018 breach, Dunkin’ states that it properly notified affected customers.  Again, documentation of the steps that Dunkin’ took to identify compromised accounts and mitigate the risk of harm to its customers will be a key component of its defense.

Posted in Data Breach, Data Security

Privacy Primer: Family Educational Rights and Privacy Act (FERPA)

FERPA is a U.S. law, passed in 1974, that protects the privacy of student educational records.  FERPA applies to all schools, from elementary schools to postsecondary education institutions, that receive federal funds under a program of the U.S. Department of Education.  FERPA and the regulations promulgated under it provide a right to inspect educational records, a right to request amendment of educational records, and a right to privacy of educational records.

First, the rights under FERPA apply to “educational records,” which are records that are directly related to a student and are maintained by the educational institution or by a party acting on the institution’s behalf.  Educational records, however, do not include, for example, personal notes, records of law enforcement units (i.e., campus police), or employment records of students who may also be employees of the institution.

The right to inspect educational records initially belongs to parents.  Once a student turns 18 or attends school beyond the high school level, he or she becomes an “eligible student,” and the inspection rights transfer from the parent to the student.

Parents or eligible students have a right to review the student’s educational records and request that the school amend records that they believe are inaccurate, misleading, or in violation of the privacy rights of the student.  If the school does not make a requested amendment, the parent or eligible student has a right to a formal hearing with the school.  If the school decides not to make the requested amendment after the hearing, the parent or eligible student has the right to place a statement in the record as to why he or she believes the information is inaccurate, misleading, or in violation of the privacy rights of the student.  Any time the school discloses the disputed part of the record, it must also disclose the parent or eligible student’s statement.

Covered institutions generally cannot disclose to a third party any personally identifiable information from an educational record of a student without the parent or eligible student’s written consent.  The written consent must specify the records that the institution can disclose, the purpose of the disclosure, and to whom the institution can make the disclosure.

Covered institutions, however, can disclose personally identifiable information from an educational record without written consent to the following parties for the following reasons:

  • To school officials who have a legitimate educational interest in the information
  • To officials of another school for purposes related to a student’s enrollment in or transfer to that school
  • To certain federal officials or state and local educational authorities
  • To appropriate parties in connection with a student’s application for financial aid
  • To organizations conducting studies for a school to develop, validate, or administer predictive tests, administer student aid programs, or improve instruction
  • To accrediting organizations
  • To comply with a judicial order or lawfully issued subpoena, after making a reasonable effort to notify the parent or eligible student of the order or subpoena before disclosure, so that the parent or eligible student may seek a protective order
  • To appropriate parties in connection with a health or safety emergency
  • To comply with laws regarding registered sex offenders
  • To parents of eligible students at postsecondary education institutions who are under 21 and commit a disciplinary violation with respect to the use or possession of alcohol or a controlled substance
  • To appropriate parties in connection with disciplinary proceedings at postsecondary education institutions, provided that the student is an alleged perpetrator of a violent crime or non-forcible sex offense and has committed a violation of the institution’s rules or policies


Covered institutions can also disclose “directory information” without written consent.  Directory information consists of information that would not generally be considered an invasion of privacy if disclosed, such as a student’s name, address, telephone number, e-mail address, field of study, grade level, enrollment status, dates of attendance, or degrees, honors, and awards received.  Before disclosing directory information, however, the institution must give notice to parents and attending students as to what categories of information it considers directory information and give them a reasonable opportunity to opt out of having any or all of those categories designated as directory information for the particular student.

The Department of Education is responsible for enforcing FERPA.  There is no private right of action, but parents or eligible students who believe an institution has violated FERPA can file a complaint with the Department of Education’s Office of the Chief Privacy Officer.  The Office will investigate the claim and issue findings.  If it determines that the institution violated FERPA, it may set forth corrective actions that the institution must take.  If the institution fails to take such corrective actions within a reasonable time, as set by the Office, the Department can withhold further payments to the institution or terminate the institution’s eligibility to receive funding under any Department program.

Posted in Privacy

Ninth Circuit Finds Article III Standing For Procedural Violation Of Biometric Privacy Law

The Ninth Circuit Court of Appeals has written the latest chapter of the ongoing saga of Article III standing for violations of the Illinois Biometric Information Privacy Act (“BIPA”).  BIPA requires, among other things, that before collecting a person’s biometric information, a company must provide certain notices to the person and obtain a written release.  Under BIPA, companies that collect biometric information must also establish a retention schedule so that biometric information is not stored for longer than needed.

In light of the Supreme Court’s decision in Spokeo, Inc. v. Robins, however, several district courts had ruled that a bare procedural violation of BIPA, absent any additional harm, was insufficient to confer standing in federal court, because such a procedural violation does not constitute an injury-in-fact.  Yet the January 25, 2019 Illinois Supreme Court decision in Rosenbach v. Six Flags Entertainment Corp. appeared to elevate a procedural violation of BIPA over the threshold of concrete harm, albeit in the context of statutory interpretation.

In Patel v. Facebook, the Ninth Circuit took a view similar to the Illinois Supreme Court, affirming a district court ruling that a procedural violation of BIPA is sufficient to confer Article III standing, because the procedural violation, in and of itself, constitutes the concrete harm necessary to satisfy the injury-in-fact requirement of Article III.

The case involves allegations regarding Facebook’s Tag Suggestions feature, which was added in 2010.  When Facebook users upload a photo, they can “tag” the people in the photo, which then links to that person’s Facebook profile.  The Tag Suggestions feature uses facial recognition technology to analyze a photo when it is uploaded and compare any faces in the photo to Facebook’s database of user face templates.  If the feature detects that the photo depicts a Facebook friend of the user who uploaded the photo, it suggests that the user tag the friend in the  photo.

The plaintiffs in the case allege violations of BIPA because Facebook collected a scan of their face geometry from uploaded photos to build its user face templates database without obtaining a written release and without establishing a BIPA-compliant retention schedule.  Facebook moved to dismiss the case for failure to meet the requirements of Article III standing, but the district court denied the motion.

On appeal, the Ninth Circuit affirmed the district court.  First, the court noted that a concrete injury does not necessarily have to be a tangible injury.  To determine whether an intangible injury is nevertheless concrete, the court considers “both history and legislative judgment.”  It also noted that the violation of a statutory right that protects against the risk of real harm may be sufficient to constitute an injury-in-fact, even absent any additional harm beyond the statutory violation.  To determine whether a statutory violation is a concrete injury, the court asks (1) whether the statutory provisions at issue were established to protect the plaintiff’s concrete interests, and if so (2) whether the specific procedural violations alleged actually harm, or present a material risk of harm to such interests.

The court answered both questions “yes” under plaintiffs’ allegations in the case.  It stated that privacy rights were well established in the common law and that technological advancements can present a real threat to those rights.  It also found the “judgment of the Illinois General Assembly” to be “instructive and important” on the matter.  It found that the Illinois General Assembly passed the procedural protections of BIPA as a means to protect a substantive right to biometric privacy.  It therefore concluded that the procedural protections in BIPA “were established to protect an individual’s ‘concrete interests’ in privacy, not merely procedural rights.”

It then turned to the question of whether the alleged procedural violations actually harmed or presented a material risk of harm to the plaintiffs’ privacy interests.  It concluded that they did.  Because the statutory right at issue is the right to retain control over one’s biometric information,  Facebook’s alleged conduct necessarily harmed that right.  In coming to its conclusion, the court cited to Rosenbach for the proposition that “when a private entity fails to adhere to the statutory procedures [in BIPA] the right of the individual to maintain his or her biometric privacy vanishes into thin air.”

Because the plaintiffs satisfied both prongs of the relevant test, the court ruled that they had sufficiently alleged a concrete injury-in-fact to satisfy the requirements of Article III standing.

Whether other circuits will rule similarly remains to be seen.  It is possible that the Supreme Court will ultimately have to resolve the issue.  But in the meantime, whether a BIPA litigant can successfully bring a claim in federal court (or whether a BIPA defendant can successfully remove a claim to federal court), may very well depend on where the claim is filed.

Posted in Data Security, Privacy

Year To Date Changes To State Data Breach Notification Laws

With so much attention being paid to the impending California Consumer Privacy Act, it can be easy to forget that other states have privacy and data security laws too.  And those laws change routinely, with potentially significant impacts on businesses.  Here is a quick rundown of changes to state data breach notification laws that have been enacted since the beginning of 2019.

Arkansas:  On April 10, 2019, the Arkansas General Assembly enacted amendments to the Arkansas Personal Information Protection Act.  The amendments added “biometric data,” such as fingerprints, retina scans, voiceprints, and DNA data, to the law’s definition of “personal information.”

The amendments also require personal data holders to notify the Arkansas attorney general if a data breach involves more than 1,000 individuals.  The attorney general must be notified at the same time the affected individuals are notified or within 45 days after the breached entity determines there is a reasonable likelihood of harm to customers, whichever occurs first.

The amendments also require a breached entity to retain a copy of the written determination of a data breach and supporting documentation for five years after the breach has been detected.  If the attorney general submits a written request for this documentation, the entity must provide it within 30 days.

The amendments became law on April 15, 2019, and become effective on July 23, 2019.


Illinois:  On May 27, 2019, the Illinois General Assembly passed amendments to the Illinois Personal Information Protection Act with regard to data breach notifications.  Under the amended law, in addition to any obligation they may have to notify the affected individuals, data collectors are also required to notify the Illinois attorney general if a data breach involves the personal information of more than 500 Illinois residents.  Data collectors must give notice to the attorney general “in the most expedient time possible and without unreasonable delay,” but no later than when notice is given to the affected individuals.

The General Assembly sent the bill to Governor J.B. Pritzker on June 25.  He has 60 days to approve or veto the bill.  Absent action from Governor Pritzker, the bill will automatically become law upon expiration of the 60 days.     


Maryland:  On April 30, 2019, Governor Larry Hogan signed a bill amending the security breach notification requirements of Maryland’s Personal Information Protection Act.  The amendments expand data breach investigation requirements to businesses that maintain computerized personal data but do not own or license that data.  When there is a breach in such a situation, notification requirements fall on the business that owns or licenses the personal data.  The business that maintains the data, however, cannot charge the owner or licensee a fee for providing the information the owner or licensee needs to make the required notifications.

The new provisions become effective on October 1, 2019.


New Jersey:  On May 10, 2019, New Jersey enacted amendments to certain provisions of its data breach notification laws.  These amendments expand the definition of “personal information” to include a person’s first name or first initial and last name when linked with a “user name, email address, or any other account holder identifying information, in combination with any password or security question and answer that would permit access to an online account.”  If a breach includes no additional personal information as defined in the law, the entity may provide notice electronically and direct the individual whose information has been breached to “promptly change any password and security question or answer” or take “other appropriate steps to protect the online account . . . and all other online accounts for which the customer uses the same user name or email address and password or security question or answer.”  An entity that provides a customer email account, however, cannot send notification of a data breach to an email address that is subject to that data breach.

The amendments become effective on September 1, 2019.


Oregon: On May 24, 2019, Governor Kate Brown signed into law amendments to the Oregon Consumer Identity Theft Protection Act, which will be renamed the Oregon Consumer Information Protection Act when the amendments become effective on January 1, 2020.  The amendments make a distinction between a “covered entity” and a “vendor” that is similar to the “controller” and “processor” distinction in the GDPR.  A covered entity is an entity that “owns, licenses, maintains, stores, manages, collects, processes, acquires, or otherwise possesses personal information” in the course of its “business, vocation, occupation or volunteer activities.”  A vendor is an entity “with which a covered entity contracts to maintain, store, manage, process or otherwise access personal information for the purpose of, or in connection with, providing services to or on behalf of the covered entity.”

A vendor who discovers a breach or has reason to believe a breach has occurred must notify the covered entity “as soon as practicable but not later than 10 days” after discovering the breach or having reason to believe a breach has occurred.  The covered entity is then responsible for giving the requisite notice to the affected individuals.  The vendor must also notify the Oregon attorney general if the breach involves more than 250 Oregon residents or the vendor is unable to determine the number of Oregon residents affected.  This is in addition to any requirements the covered entity may have to notify the attorney general.


Texas:  On June 14, 2019, Texas Governor Greg Abbott signed into law amendments to the Texas Identity Theft Enforcement and Protection Act.  Under the prior version of the law, holders of sensitive personal data had to disclose any data breach to the individuals affected “as quickly as possible.”  The amendments change this standard to “without unreasonable delay and in each case not later than the 60th day after the date on which the person determines that the breach occurred.”  The amendments also now require notification to the Texas attorney general if a breach involves at least 250 Texas residents.  These new notification provisions go into effect on January 1, 2020.

The law also creates the Texas Privacy Protection Advisory Council “to study data privacy laws in this state, other states, and relevant foreign jurisdictions.”  The council will consist of 15 Texas residents: five state representatives and two industry representatives appointed by the speaker of the house, five senators and two industry representatives appointed by the lieutenant governor, and three industry representatives and two representatives from non-profits or academia appointed by the governor.  The council must report its findings and recommendations for any changes to Texas law no later than September 1, 2020.


Utah:  On May 14, 2019, certain amendments to Utah’s Protection of Personal Information Act became effective.  Under the prior version of the law, notification of a data breach could be provided by publication in a newspaper of general circulation and in accordance with general legal notice requirements.  Under the new law, notice by publication is permitted only for Utah residents for whom notification by other permissible means “is not feasible.”

The amendments also lifted the cap on civil penalties for data breaches that involve 10,000 or more Utah residents and 10,000 or more residents of other states.  They also set a 10-year limitations period for administrative enforcement actions under the Act and a 5-year limitations period for civil actions under the Act, both running from “the day on which the alleged breach of system security last occurred.”


Washington:  On May 7, 2019, Governor Jay Inslee signed a bill amending Washington’s data breach notification law.  The amendments require any notification involving a data breach of login credentials to “inform the person whose personal information has been breached to promptly change his or her password and security question or answer, as applicable, or to take other appropriate steps to protect the online account with the person or business and all other online accounts for which the person whose personal information has been breached uses the same user name or email address and password or security question or answer.”  This notification can be sent by email unless, of course, it is the login credentials to that email account that have been breached.  Under the amendments, breach notifications are now required to include the date of the breach and the date the breach was discovered.  And the maximum time to issue breach notification is lowered from 45 days to 30 days, subject to certain exceptions.

Under the amendments, breached entities must notify the Washington attorney general within 30 days of discovery of any breach involving more than 500 Washington residents.  Along with the information previously required, this notice must now also include the types of personal information breached, the time frame of exposure, a summary of the steps taken to contain the breach, and a sample copy of the security breach notification sent to the affected individuals.

The amendments become effective on March 1, 2020.


At least nine other state legislatures are currently considering bills that would create data breach notification obligations or modify those that are already in place.  As you can see, it is important for businesses to assess which state laws they are subject to and monitor them to stay informed as to how their legal obligations may change over time.

Posted in Data Breach, Data Security

Privacy Primer: Gramm-Leach-Bliley Act (GLBA)

GLBA, sometimes called the Financial Services Modernization Act of 1999, is a U.S. banking law that has important privacy and data security requirements for institutions that are subject to the law.  The law applies to “any institution the business of which is engaging in financial activities.”

GLBA’s primary purpose was to remove the barriers in the Glass-Steagall Act of 1933 and the Bank Holding Company Act that prevented a single organization from functioning as any combination of a commercial bank, an investment bank, and an insurance company.  Nevertheless, concerns arose over the need to protect consumer information as institutions merged these traditionally separate functions, thereby aggregating massive amounts of customer data.  Therefore, GLBA provided for a Safeguards Rule and a Privacy Rule to help protect customer data.

First, the Safeguards Rule requires financial institutions to put in place administrative, technical, and physical safeguards to protect personal information.  This rule requires financial institutions to develop a comprehensive, written information security program that is appropriate for the size and scope of the institution and the sensitivity of the personal information at issue.  Institutions must specifically designate an employee or employees to coordinate this program.  The information security program must identify risks to the security, confidentiality, and integrity of personal information and implement controls to guard against those risks.  The rule also requires institutions to test and evaluate the controls they put in place and appropriately modify their information security program in light of the results.

Next, the Privacy Rule requires financial institutions to provide certain notices with regard to how they share information.  The rule distinguishes between consumers and customers.  For example, an individual who discloses nonpublic personal information on a loan application is a consumer of the institution under GLBA, regardless of whether the institution ultimately approves the loan.  If the institution approves the loan and extends the requested credit, thereby establishing an ongoing relationship with the individual, the individual becomes a customer of the institution.

Under the Privacy Rule, financial institutions must provide “clear and conspicuous” notice of their privacy policies in several situations.  They must provide notice to a consumer before they share any nonpublic personal information about that consumer to an unaffiliated third party.  They must provide notice to a customer no later than the time at which the customer relationship is established, and at least annually thereafter for as long as the customer relationship continues.

In general, these notices must describe the categories of nonpublic personal information the institution collects and shares with affiliated and nonaffiliated third parties and explain the right to opt out of certain disclosures.  With limited exceptions, an institution cannot share an individual’s nonpublic personal information with a nonaffiliated third party without providing the required notice and affording the individual a reasonable opportunity to exercise his or her opt out rights.  Additionally, if an institution revises its privacy policy to allow it to disclose nonpublic personal information that it did not disclose under the old policy, the institution must provide a new privacy notice and afford consumers a reasonable opportunity to opt out before disclosing their information.

GLBA disperses enforcement power across a number of agencies, depending on the institution at issue.  For example, the Board of Governors of the Federal Reserve System has enforcement authority over member banks of the Federal Reserve System, the Securities and Exchange Commission has enforcement authority over brokers and dealers, and the Board of the National Credit Union Administration has enforcement authority over federally insured credit unions.  The Federal Trade Commission has enforcement authority over any financial institution that is not specifically under the authority of any other agency.  State insurance regulators have enforcement authority over insurance providers domiciled in their state.  In addition, while the Consumer Financial Protection Bureau does not have explicit power to enforce the GLBA Safeguards Rule or Privacy Rule, it has used its general authority over unfair, deceptive, or abusive acts or practices to bring enforcement actions against regulated entities that fail to abide by those rules.

Posted in Legislation, Regulations