Cybersecurity Updates Newsletter, Winter 2015

February 9, 2015 Advisory

We are pleased to share with you the Wiggin and Dana Cybersecurity and Privacy Practice Group Newsletter. We circulate this newsletter periodically by e-mail to bring to the attention of our colleagues the latest updates in the areas of cybersecurity and privacy, with reports on recent developments, cases and legislative/regulatory actions of interest, as well as happenings at Wiggin and Dana. We welcome your comments and questions.

Save the Date: 2015 Connecticut Privacy Forum

On April 23, 2015, Wiggin and Dana will be hosting the 2015 Connecticut Privacy Forum from 8:00 a.m. - 4:00 p.m. at the Omni New Haven Hotel at Yale. The morning session of this year's conference will focus on how organizations are managing increased cybersecurity and cybercrime risks after Target, Sony and Anthem; the emerging data privacy risks for businesses as they begin to embrace 'big data' and the 'Internet of Things'; and cases and issues to watch in 2015.

New This Year: The Connecticut Privacy Forum Workshop - a post-Forum program for in-house counsel, privacy officers, and risk managers focusing on industry-specific privacy and security requirements. The program will also include a discussion of 'in the trenches' compliance and risk-mitigation issues.

The program will feature a keynote speaker from government, along with leading policymakers, regulators, law enforcement officials, consultants and legal counsel. We are finalizing the topics, and a formal invitation will follow soon.

Independent Contractors, Outsourcing Providers and Supply Chain Vendors: The Weakest Link in Cybersecurity?

By John Kennedy and Patrick LaMondia, Wiggin and Dana LLP

Corporate executives who tell consumers that "our vendor was hacked and we lost your data" are increasingly likely to get about as much sympathy as the student who pleads "the dog ate my homework."

The Target Corporation's cybersecurity woes began when hackers executed a successful phishing attack on one of Target's vendors, an HVAC company based in Sharpsburg, Pennsylvania. (The vendor serviced refrigerators at Target locations.) The attack yielded the vendor's access credentials for a web portal used by Target to communicate with its vendors and contractors. Once inside this portal, the hackers were able to penetrate Target's internal network and successfully pull off what remains one of the largest data breaches ever reported (personal and financial information of up to 110 million individuals stolen in late 2013).

The Target incident demonstrates how third party vendors can become unwitting 'fifth columns' for cybercriminals who are going after larger corporate and government fish. Even as businesses struggle to shore up the technical security of their networks, computers, portable devices and software, many overlook a major 'back door' source of cyber risk: the third party contractors, suppliers, outsourcers, cloud services and other vendors who have been given direct or indirect access to sensitive company systems and data.

Any comprehensive approach to cybersecurity should take into account risks related to the organization's vendor relationships and procurement practices. Basic measures include a systematic approach to (1) due diligence on external vendors, (2) contract terms addressing security and continuing vendor oversight, and (3) integrating procurement practices into the company's overall security program.

  • Cybersecurity and privacy due diligence

Businesses should conduct reasonable due diligence on third party vendors who may present cybersecurity or privacy risk to the company. Indeed, in regulated sectors, such as healthcare and financial services, such diligence is required by law. The type and extent of the diligence will vary depending on the type of third party product or service involved and the sensitivity and legal requirements attaching to the information assets involved. But at a minimum, cyber due diligence should include assessing a vendor's:

    • Written privacy and cybersecurity policies, including data incident response, disaster recovery and business continuity policies;
    • Adoption of and adherence to recognized industry security standards and frameworks for information security (e.g., ISO standards, ISAE, PCI DSS and regulatory audit protocols, where applicable, under HIPAA or GLBA);
    • Results from recent cyber risk assessments, audits and testing of its systems, controls and threat environment;
    • Governance program for managing cybersecurity and privacy risk (including key management personnel and processes for adapting to new cyber threats);
    • Management of insider security risks through employment practices and training;
    • History of any security or privacy-related incidents (whether or not reported publicly), including actual or threatened unauthorized access to systems or data, claims, complaints, litigation, investigations or other legal or regulatory proceedings; and
    • Use of subcontractors who will have access to sensitive data or systems.

Cybersecurity due diligence should typically involve the company's subject matter experts on information technology and enterprise security. Where warranted by the perceived risks, such diligence will frequently include interviews with key personnel, customer reference checks, site visits, facilities inspections and system testing.

  • Contract Terms and Vendor Oversight

Security-conscious businesses are demanding tighter security assurances and commitments from their outside vendors in the post-Target world. Regulators, particularly in healthcare and financial services, require increasingly detailed security terms in vendor contracts. Where vendor relationships involve access to customer data and/or systems, a customer's exclusive reliance on conventional confidentiality clauses in a service contract may provide inadequate protection.

Outsourced data services contracts, for example, commonly include privacy and security-related terms such as:

    • Representations, warranties and covenants directed to minimum or baseline information security measures (such as encryption) of the vendor and/or specific security standards that will apply;
    • Detailed terms for the handling of security incidents and reportable data breaches, as well as related liability provisions;
    • Clear definitions of data ownership, use rights and restrictions (including permitted uses of de-identified or anonymized data);
    • Requirements for personnel and subcontractors who have data or system access;
    • The parties' respective obligations regarding compliance with privacy and data security laws;
    • Customer rights regarding auditing, inspection and monitoring of the vendor's performance of security-related functions; and
    • Special indemnification and liability limitation terms related to privacy and security claims.

Businesses should consider having a standard set of privacy and security clauses for use with vendors and other third party service providers. Where a vendor resists such terms or insists on its own, less customer-friendly contract paper (a common situation in the public cloud services market), businesses need to determine whether the privacy and security risks of proceeding with the vendor are acceptable from a total risk management perspective.

  • Cyber-aware procurement practices

Beyond conducting appropriate due diligence and negotiating improved vendor contracts, businesses may also want to reassess their procurement practices from a cybersecurity perspective. How risky (or outdated) is the company's current portfolio of vendor and supplier contracts in light of today's cyber threat environment? When was the last time vendor relationships were reviewed from a cyber-risk management standpoint? Is there a process in place for reviewing and amending vendor contracts coming up for renewal? Is there a common set of security requirements and standards applicable to all vendors, independent contractors and contract bids? Is the acquisition of software, apps, devices and cloud services by company personnel centralized and subject to compliance with the company's security policies (e.g., Bring Your Own Device policies)?

Addressing these and related questions will not guarantee cybersecurity for the corporate enterprise, but it can certainly tighten the latches on a frequently overlooked 'back door' risk of hacking and cyber-theft.

The FTC's Privacy Agenda for the Jetsons: Takeaways from the January 2015 Staff Report on The Internet of Things

By John Kennedy, Wiggin and Dana LLP

Last month the Federal Trade Commission ("FTC") released its long-awaited report on how consumer privacy and data security can co-exist with the 'Internet of Things' ("IoT") (the "Report"). To the FTC, IoT means "devices or sensors—other than computers, smartphones, or tablets – that connect, communicate or transmit information with or between each other through the Internet." 'Smart' homes, cars, glasses, health bracelets and other yet-to-be imagined gadgets connected to the Internet – the "things" of IoT – are the next phase in the evolution of the Internet. U.S. consumers are moving toward an automated household utopia that might stun even the Jetson family of 1960s TV fame. And all of these devices will collect and transmit data about their owners' lives, habits and peculiarities. This prospect motivated the FTC to convene a public workshop on IoT last year. The follow-up Report signals to industry how the FTC intends to regulate consumer privacy risks unique to IoT.

The FTC's Concerns About Data Security and IoT

The Report welcomes the expected benefits of innovation in the booming IoT market. Connected health monitoring devices, driverless cars, smart metering of utilities and other 'home automation systems' all promise great leaps in consumer health, safety and convenience. At this nascent stage of the IoT industry, the Report signals that the time for IoT-specific legislation and regulation has not yet arrived, lest innovation be stifled. But that does not mean the FTC is giving the IoT industry a free pass.

Exponential jumps in the number of Internet-connected sensors monitoring every crevice of consumers' lives and homes will create undeniable privacy and security risks. One estimate cited in the Report counts 25 billion connected devices as of 2015, a number that is expected to reach 50 billion by 2020. The amount of consumer data collected by this many connected data sensors will be measured in 'exabytes' (one exabyte is equivalent to 50,000 years of stored DVD video).

The Report notes that some security risks of the IoT are similar to those that already affect personal and mobile computing devices, such as data breaches (although the exposed data from the IoT may include not only credit card numbers but when the consumer showers and at what water temperature, or whether the home heat has been set low, indicating the residents are away in Florida). Similarly, hacks that plant malware in connected devices could convert them into launching pads for other forms of Internet mischief, such as distributed denial of service attacks and spamming bot networks (or 'botnets').

But IoT security risks go beyond harm to data and systems. They may pose direct threats to tangible property and the human body. Hacked IoT devices in cars could trigger loss of control of a moving vehicle. Hacks of implanted medical devices could compromise the monitoring of vital body systems or medication dosages. Voyeurs hacking into home web cams could turn intimate moments into public performances. (This last scenario formed the basis of the FTC's first IoT enforcement action, in 2014.) A further FTC concern is that the emerging IoT marketplace includes technology newcomers that either don't understand Internet security risks or will put security on the back burner while rushing devices to market. The Report is a polite but firm warning shot to these industry participants.

Privacy Notices and User Consent: Not Dead Yet

Security isn't the FTC's only concern. The IoT market also promises to be another privacy free-for-all comparable to the early days of mobile apps.

Fair Information Practice Principles (FIPPs) (e.g., consumer notice, consent, opt-out and opt-in rights) have long been under assault as unworkable in the era of web and mobile-based commerce. Industry witnesses at last year's IoT hearings reminded the FTC that online privacy notices are ineffective and largely ignored by consumers. Many IoT devices, moreover, will have little or no user interface for displaying privacy notices, making it even more challenging to deliver traditional privacy statements and opt-out notices to consumers in any meaningful way.

The Report also recognizes that the IoT challenges the 'data minimization' principle of the FIPPs. Industry asks why it should limit its collection of consumer data only to what's needed to provide a current IoT service when additional consumer data eventually may yield more insights on what consumers really want. The better approach, say industry commentators, is to focus on self-regulatory codes that set standards for appropriate and accountable uses of consumer data.

The FTC staff sympathizes with industry's frustration with the old notice and consent model, but partially rejects the so-called 'use-based' model for protecting consumer privacy. The Report defends the principles of data minimization, notice and consumer choice about data collection. It encourages IoT companies either to (i) limit sensitive data collection, (ii) de-identify the data they collect (and keep it de-identified), or (iii) obtain consumer consent for data collection that consumers would find 'unexpected' in the context of using the product or service being offered. As for the challenges to providing 'notice and consent' on screen-less IoT devices, the Report urges industry to get creative and offers at least nine different approaches to how this might be done.

FTC Recommendations to IoT Market Participants

The Report includes guidance to industry on how best to address security and privacy risks when entering the IoT market (and perhaps avoid possible FTC inquiries):

Security: Device makers and software developers should:

  • Practice privacy and security 'by design' by integrating controls and privacy features into products from the initial design phase, through pre-launch testing and into post-sale maintenance and software patching;
  • Conduct risk assessments that are relevant to the particular risk environment of the device and the sensitivity of data to be collected;
  • Pay particular attention to privacy-favorable defaults in device operation, data encryption, access controls and secure user authentication of devices;
  • Create security roles in the organization and train personnel (including engineers and programmers) in security awareness;
  • Vet third party providers (e.g., software developers, hardware manufacturers, hosting providers) for their security practices and monitor their services for compliance.
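The 'access controls' and 'secure user authentication' points above can be sketched in code. The example below is purely illustrative and not drawn from the Report; the token value and function names are hypothetical. It shows two common baseline practices: storing only a salted hash of a device credential (never the raw secret) and comparing credentials in constant time to avoid timing leaks.

```python
import hashlib
import hmac
import secrets

def hash_token(token: str, salt: bytes) -> bytes:
    """Derive a storable hash of a device token using PBKDF2, a standard key-derivation function."""
    return hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 100_000)

# Enrollment: the service stores only the salt and the hash, never the raw token.
salt = secrets.token_bytes(16)
stored = hash_token("device-secret-123", salt)

def authenticate(presented: str) -> bool:
    """Check a presented token in constant time so response timing leaks nothing."""
    return hmac.compare_digest(hash_token(presented, salt), stored)

print(authenticate("device-secret-123"))  # True
print(authenticate("wrong-token"))        # False
```

A compromised device database then yields only salted hashes, not reusable credentials, which is one concrete way to apply 'security by design' at the data layer.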

Privacy: Privacy measures should be designed into products and services, including:

  • Limiting the collection of personal data to that which consumers would reasonably expect is needed in order to use the product or service, or de-identifying additional or sensitive consumer data;
  • Using innovative techniques to provide clear and prominent notice to their IoT consumers when data collection exceeds what is reasonably required to operate a product or receive a service.

Following the issuance of the Report, makers and would-be makers of IoT devices should consider themselves warned: the FTC will be looking out for the privacy of the Jetsons.

Prepare Now for the Second Wave of HIPAA Audits

By Michelle DeBarge, Jody Erdfarb, Wiggin and Dana LLP

The HITECH Act of 2009 for the first time required the United States Department of Health and Human Services (HHS) to periodically audit covered entities and business associates to ensure HIPAA compliance. Since then, the HHS Office for Civil Rights (OCR) has performed an initial round of audits of covered entities, and it will be conducting a second round of audits covering both covered entities and business associates very soon.

Phase 1

The first phase of HIPAA audits, labeled by OCR as the "pilot audit program," involved 115 covered entities. OCR representatives have unofficially discussed the results of the audits at various conferences and noted some interesting findings:

  • 11% of the audited covered entities (two health care providers, nine health plans, and two health care clearinghouses) had no findings or observations.
  • 60% of the findings involved the Security Rule, 30% involved the Privacy Rule, and 10% involved the Breach Notification Rule.
  • 58 of the 59 providers with findings had at least one finding or observation involving the Security Rule. In addition, two-thirds of audited entities did not have a complete and accurate Security Rule risk assessment: 47 of the 59 providers, 20 of the 25 health plans, and 2 of the 7 clearinghouses.
  • The most common cause identified for the findings and observations was that the entity was unaware of the requirement. Other causes included lack of sufficient resources, incomplete implementation, and complete disregard of the requirement.
  • 59% of covered entities that responded to a post-audit survey reported that they were not aware of the pilot audit program prior to being selected. Most of these entities were also not aware that the audit protocol was available on the OCR website.

Phase 2

OCR has not officially released any information about Phase 2 of the HIPAA audits. However, on February 24, 2014, OCR published a notice in the Federal Register seeking comments on its plan to survey 1200 covered entities and business associates in order to generate a pool of organizations to target in its second round of nationwide HIPAA compliance audits.

Also, OCR representatives have informally provided information about the new wave of audits. Some of this information has been inconsistent - perhaps due to OCR changing its plans over time. Generally, however, OCR has confirmed that it will be auditing both covered entities and business associates soon. Most recently, Jocelyn Samuels, Director of OCR, told reporters that although the audits were supposed to begin in late 2014, they have not started yet. Samuels said that her office is still deciding how the investigations will be conducted and "will be making announcements about the program in the weeks and months to come." Some reports attribute the delay to a re-tooling of the OCR audit web portal to better enable audited covered entities and business associates to upload documentation.

Indications are that between 300 and 350 covered entities and business associates will be targeted. Covered entities will be asked to identify their business associates, from which OCR will select the business associates to be audited. OCR apparently will not use consultants to perform the audits as it did in the pilot audit program. Initially, OCR suggested that the next round of audits would be desk audits, where submitted documentation is reviewed, but now there is some indication that the audits will be a combination of both desk audits and comprehensive on-site audits. OCR representatives have stated that entities selected for desk audits will have only two weeks to provide all of the requested documentation and that auditors will not contact the entity for clarifications or to ask for additional information. There have been varying reports regarding the focus of the second phase of audits. Topics have included risk analysis and risk management, notice of privacy practices, access rights, content and timeliness of breach notification, device and media controls, transmission security, training, and encryption.


There are several lessons to be learned from the information that OCR has shared.

Be prepared: Covered entities and business associates should ensure that their HIPAA documentation is readily available, up-to-date, accurate, and in full compliance with all applicable requirements. Required documentation includes policies and procedures, business associate agreements, risk assessments, breach logs, and employee training. Since OCR will be conducting site visits, covered entities and business associates should ensure that all written policies and procedures are fully implemented as drafted. A HIPAA compliance program that is robust on paper, but that is not fully operational "on the ground," will not suffice. All employees should understand the rules applicable to their job responsibilities, be able to identify the privacy and security officers, and know how to report suspected problems. Covered entities and business associates may want to assign a team to handle audit readiness and conduct an internal HIPAA audit to identify and correct any issues now.

Do Not Forget the Risk Assessment: OCR has shown more and more interest in ensuring that covered entities and business associates perform the risk assessment required by HIPAA's Security Rule, and update that assessment over time to account for technological and operational changes. Failure to provide documentation of this risk assessment will be an immediate red flag for regulators.

Monitor the OCR Website: Covered entities and business associates should closely monitor the OCR website for audit-related announcements. Specifically, be on the look-out for an updated audit protocol. Becoming familiar with OCR's website is important because OCR provides a number of important and useful educational resources to assist covered entities and business associates in complying, including a security risk assessment tool and a mobile device security resource kit.

Expect More Auditing and Enforcement: Despite its slow start, OCR will roll out Phase 2 HIPAA audits soon and implement additional auditing programs in the future. OCR takes HIPAA compliance very seriously and is likely to take enforcement action in response to a covered entity's or business associate's disregard of its HIPAA compliance obligations. HIPAA enforcement from both federal and state agencies has risen sharply, with enforcement actions becoming more frequent and settlement amounts climbing ever higher. Take advantage of the delayed start to the Phase 2 audits and ensure that you are well prepared before OCR comes knocking.

Does Your Company Have a Policy Protecting Employees' Social Security Numbers and Personal Information? (Hint: It Should).

By: John Kennedy and Joshua Walls, Wiggin and Dana LLP

If one of your company's New Year's resolutions is to overhaul your employee handbook, you are not alone. Employers routinely ring in the New Year by conducting handbook audits designed to ensure their policies and procedures line up with the most recent legal developments. Given the volume of employment regulations out there, your company also wouldn't be the first to overlook some of the lesser known employment laws during this process. With cybercrime and data breaches becoming daily headlines, Connecticut employers should make sure their handbooks address section 42-471 of the Connecticut General Statutes, first enacted in 2008, which requires companies to protect employees' Social Security numbers and personal information.

Think about it: from tax and employment eligibility forms to payroll and benefits paperwork, you undoubtedly collect and store your employees' Social Security numbers. The same can be said for "personal information," which the statute broadly defines as "information capable of being associated with a particular individual," including a driver's license number, account number, credit or debit card number, passport number, alien registration number, or health insurance identification number. Should your company fall victim to a data breach, these hallmarks of identity theft would be among the assets most coveted by thieves.

According to section 42-471(a), companies possessing personal information—pretty much every company in Connecticut—must "safeguard the data, computer files and documents containing the information from misuse by third parties, and shall destroy, erase or make unreadable such data, computer files and documents prior to disposal." The statute does not spell out precisely how the information should be "safeguarded," but storing it in locked file cabinets and/or password-protected computer files is an obvious way to start. If your company has a chief information officer or chief information security officer, you should confirm the company is using appropriate practices for securing personal information, including Social Security numbers, throughout the entire data lifecycle (i.e., from acquisition through disposal). Ultimately, the further you distance such data from the company's everyday operations, the safer it will be.
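One practical way to keep raw numbers out of everyday operations is to mask them for display and keep only a salted hash for record matching. The sketch below is a hypothetical illustration, not a statutory requirement, and the function names are our own; it shows the general technique rather than any mandated control.

```python
import hashlib
import secrets

def mask_ssn(ssn: str) -> str:
    """Show only the last four digits, the common display convention."""
    digits = ssn.replace("-", "")
    return "***-**-" + digits[-4:]

def ssn_fingerprint(ssn: str, salt: bytes) -> str:
    """Salted hash usable for matching records without storing the raw SSN."""
    digits = ssn.replace("-", "")
    return hashlib.pbkdf2_hmac("sha256", digits.encode(), salt, 100_000).hex()

salt = secrets.token_bytes(16)  # generated once and stored securely
print(mask_ssn("123-45-6789"))  # ***-**-6789
```

Everyday files and reports would then contain only the masked form, while the fingerprint allows de-duplication or lookups without the number itself.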

Companies must go a step further with Social Security numbers, however. Under section 42-471(b), employers are required to "create a privacy protection policy which shall be published or publicly displayed." The policy must: "(1) protect the confidentiality of Social Security numbers, (2) prohibit unlawful disclosure of Social Security numbers, and (3) limit access to Social Security numbers." When preparing your company's privacy protection policy, it would behoove you to consider section 42-470, a related statutory provision, which forbids the posting, public display, or use of Social Security numbers to access products or services, as well as the transmission of Social Security numbers over the internet "unless the connection is secure or the Social Security number is encrypted," and the use of Social Security numbers, on their own, to access websites. If your company is not following these basic rules, the time to start is now.

By no means must your "privacy protection policy" explain, in excessive detail, what your company intends to do to protect Social Security numbers, especially since the mechanisms used to safeguard this information may change from time to time. Plus, you do not want to set your company up for a possible legal claim should your actual practice deviate from what is set forth in your policy. A simple statement expressing the seriousness with which your company takes its obligations under section 42-471, and that you will protect the confidentiality of, prohibit the improper disclosure of, and limit access to employee Social Security numbers "as required by law" is sufficient. The key, of course, is to then do what your policy says. Training those employees who handle such information is the best and easiest way to start.

While not specifically required by the statute, there is no harm in including "personal information" within the scope of your privacy protection policy. After all, the penalties for failing to safeguard "personal information" are the same as those for failing to implement, publish (e.g., in an employee handbook) or post (e.g., on an office bulletin board) a privacy protection policy addressing Social Security numbers, namely a civil penalty of $500 per violation, with a cap of $500,000 per event.

If, like most employers, another one of your New Year's resolutions is to minimize risk and expenses, compliance with section 42-471 is an easy way to start off on the right foot.

Washington Update

By Michael McGinley, Wiggin and Dana LLP

A string of highly publicized cybersecurity incidents at American businesses in 2014 shook consumer confidence and cost industry hundreds of millions of dollars. The federal government, under significant pressure to address the problem, recently outlined several proposals to address cybersecurity. Here are some of the latest proposals and events we are following:

SEC and FINRA Cybersecurity Reports

On February 3, 2015, the SEC released a risk alert documenting its findings from its 2014 sweep of over 100 broker-dealers and investment advisers. During the sweep, the SEC examined firms' ability to identify cyber risks, implement mitigating policies and procedures, and prevent and detect intrusions, as well as the effectiveness of firms' responses when intrusions occurred. The risk alert, authored by the Commission's Office of Compliance Inspections and Examinations ("OCIE"), found that a large majority of broker-dealers and advisers have written information security policies, but that most policies fail to discuss liability with respect to client losses associated with cyber incidents. The Commission also reported that a large majority of the firms examined (88% of broker-dealers and 74% of advisers) had been involved in a cyber incident.

Surprisingly, the SEC reported that a large percentage of firms did not incorporate cybersecurity risk mitigation or training requirements into their vendor contracts and that only 58% of broker-dealers and 17% of advisers maintained cybersecurity insurance. In any event, the Commission reported that OCIE "will continue to focus on cybersecurity based on risk-based examinations" in 2015, so firms should evaluate their cyber preparedness against the report's findings and be prepared for an assessment of their cybersecurity controls.

On the same day that the SEC released its report, FINRA released a report highlighting cybersecurity best practices based on the cybersecurity "sweep" examinations it conducted on broker-dealers (firms) in 2014. FINRA concluded that the top three threats facing firms are (1) hacker penetration, (2) insiders compromising firm or client data, and (3) operational risks. FINRA's report recommends firms institute cybersecurity programs "grounded in risk management" in response to these threats. FINRA specifically recommended firms:

  • Establish a strong governance framework to include senior-executive involvement on cybersecurity issues
  • Conduct regular cybersecurity risk assessments
  • Consider implementing a defense-in-depth technical control strategy to include compartmentalizing sensitive data where possible
  • Implement and test incident response plans and run periodic table-top exercises to test the effectiveness of their plans
  • Exercise robust due diligence on vendors throughout the lifecycle of the relationship
  • Train staff to reduce the effectiveness of social engineering attacks and to minimize unintentionally risky behavior
  • Establish procedures to enable collection and use of cyber threat intelligence from sources like the Financial Services Information Sharing and Analysis Center ("FS-ISAC") and the FBI

Cyber Summit to Feature Top CEOs, President Obama

On February 13, 2015, President Obama will deliver the keynote address at the White House Cybersecurity Summit. According to White House Cybersecurity Coordinator Michael Daniel, "[t]he Summit will bring together major stakeholders on cybersecurity and consumer financial protection issues— including senior leaders from the White House and across the federal government; CEOs from a wide range of industries, including the financial services industry; technology and communications companies; computer security companies; and the retail industry, as well as law enforcement officials; consumer advocates; technical experts; and students." The agenda will include a discussion on how to improve public-private partnerships, cybersecurity best practices and technologies, and secure payment technologies. Furthermore, President Obama may use the Summit to announce new executive action designed to facilitate information sharing between industry and the government.

FTC Rising

The White House is slated to unveil proposed privacy legislation in February that will strengthen the Federal Trade Commission's ("FTC") cybersecurity authority. While the draft proposal has not been published, several sources suggest it will give the FTC a significant new power—the authority to fine companies for cyber-related violations. According to Politico, fines could reach $16,500 per violation, per day—meaning that a company could amass millions of dollars in penalties very quickly. The proposed legislation also would levy new requirements on companies collecting user data.

The new proposal is unlikely to become law in the current Congress, but it signals a significant new push by the White House to gain the upper hand in a cyber environment increasingly viewed as hostile to the American consumer. Furthermore, the proposal reinforces the FTC's role as the Administration's privacy enforcer.

Federal Data Breach Notification Initiatives

At a national level, the data breach notification process is a mess. Because there is no single federal data breach notification standard, companies engaging in interstate business activity must be prepared to navigate approximately 50 different state-level data breach notification laws, many of which require reporting to multiple state agencies within a single state. Worse, these state laws often have requirements that conflict with one another. This makes data-breach response complicated and expensive for companies and often delays timely reporting to the detriment of consumers. Things might be about to change.

The White House and the Republican-led Congress have pledged to take action in 2015, and the Pentagon already has drafted a proposed new rule for its defense contractors.

In a January 12, 2015 speech before the FTC, President Obama observed that the current patchwork of state laws is "confusing for consumers and it's confusing for companies—and it's costly." Obama then unveiled his proposal to address the problem through federal legislation: the Personal Data Protection and Notification Act. Under the proposed Act, once a company discovers a data breach involving consumer personal information, it would have a 30-day window to notify affected individuals.

For years, congressional efforts to create a national data breach notification law have failed to gain bipartisan traction, but congressional leaders believe this year will be different, whether or not the result is based on Obama's proposal. Given the recent string of high-profile data breach incidents, the timing for congressional action seems right. On January 28, 2015, Senate Commerce Committee Chairman John Thune (R-SD) promised to develop data-breach notification legislation, noting that the 114th Congress will "seek to tackle the data breach notification issues that have hamstrung Congress for far too long." To be successful, however, this Congress must find a way to bridge several competing efforts, including those led by Senator Bill Nelson (D-FL) and Representatives Michael Burgess (R-TX), Marsha Blackburn (R-TN), and Peter Welch (D-VT).

While 2015 will require a "wait and see" approach for many companies as several data breach notification proposals work their way through Congress, companies operating in the defense sector should expect to comply later this year with a new data breach notification requirement.

The 2013 National Defense Authorization Act required the Department of Defense to establish procedures requiring "each cleared defense contractor to rapidly report" to the DoD penetrations of its network or information systems and to provide access for DoD investigation of those incidents. After more than two years of delay, on January 21, 2015, the Pentagon submitted an interim Defense Federal Acquisition Regulation Supplement ("DFARS") rule on the subject to the Defense Acquisition Regulations Council.

While any final rule is months away and subject to several iterations of review and comment, data breach notification is a hot-button issue in the defense community following news that contractors were regularly being hacked by Chinese actors without reporting the intrusions to the Pentagon. Accordingly, defense contractors should have notification systems in place now in preparation for the final rule.

Insurance Corner

By Michael Menapace, Wiggin and Dana LLP

Of all the insurance coverage cases from the past year that concern data breaches, the one that has garnered the most publicity is the Zurich v. Sony case in New York state court. Readers will recall that hackers breached the Sony PlayStation Network and acquired personal data about consumers who used the online gaming platform. Sony tendered a claim to Zurich and other insurers for losses associated with the hack. The insurers, in turn, brought litigation seeking a judicial determination of whether there was coverage under Sony's commercial general liability ("CGL") policies.

Coverage A of the standard CGL policy covers bodily injury and physical damage to tangible property, so Sony was not seeking coverage under that section. Instead, Sony sought coverage under Coverage B, which covers personal and advertising injury liability. The trial court held that there was no coverage for the hacking incident under the CGL policy because there was no "publication" to trigger Coverage B. "Publication" requires some affirmative action by the policyholder, not a third party. Because Sony's system was hacked, the court reasoned, Sony did not transmit, or "publish," the information, and the policy was not triggered. The Sony decision is on appeal, but it is consistent with the majority of courts, which have held that most data breaches are not covered under the standard CGL policy. There should be fewer coverage disputes over this issue in the coming years, as most insurers are now issuing express exclusions in CGL policies and offering optional cyber coverages within E&O platforms. Companies should not count on CGL policies to provide coverage for breaches and should consider whether specialized cyber coverage belongs in their risk management strategy.

The other type of coverage claim being considered across the country is whether CGL policies cover alleged violations of the Telephone Consumer Protection Act ("TCPA"). A large number of cases have been brought by commercial liability insurers seeking declaratory judgments that they need not provide coverage for their insureds' alleged TCPA violations. No uniform rule has yet emerged from these cases. The decisions often turn on the specific exclusions in the policy at issue: some commercial liability policies have exclusions directed at TCPA claims, while others contain more general exclusions, and, understandably, the general exclusions are generating the most debate. As with cyber liability, general liability policies will most likely all contain specific exclusions going forward, so for the time being courts will have to address these disputes under legacy forms.

One of the issues at the vanguard for technology companies (and, increasingly, companies outside the technology sector) is the use of drones. The Insurance Services Office, Inc. ("ISO") is preparing policy forms specific to the use of unmanned aircraft. Three different endorsements are intended to be used with CGL or commercial umbrella policies to provide liability coverage for bodily injury, property damage, and personal and advertising injury. These ISO forms, if adopted by insurers, should allow insurance purchasers to more easily compare the coverage terms offered by multiple insurers for unmanned aircraft liability. Purchasers, insurers, and brokers will all be facing insurance issues concerning the use of these vehicles, and the ISO forms should be helpful.

As the FTC urges enhanced privacy protections for developers of wearable technology, insurers and insureds are taking note. Insurers have been contemplating the risks associated with these technologies for some time, as companies expand their use dramatically. Examples of wearable technology include headsets used by retail sales associates, high-definition glasses worn by warehouse workers to see and scan barcodes, Google Glass, and the Apple Watch. Wearable technology can help construction workers quickly access building plans and help disabled workers complete otherwise challenging tasks. Some even predict that wearable technology will make laptop computers and tablets obsolete. The future of these devices is, of course, somewhat speculative, but insurers and insureds are considering the insurance implications for employee health and efficiency, the data collected by these devices, how that data is stored and accessed, and the potential for improper use of the data by employers or employees. If your company allows, or requires, the use of this technology, it makes sense to talk with your broker and insurer about coverage for potential liability. Likewise, insurers should consider asking about an insured's use of these technologies as a factor in issuing or renewing insurance programs.

We encourage you to attend this year's Connecticut Privacy Forum for a panel presentation on trends of data breaches, their costs, and insurance coverage issues.

Wiggin and Dana Data Breach Hotline: 1-844-9BREACH

Wiggin and Dana is pleased to offer a toll-free hotline dedicated to data breach and related issues. The number is 1-844-9BREACH. We routinely assist clients directly in these events and have been designated as a Data Breach Coach by insurers. Clients can call the hotline and immediately be put in touch with one of our attorneys to