Data Breach, Privacy and Cyber Insurance

Last month I attended the annual New York Advisen Cyber Security Conference. The event, which this year drew over 900 attendees, is frequented by insurance representatives, brokers, risk managers and, yes, a handful of lawyers, all of whom work in the cyber and cyber insurance space. It’s probably the premier conference of its kind.

It’s funny. I’ve been attending this conference for about five years. When I first started coming, the conference was all about sales, marketing and underwriting. I remember sitting at a table my first year, having lunch with a handful of twenty-something underwriters who were gushing over their new insurance product called cyber insurance. They trumpeted the freedom the lack of standard forms provided and how they could even make up policy language as they went along. When I spoke up and asked about future claims, it was as if I were speaking a foreign language: they were completely baffled.

Fast forward five years. The theme of this year’s Conference was “Solving the Cyber Risk Equation.” The Conference was devoted primarily, if not exclusively, to what to do about claims and the enormous, uncertain risk cyber poses under the policies. Things have a way of evolving along a natural course.

The good news is that the attendees and speakers have become much more sophisticated about exactly what carriers are getting into when they write cyber policies and what kinds of risk they are biting off. They are also beginning to look long and hard at the future and at the problems the proliferation of data through the Internet of Things and data analytics may pose for all insurance, not just cyber.

The Conference kicked off with a keynote address by Adm. Michael Rogers, Director of the National Security Agency. Rogers talked about his view of the cyber landscape and its threats. He made the point that cyber carriers and, for that matter, society as a whole are dealing with a highly fluid, ever-changing cyber threat where past data does not correlate well with future outcomes. In short, a potential insuring nightmare. But Rogers offered another conclusion: cyber insurance is in a unique position to strengthen insureds’ cyber security practices by incentivizing sound cyber practices.

This all sounded well and good. But as the day went on, it became clear that what this may actually mean in practice has a potential dark side that may or may not be acceptable to society.

For example, Nicole Eagan of Darktrace, a cyber security analytics firm, noted in her panel presentation that many cyber attacks occur because of human missteps and errors. According to Eagan, though, we now have the ability, through artificial intelligence and data analytics, to take a virtual look at a company’s employees and determine from that look what is normal behavior for each of them. Analytics can then figure out when those employees aren’t acting consistently with their norm. That could trigger steps that reduce cyber risk and stop improper behavior before it causes harm. That in and of itself is kind of spooky. How much do we really want our employers monitoring our every step and labeling what we do normal or abnormal?
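The baseline-and-deviation idea described in that panel can be sketched in a few lines of code. To be clear, this is a toy illustration, not Darktrace’s actual method: the login counts and the z-score threshold are invented for the example.

```python
from statistics import mean, stdev

def build_baseline(daily_logins: list[int]) -> tuple[float, float]:
    """Summarize an employee's historical activity as a mean and standard deviation."""
    return mean(daily_logins), stdev(daily_logins)

def is_anomalous(today: int, baseline: tuple[float, float], z_threshold: float = 3.0) -> bool:
    """Flag activity more than z_threshold standard deviations from the employee's norm."""
    mu, sigma = baseline
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# An employee who normally logs in 8-12 times a day suddenly logs in 80 times.
history = [10, 9, 11, 8, 12, 10, 9, 11]
baseline = build_baseline(history)
print(is_anomalous(80, baseline))  # flagged: far outside this employee's norm
print(is_anomalous(11, baseline))  # not flagged: within the normal range
```

Real systems model many signals at once (files touched, hours worked, network destinations), but the principle is the same: learn each person’s norm, then watch for departures from it.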

But what happens when insurance becomes involved? Should insurance carriers who insure against cyber risk have access to this data? Should they insist on having it? That would certainly enable a carrier to put protections in place to reduce the risk of a data breach. And what if a carrier offered a premium discount to a business that allowed this kind of access? Most businesses, I think, would jump at the chance to, in essence, sell their employees’ information in exchange for reduced premiums and better protection.

Mark Jenkins of Palo Alto Networks, a cybersecurity protection and detection firm, kicked off the next panel and raised the possibility that a carrier could use data collected from IoT devices to make real-time algorithmic decisions about coverages and premiums. Jenkins laid out a hypothetical in which data collected by your car and communicated to your carrier would signal whether you were driving too fast for too long. Or the data could show that you frequented areas of town where the risks of accident or theft were high. Your carrier could then use this data to make real-time adjustments to your premium based on your behavior. Jenkins thinks this is not that far off.
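Jenkins’s hypothetical amounts to a pricing function applied to a stream of telemetry. The sketch below shows the shape of such a function; the base rate, surcharge amounts, thresholds and the `zone_risk` score are all invented for illustration, not anything a carrier actually uses.

```python
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    speed_mph: float
    minutes_at_speed: float
    zone_risk: float  # 0.0 (low-risk area) to 1.0 (high-risk area); hypothetical score

BASE_MONTHLY_PREMIUM = 100.00  # hypothetical base rate

def adjust_premium(samples: list[TelemetrySample]) -> float:
    """Recompute a premium from recent driving data: sustained speeding and
    time spent in high-risk areas each add a surcharge."""
    surcharge = 0.0
    for s in samples:
        if s.speed_mph > 80 and s.minutes_at_speed > 10:
            surcharge += 5.00             # sustained high speed
        surcharge += s.zone_risk * 2.00   # exposure in riskier areas
    return round(BASE_MONTHLY_PREMIUM + surcharge, 2)

samples = [
    TelemetrySample(speed_mph=85, minutes_at_speed=15, zone_risk=0.2),
    TelemetrySample(speed_mph=60, minutes_at_speed=30, zone_risk=0.9),
]
print(adjust_premium(samples))  # → 107.2
```

The unsettling part isn’t the arithmetic, which is trivial; it’s the inputs, a continuous feed of where you drive and how.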

These ideas did not go unchallenged, however. In the same panel, Adam Cottini of Arthur J. Gallagher, an insurance brokerage firm, pushed back: is this something we really want? Do we as a society and as insurance consumers really want to give up our privacy and let our carrier know where we go and how fast, all for this kind of monitoring and decision making? He thought many would not. And later in the day, Shiraz Saeed of Starr Insurance raised the same issue: “You have to have some level of anonymity in this country. Our Constitution guarantees it. We have all made mistakes in life we would just as soon not everyone know about.”


So the day quickly morphed from a cyber insurance conference into, in part, a philosophical debate about whether insurance companies should have and use analytical tools to reduce risk and premiums, or whether that so invades basic human privacy as to be unacceptable. And it raises related questions: who owns data, what is its value and who controls it? I recently interviewed Kathleen Dooley, GC of a company whose mission is to use the blockchain to enable consumers to make decisions about their health data, such as who can access it, and even to sell it if they so desire. Dooley made the point that personal health data is property and that we have certain inherent rights with respect to it.

I actually predicted many of these very issues in a post from 2016. In particular, I was interested in how businesses and insurance carriers could use data to reduce costs for all. I focused, for example, on a new electric toothbrush that would report to your dentist how often you brushed your teeth. I postulated that this same information could be communicated to your health insurer, who could change your premium in real time. Would that change behavior? It would mine. Do I really want Delta Dental to know this? It depends, I suppose, on what it’s worth: how much would I save? Implicit in my analysis is that my tooth brushing data has value; I can trade it for a reduced premium.


My guess is that people and businesses will elect to buy cheaper insurance if all it means is giving up some level of privacy. Indeed, we make this election every day. (How many of us trade privacy for the communication abilities Facebook offers in order to get that free service?) In other words, we will make choices based on the implicit monetary value of our data. Looking at how willingly, consistently and even happily we have given up privacy rights for convenience and cost, it’s inevitable that this is where we are headed.

But this isn’t all bad. If my insurance company’s monitoring of my tooth brushing habits means I will brush my teeth more, have fewer cavities and be healthier in the end, so much the better. The fact that insurance pays for filling my cavities doesn’t make going through the process any less of an ordeal.



I listen to a lot of podcasts. In the past, I listened while driving to and from my law office; now, since my office is just a couple of steps down the hall from the kitchen and I have no commute, I listen while I am exercising. Some of my usual podcasts are good and some average, but every now and then I get one that really makes me see things in a new light and inspires me to do something. Such was the case with the most recent podcast from Dennis Kennedy and Tom Mighell on the Law Technology Today blog, entitled Disruptive Innovation in a Law Practice.

Continue Reading Forget Disrupting Law. Disrupt Yourself

It’s fascinating to me how something designed to do one thing ends up solving an unrelated problem. It’s well known that technology developed for one purpose frequently ends up serving different and altogether unexpected purposes: text-to-speech services come immediately to mind. These technologies were developed with those who are partially sighted in mind, but now have far broader applications, such as the voice assistants Siri and Amazon’s Alexa. Continue Reading CLOC, A2J and Mediation For All

I was looking forward to today at CES. The Acting Commish of the FTC, Maureen Ohlhausen, was to be interviewed by the CEO of CTA, Gary Shapiro. Three FCC Commissioners, Brendan Carr, Mignon Clyburn and Michael O’Rielly, were scheduled to participate in a roundtable moderated by Julie Kearney, VP of regulatory affairs for CTA. Ajit Pai was supposed to be here but was a no-show.

CTA is the largest consumer tech association, and CES is one of the most well-attended tech conferences in the world. What’s the hottest topic in tech these days? The overturning of net neutrality. I thought we would at least get some insight on the pros and cons of the issue from those most directly involved in the decision. Wrong. Instead we got an abbreviated wave at the issue and a recitation of slogans, with little real explanation. Continue Reading Net Neutrality: CES Discussion Disappoints

Earlier this month, the 9th Circuit dealt online anonymous reviewing services a chilling blow when it decided United States v. Glassdoor. Faced with an online service that allowed people to post employer reviews for the benefit of others, the Court determined that those who posted on the service were like newspaper reporters and reverted to an analysis used for print media some 40 years ago.

Specifically, the Court ruled that the government could compel Glassdoor to reveal the identities of anonymous employees who posted reviews of employers on the site, even if those who posted didn’t consent. What this means for other online services that rely on similar anonymous posts could be significant. At the very least, the use of outmoded legal concepts for new technology-driven services will be chilling, and that is unfortunate.

The case started when the government served a subpoena on Glassdoor, an online forum where current and former employees can anonymously post reviews about the salaries and work environments of their places of employment. The subpoena asked for identifying information for more than one hundred accounts that had posted reviews of an employer whose contracting practices were under criminal investigation by a federal grand jury for alleged wire fraud.
Glassdoor refused to reveal its users’ identities to the grand jury, citing, among other things, its users’ First Amendment right to speak anonymously. Continue Reading When Anonymous Isn’t Really: Government Threats to Online Reviews

Fear of new technology sometimes creates strange legislative results and perhaps unintended consequences.

In 2008, Illinois passed the Biometric Information Privacy Act (BIPA), designed to protect employees and consumers against perceived abuses associated with businesses’ collection of biometric data and providing a statutory cause of action for violations.

Fearing how such technology might be used and worried about its privacy implications (the preamble actually provides that “the full ramifications of biometric technology are not fully known”), the Illinois Legislature required businesses planning to maintain databases of these identifiers to adopt policies and obtain written authorization from customers or employees before scanning fingerprints, retinas or other biometric identifiers. The law also requires businesses to tell those whose biometrics are being scanned how the identifiers will be stored and eventually disposed of. Continue Reading Illinois Biometric Information Privacy Act: Legitimate Privacy Protection or Pandora’s Box?

Under a new law recently proposed in Ohio, businesses that take steps to secure data could be protected from lawsuits if a hack occurs. The bill, Senate Bill 220, was the first to emerge from the Ohio attorney general’s office and its cyber-security task force of business leaders, information technology experts and law enforcement, created in the wake of high-profile hacks of consumer information. The bill is an effort to help businesses with cyber-related claims, encourage them to be proactive and recognize the difficulty of creating standards for constantly evolving technologies. It’s a valid effort to balance law and technology.

According to Ohio Attorney General Mike DeWine, a member of the task force, “Those businesses that take reasonable precautions and meet these important standards will be afforded a safe harbor against claims should a data breach occur…To trigger the safe harbor provision, businesses must create their own cyber-security programs that meet certain standards.” Continue Reading New Ohio Bill Proposes Data Breach Protections

“No stop signs, speed limit
Nobody’s gonna slow me down
Like a wheel, gonna spin it
Nobody’s gonna mess me around”

This blog is directed toward examining the tensions that arise as technology runs square into the law and the practice of law. Often the fit between the two doesn’t exist, or isn’t great, mainly because the law looks backward and can’t see solutions to problems that didn’t exist before. And sometimes the opposite occurs: we try to change the law to attack a new kind of problem and, in doing so, create a whole host of unintended consequences.

One glaring example is the ACDC Act which was proposed earlier this year by Rep. Tom Graves of Georgia. Continue Reading Highway to Hell: The ACDC Act

Smart Home Exhibit @ The Museum of Science and Industry

We have all heard about smart homes and the nirvana they may create. But we hear little about the risks, exposure and liability smart homes may pose. Those risks stem from the fact that standards governing smart home devices and the Internet of Things (IoT) simply don’t yet exist. To the extent any do, they are not necessarily consistent, and the law is not well developed; nor has it addressed many of the issues raised by the new technologies. So we have a bunch of popular new devices that carry risks, with few standards or laws governing them. Sound like a recipe for litigation? Continue Reading Smart Homes: Risky Business?