Breach Notification Rule: The Basics

Before healthcare organizations can prepare to comply with the HITECH breach notification rule, they must understand its complex details. In an exclusive interview, attorney Deven McGraw sorts through the major provisions in layman's terms.

She provides detailed guidance on:

The definition of the term "breach";
What the rule means by "significant risk of harm";
How hospitals, clinics, insurers and others must notify affected individuals, the media and regulators about a breach;
What the "safe harbor" for encrypted data really means; and
The role of business associates in reporting breaches.

McGraw urges organizations to document a step-by-step process for investigating and reporting breaches. And she advises hospitals and others to make extensive use of encryption "to save yourself a lot of heartache."

McGraw is director of the health privacy project at the Center for Democracy & Technology, a Washington-based, not-for-profit civil liberties organization. She focuses on developing and promoting policies that ensure the privacy of personal health information that is electronically shared.

She serves on the HIT Policy Committee, a federal advisory panel to the HHS Office of the National Coordinator for Health IT, and co-chairs its information exchange and privacy/security workgroups.

HOWARD ANDERSON: This is Howard Anderson of Information Security Media Group. We are talking today with attorney Deven McGraw, director of the Health Privacy Project at the Center for Democracy & Technology. We'll discuss the details of the HITECH Act Breach Notification Rule, which requires healthcare organizations and their business associates to report information breaches. So for starters, how does the rule define the term "breach"?

DEVEN MCGRAW: It's actually a pretty broad definition of breach, far beyond what people customarily think of. Usually when you say the word "breach," people think of a security breach. Someone hacking into a system from the outside is the common example, or somebody breaching from the inside, meaning an employee accessing a record they are not supposed to. But the new definition of breach that Congress enacted a little over a year ago is much broader: It is basically any unauthorized access, use or disclosure of protected health information in a way that compromises the privacy and security of that information, which has been interpreted to mean that the breach actually poses a significant risk of harm to the individual who is the subject of the information. So essentially any time information, even internally, is used in a way that either isn't expressly authorized by the patient or isn't otherwise authorized by law, that constitutes a breach. Some people have said, in shorthand, that a HIPAA violation from this point forward is essentially a breach, and I think they are right. If information is used in a way that isn't permitted, even if it is internal, it's still considered a breach, and it's potentially reportable to the patient and to government authorities if it rises to the level of posing a significant risk of harm.

ANDERSON: Along those lines, the rule allows healthcare organizations to determine whether a particular breach represents significant risk and thus needs to be reported. So what constitutes significant risk, and how can that risk be measured?

MCGRAW: It's a big question, and I'm not sure that question was very well answered by the regulators when they established a standard in the rule, and that's actually one of the things that we raised in comments. It...really puts the burden on the entities to try to figure out whether a particular breach might be harmful to an individual. Sometimes those cases are easy. If part of the data that got breached is a Social Security number or credit card information, that clearly raises a risk of financial or medical identity theft and so...that raises a significant risk of harm. On the other hand, if you are talking about data...about a health condition, that is a trickier set of circumstances for an institution to decide whether that particular piece of health data would be harmful to somebody. Harm is not just financial harm, it's also reputational harm or this generic category called "other harm," which I might want to define as harm to someone's dignity. So if the data is a diagnosis of AIDS, for example, the institutions are clued in enough to know that most patients would be very unhappy if that data were in the hands of someone who didn't have a need to treat them. On the other hand, if it's a more benign health condition like a cold or the flu, it's not entirely clear whether that data would be considered sensitive by a particular patient. And those judgment calls are going to be extremely difficult for organizations to make accurately, because if they err on the wrong side and the patient finds out that, in fact, there was arguably a breach of data but the institution chose not to notify them because they decided that it didn't raise a significant risk of harm, there is the opportunity for second-guessing down the road.

We certainly have recommended that a lot more clarity be added to that standard, and that instead of focusing on the more subjective question of whether a piece of data might cause somebody harm, you look at whether the data was really compromised. What happened to the data? Did it, as part of the breach, go outside of the institution? Was it viewed by people who looked at it out of curiosity or prurient interest, versus a legitimate need to know? People generally don't want their neighbors to have information about their health conditions, regardless of what those conditions are. And so we suggested that HHS could add a lot more parameters to the standard, stay out of the subjective area of harm and instead get to more tangible questions about what happens to data in a breach and whether, in fact, it was compromised in a way that individuals ought to know about.

The standard is "significant risk of harm," and, quite frankly, there is very little guidance to institutions today on how to navigate that. We are very hopeful that the regulators will provide some more clarity.

ANDERSON: The rule requires healthcare organizations to notify patients affected by a security breach of any size within 60 days. They also must notify the Department of Health and Human Services and the news media if the breach involves more than 500 individuals. Please briefly explain how these notifications must be made.

MCGRAW: First of all, I want to say that the 60-day notification period is an outer limit, and, in essence, the requirement to notify both the individuals and the government authority is triggered by when you find out about it. You essentially have to notify people as soon as possible, with an outer limit of 60 days. So if you complete your investigation of a breach within 10 days but sit on it for another 50 days, waiting until the 60 days have passed, you could essentially be in violation of the rule. The 60 days was really intended to be...this outer limit. But if you have information ahead of time, you need to let people know as soon as it is feasible. It is important for institutions to understand that...you need to start notifying people when you know enough, i.e., there was a breach, it compromised the data and it doesn't qualify for any of the exceptions. You can't wait for that 60-day period to be up.
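
To make that timing concrete, here is a minimal Python sketch of the deadline arithmetic McGraw describes. The function name is hypothetical; the point is simply that the clock starts at discovery and that 60 calendar days is a ceiling, not a grace period.

```python
from datetime import date, timedelta

# HITECH's outer limit: 60 calendar days from discovery of the breach.
OUTER_LIMIT = timedelta(days=60)

def notification_deadline(discovered_on: date) -> date:
    """Latest permissible date to notify; notice is expected sooner."""
    return discovered_on + OUTER_LIMIT

# A breach discovered March 1 must be reported no later than April 30,
# but an investigation finished on day 10 should trigger notice then;
# sitting on the findings until day 60 could itself violate the rule.
print(notification_deadline(date(2010, 3, 1)))  # 2010-04-30
```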

The statute and the regulations actually have some pretty clear information about how that notification needs to occur, which is largely by first-class mail, unless the patients or individuals whose data is compromised have indicated that they want to be notified by e-mail. There are provisions for substitute notice if you don't have adequate contact information.

The notification to the federal authorities is actually fairly straightforward. For a breach affecting 500 or more individuals, you notify HHS at the same time you notify the individuals; if a breach affects fewer than 500 individuals, you still have to notify authorities, but you get to do so pursuant to an annual report.

There is also information in the regulations about the content of that notice and what type of information has to be included. It's a remarkably detailed set of provisions in the statute, and the regulation goes into a little more detail. So there's not a whole lot left to the discretion of entities covered by the rule, beyond figuring out whether you've got a breach in the first place.

ANDERSON: The Breach Notification Rule includes a safe harbor, which exempts organizations from reporting breaches if the information was encrypted in a specific way. Please describe the safe harbor's details and the implications for how hospitals and others should use encryption.

MCGRAW: The encryption standards that are in effect right now are the ones that the Department of Health and Human Services recognized officially a year ago. They can update them periodically, and I'm not sure if they will this summer or not. There is a specific encryption standard for data that is at rest, when it is sitting on your server, for example. And another standard applies when data is in motion, such as when you are disclosing it for treatment, payment or healthcare operations. If you use those standards to protect data at rest and data in motion, then even if you have a breach event, such as someone hacking into your server or stealing a laptop, you don't have to notify as long as the data was encrypted using those standards. You do have to notify if, for some reason, you learn that the encryption was compromised. That would happen if you were sloppy: You encrypted the data but made the key widely accessible, so the thief who took possession of the laptop also had the key because somebody wrote it on the back of the laptop.
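
As a rough illustration of the data-at-rest side, here is a minimal Python sketch using AES-256-GCM from the third-party cryptography package. Treat the algorithm choice and helper names as assumptions; whether a particular deployment satisfies the HHS-recognized standards is a separate compliance question, and, as McGraw notes, the safe harbor is only as good as the separation between the key and the data.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt one record with AES-256-GCM; returns nonce + ciphertext."""
    nonce = os.urandom(12)  # a fresh nonce for every encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_record(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# The key must live apart from the data, in a key-management service,
# not alongside the laptop. A key the thief can also obtain ("written
# on the back of the laptop") defeats the safe harbor entirely.
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_record(b"patient record ...", key)
assert decrypt_record(blob, key) == b"patient record ..."
```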

Then with respect to encryption in motion, this has to do with data that might be intercepted in transport. If you've got it protected through that encryption standard, essentially you don't have to notify, because the way encryption works is that people can't really access the data.... So if you take the time to encrypt and you spend the money on encryption, you will save yourself a lot of money on the back end if you have a breach and would otherwise have to notify hundreds, if not thousands, of people.
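
For data in motion, a TLS-protected channel is one common way to address the interception concern. Below is a minimal sketch using Python's standard ssl module; the host name is hypothetical, and a real deployment would configure the channel to the applicable federal guidance rather than rely on library defaults.

```python
import socket
import ssl

HOST = "claims.example-clearinghouse.com"  # hypothetical endpoint

# The default context verifies the server's certificate and host name
# and negotiates a modern TLS version, so bytes are encrypted in transit.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated:", tls_sock.version())  # e.g. 'TLSv1.3'
        tls_sock.sendall(b"...claim transaction bytes...")
```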

I know a lot of healthcare institutions have said, "Well, encryption is expensive and it slows down the speed of transactions." It does slow things down, but not to a huge degree. So you have to think about where encryption makes sense. Maybe in your emergency department, where the data needs to be immediately accessible, encryption makes little sense. But for sending a prescription to a pharmacy, where 30 to 60 seconds of extra time means you can encrypt that transmission, I would weigh the cost of encryption against the costs you are going to incur down the road in the event of a breach.

I would be willing to bet that if you asked folks who have recently experienced rather large-scale breaches whether they wished they had encrypted the data on those stolen computers, they would say yes. They are paying a lot to notify patients, to say nothing of the damage to their reputations from the public disclosure of the breach, and they could have saved a lot if they had encrypted.

ANDERSON: The rule requires business associates to report breaches to their healthcare partners, which are called covered entities. Please define the term "business associate" and describe for us just briefly their responsibilities under the rule.

MCGRAW: A business associate is essentially a person or entity that receives protected health information from an entity that's covered under the HIPAA rules, which means most of the entities that function in the healthcare system - doctors, hospitals and health plans. Business associates perform functions or services on behalf of covered entities and need to use, or have access to, protected health information in order to do that. One example is the billing company that provides services to physicians. Of course, they are getting patient data all the time so they can process and send claims to the insurance companies and process payments for the physicians. So that's a simple example of a business associate.

Other business associates include laboratories, which receive lab orders and some patient data from physicians, process the tests and send the results back. That's a healthcare transaction in which a covered entity sends information to a business associate, the lab, and HIPAA obligations apply to that business associate - not directly, because labs are not directly covered, but because they become looped in through their business associate agreement, which binds them to complying with HIPAA. Now, what the HITECH Act did was say that business associates can be held accountable by government authorities for failing to comply with the terms of their business associate agreements and with HIPAA. They can be held both criminally and civilly liable, whereas in the past, the most that could be done was for the covered entity to enforce the contract or hold the business associate accountable under a breach-of-contract action.

So you could foresee that a business associate could experience a breach. The billing company I talked about could have its systems hacked into. But because business associates don't have the relationships with the patients that the covered entities do, their obligation is not to notify the patient but to notify the covered entity. It's very tricky here, because the 60-day outer limit for notification - that time clock starts ticking as soon as the covered entity knows, or reasonably should have known, that a breach occurred. When the breach occurs at a business associate, it gets tricky because the covered entity doesn't technically know about that breach until the business associate tells them.

Most covered entities are going to want to be very clear with their business associates about notification of a breach - really, as soon as the business associate discovers that one might have occurred. That way, the covered entity can be involved in the assessment of whether, in fact, it meets the definition of a breach, qualifies for any exceptions and meets the "significant risk of harm" test, because ultimately the covered entity is responsible for doing the notifications.

I don't know of too many covered entities who are going to be terribly comfortable with letting the business associates make those determinations.

ANDERSON: To wrap up, are there other legal requirements that health care organizations need to keep in mind as they prepare to comply with the breach notification law?

MCGRAW: I think the most important message is: Don't wait until you have a breach before you plan for how you are going to deal with breaches, because these things are going to happen. Even in the best organizations, ones that pay a lot of attention to data security and are very careful, things will inevitably occur. You can't always control what is going on, even within your own walls. We know from many news reports that employees, unfortunately, can be very curious and stray into records that they don't have any business looking at.

So it's going to happen, and you're going to be much better off if you have some very clear policies about what happens if there is a suspected breach, who needs to be notified within the institution and who ultimately has the decision-making authority. You should document all of that every step of the way, rather than waiting until a breach happens and then trying to pull together a response process on the fly. You really do only have 60 days, and that's not a lot of time to do an investigation, much less start notifying people.
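
To picture what that documentation might look like, here is a minimal, hypothetical Python sketch of an incident record that logs each investigation step and drives the notification decision. None of the field names come from the rule itself; they are illustrative only.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BreachIncident:
    """Illustrative incident record; fields are hypothetical, not regulatory."""
    discovered_on: date
    description: str
    data_elements: list[str]      # e.g. ["name", "diagnosis", "SSN"]
    was_encrypted: bool           # safe harbor may apply if True
    exception_applies: bool       # e.g. inadvertent internal access
    significant_risk_of_harm: bool = False  # the documented assessment
    decision_log: list[str] = field(default_factory=list)

    def record_step(self, step: str) -> None:
        """Document every step of the investigation as it happens."""
        self.decision_log.append(f"{date.today().isoformat()}: {step}")

    def must_notify(self) -> bool:
        # Encrypted data and qualifying exceptions are not reportable;
        # otherwise notification turns on the documented harm assessment.
        if self.was_encrypted or self.exception_applies:
            return False
        return self.significant_risk_of_harm
```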

The other thing that I would say is consider encryption, because breaches will happen even when you are careful, and encryption is the only safe harbor that exists. When you use it, you save yourself a lot of heartache and you lessen the potential that breaches will occur in the first place.

ANDERSON: Thanks Deven. We've been talking with attorney Deven McGraw of the Center for Democracy & Technology. This is Howard Anderson of Information Security Media Group.
