Will all this new med tech compromise our privacy?

Jessica Baron, PhD

CES 2019 gave us a peek into the future of medicine, with vendors showcasing everything from new phone apps designed to help diagnose disease to digital in-home healthcare assistants. But a central theme of this new tech, the integration of data from multiple sources and the ability to share that data with physicians and other caretakers, could turn out to be just as harmful as it is helpful.

On the surface, it looks like digital medicine will make life easier for overworked physicians and those in need of medical intervention. And to some extent, this is true. A watch that helps monitor your heart for abnormalities will allow you to get to a doctor before a heart attack or stroke.

Behavioral tracking can detect everything from symptoms of depression to signs of an impending opioid overdose.

But all of this comes at a price.

While you might not think your blood pressure is something that other people are interested in, the number of hacks on medical records and other health data over the past decade proves that this information is valuable to bad actors. They’ve already used it to humiliate and blackmail hospitals and expose the vulnerabilities of health care providers.

Hospitals have paid bitcoin ransoms to hackers to prevent that data’s exposure (or to regain control of their systems), but there’s no guarantee the data hasn’t been copied for later use.

And it’s not just malicious hacks that put your data at risk. Often, healthcare infrastructures are merely insecure, or IT workers are careless, and data end up exposed because of a missed keystroke. Data breaches (which include both hacking and accidental loss or exposure) have occurred almost 350 times PER YEAR in the last seven years, with some incidents affecting millions of people at a time. At this point, it’s best to assume your data, including your name, address, disease history, lab results, pharmacy, medication information, and more, is out there somewhere, waiting to be exposed.

In a 2016 report prepared for the U.S. Senate by the Institute for Critical Infrastructure Technology, titled “Your Life, Repackaged and Resold: The Deep Web Exploitation of Health Sector Breach Victims,” think tank researchers revealed how this data is being used, including how readily available it is to buyers on the dark web.

Now, new medical monitoring devices collect real-time data that even doctors (or patients) didn’t have access to in the past. The TestCard home urine test will allow your phone to read (and record) the results of tests for pregnancy, drugs, sexually transmitted infections, and prostate health. Wearable watches and pendants measure your activity, heart rate, sleep patterns, and breathing. Smart home devices can monitor your movements and record the questions you ask about your health plan. This is an awful lot of personal data to transmit over Wi-Fi and Bluetooth.

Many companies now mention in their press releases that they have new and sophisticated security features. And while that may be true, hackers are getting more sophisticated every day as well.

And your data is not just valuable to hackers. It can potentially be used against you by employers and insurance companies. The more data you produce, the more it will whet the appetites of companies wanting to collect and store it. They will offer incentives such as discounts on insurance plans and other health devices upfront, but can you be sure they won’t use the data to discriminate against you later? Even if they anonymize your data for research purposes, are you aware of how easy it is for hackers to put the information back together again?

Americans will need to be especially careful about data sharing, as it might reveal preexisting conditions that are currently covered under the Affordable Care Act but might not always be.

Right now, we don’t have explicit policies governing data gathering from new devices or its use by employers. And as the gig economy grows and employee status is less well-defined, workers won’t necessarily be able to count on the Equal Employment Opportunity Commission (EEOC) to help them fight this kind of health-related discrimination.

And while there’s much more to think about when it comes to new health tech (like whether you want Siri to deliver your lousy health news), one of the most overlooked (or at least under-publicized) issues in all of this new health-monitoring technology is whether the right people have the time, desire, and ability to even look at it.

Sure, we have electronic health records and more sophisticated and interconnected databases for doctors to consult, but those aren’t always complete, reliable, or available to everyone. Are you going to buy, and then rely on, a device that promises to alert your doctor to your ECG readings without first checking whether the doctor’s office has any infrastructure to receive or process those results? Imagine the malpractice nightmare it poses for medical staff who miss a text message because they are overwhelmed or undertrained. It might be a long time before they’re willing to climb on board this particular tech bandwagon.

So, to what extent are all of these promises lulling us into a false sense of security about our health data? Is your quantified self all that interesting or helpful? While you walk around wearing a device that collects your most confidential information, who is using it?

It’s beneficial to get a better picture of our health to make adjustments and feel empowered to take control of our lives. Still, it’s equally important that we read the fine print on new devices, ask hard questions of the companies marketing them to us, and talk to our doctors about whether or not this new tech is essential in our medical care right now.
