
Privacy – the Other Face of Security
by Bernard Murphy on 01-13-2016 at 7:00 am

Security gets a lot of tech press; privacy, not so much. Much of the problem is that while we each know intuitively what we mean by privacy, pinning down an actionable definition is surprisingly tricky, especially when we require that it not intrude in other ways on our rights. Privacy rights are not absolute (you don’t have a right to hide taxable income or criminal activity); they are context-dependent (it may be OK to aggregate certain kinds of information but not all, and OK to reduce privacy in times of danger); and they are culture-dependent (not all cultures, even European cultures, share Anglo-American views in the details). These ambiguous and variable characteristics are perhaps why technology development in this area moves more slowly than in security.

One example of context-dependence is how your medical data is handled. Most of us would consider this data very sensitive, yet common IoT-based approaches to gathering it send it from sensors through multiple clouds (since different sensors will not necessarily use the same cloud), opening multiple opportunities for information to leak before it finally reaches your EHR (Electronic Health Record). ARM has partnered with industry providers like HeartToHeart and NeuroSky to enable your doctor to collect data directly from your sensors, get your approval acknowledging that it is your data, then send it directly to your EHR without passing through third-party clouds. Data is kept secure in transit through use of the Trusted UI.
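
To make the "no third-party clouds" point concrete, here is a minimal sketch of the underlying idea – end-to-end public-key encryption from the collection point to the EHR, so that any relay in between sees only ciphertext. This is a generic illustration in Python using PyNaCl sealed boxes, not ARM's actual API or protocol, and the data fields are hypothetical:

    # Generic end-to-end encryption sketch (NOT ARM's API). The ARM
    # solution additionally uses a Trusted UI for patient approval.
    from nacl.public import PrivateKey, SealedBox

    # The EHR system holds the only decryption key.
    ehr_private = PrivateKey.generate()
    ehr_public = ehr_private.public_key

    # Doctor/sensor side: encrypt the reading under the EHR's public key.
    reading = b"patient=anon-1234 heart_rate=72 ts=2016-01-13T07:00"
    ciphertext = SealedBox(ehr_public).encrypt(reading)

    # Any intermediate cloud carries only ciphertext; only the EHR
    # system can recover the plaintext.
    assert SealedBox(ehr_private).decrypt(ciphertext) == reading

The design point is that confidentiality no longer depends on trusting the transport: even a compromised relay leaks nothing readable.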

Then again, one of the promised advantages of mHealth is the ability to aggregate data for research to detect trends, local environmental health concerns and more. Therein lies another privacy problem – the opportunity for someone to access your personal information for purposes you would not approve of (such as canceling your insurance or stealing your identity). The standard approach to this problem is to “de-identify” the data: directly identifying fields (name, address, SSN) are deleted, and localizing fields are generalized (e.g. city is generalized to county or state). For some time this was thought sufficient to prevent invasions of privacy, but that is probably not the case. A Harvard researcher was able to join a de-identified medical database with a voter registration database (which does contain names and addresses) to uniquely identify the medical records of the then Governor of Massachusetts. The National Institute of Standards and Technology (NIST) has asserted that no foolproof methods of de-identification are known at this point. That means a fairly sizable potential for privacy leaks remains around aggregation of personal information.
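
As a rough illustration of what de-identification does (and does not do), here is a sketch in plain Python; the field names and generalization rules are hypothetical stand-ins for the kinds of rules real pipelines apply:

    # Hypothetical de-identification: delete direct identifiers and
    # generalize quasi-identifiers. All field names are illustrative.
    DIRECT_IDENTIFIERS = {"name", "address", "ssn"}   # deleted outright

    CITY_TO_STATE = {"Cambridge": "MA", "Boston": "MA"}  # toy lookup

    def de_identify(record: dict) -> dict:
        out = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS}
        if "birth_date" in out:                 # full date -> year only
            out["birth_date"] = out["birth_date"][:4]
        if "city" in out:                       # city -> state
            out["state"] = CITY_TO_STATE.get(out.pop("city"), "unknown")
        return out

    record = {"name": "J. Doe", "ssn": "000-00-0000",
              "city": "Cambridge", "birth_date": "1945-07-31",
              "sex": "M", "diagnosis": "flu"}
    print(de_identify(record))
    # {'birth_date': '1945', 'sex': 'M', 'diagnosis': 'flu', 'state': 'MA'}

The catch, as the voter-roll join above shows, is that the surviving quasi-identifiers (birth year, location, sex) can still be matched against public databases that do carry names; deleting the obvious fields is not the same as anonymity.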

An even more contentious area of privacy is government monitoring. This is a topic that seems to inspire almost religious fervor among both opponents and proponents, which is doubly challenging when trying to advance technical solutions (where will emotion, reason and technology intersect?). We have a right to be protected from intrusive government monitoring, but we also have a right to reasonable safety, and that cannot be delivered by requiring police and security forces to protect us against new-world threats using old-world tools. So the problem is one of degree, not principle: under what circumstances, with what authorization and by what means should the government be able to monitor?

David Chaum, responsible for many of the ideas behind on-line anonymity, has one proposal: provide a secure end-to-end encryption system with a backdoor, but one which can only be opened with approval from multiple governments. Intriguing, but it seems to me almost impossible to use – you need authorization from nine governments to access the backdoor, which makes it practically ineffective for all but the gravest possible threats. A more practical approach, I would think, would be a backdoor which can be opened, per key, for only one target person. A new target requires a new authorization and a newly generated key; and of course key generation must be kept separate from the people wanting the keys. This might allow targeted tapping while making blanket tapping close to impossible.
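
The "approval from multiple governments" requirement can be mechanized with secret sharing. Below is a minimal n-of-n sketch (plain XOR splitting, not Chaum's actual construction): a per-target backdoor key is split into nine shares, one per authority, and all nine are needed to reconstruct it – any eight together reveal nothing:

    import secrets

    def split_key(key: bytes, n: int) -> list[bytes]:
        """Split key into n shares; ALL n are needed to reconstruct.
        Any subset of n-1 shares is indistinguishable from random."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        last = key
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        shares.append(last)              # key = XOR of all n shares
        return shares

    def combine(shares: list[bytes]) -> bytes:
        out = bytes(len(shares[0]))      # all-zero bytes
        for s in shares:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    backdoor_key = secrets.token_bytes(32)   # fresh key per target/warrant
    shares = split_key(backdoor_key, 9)      # one share per government
    assert combine(shares) == backdoor_key   # all nine must cooperate

Shamir's secret sharing generalizes this to k-of-n thresholds, which would tolerate an abstaining authority or two; either way, generating a fresh key per authorization is what keeps tapping targeted rather than blanket.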

So there is plenty of room for new technology development in de-identification and in end-to-end encryption with safe backdoors. I am a technology optimist, but grounded optimism starts with understanding the holes so that a complete solution can be developed. The work ARM is doing is a good example of that understanding paired with an effective engineering response. I hope to see more of the same around de-identification, access to aggregated personal data, and mechanisms for limited government access to combat criminal activity.

You can learn more about the ARM-based trusted health solutions HERE, more about problems with de-identification HERE, and more about the David Chaum direction in on-line privacy HERE.

