In business terms, the title of this post might be “Who owns privacy protection?” In layperson terms, it might be “Hey, that’s none of your business, Business.” But the issue of personal-versus-corporate accountability is actually the heart of our privacy-versus-security debate. Put another way, are individuals data-savvy enough to protect their own identity information, or can we trust Big Business to use data in responsible ways?
Since collecting the data isn’t necessarily wrong or illegal on its own, the burden of using it responsibly usually falls on business. But it’s important to know what businesses are up against in searching data and what their goals are. The data is often used to understand previous behavior and, as Infoglide’s Scott Fitzgerald pointed out in an article on National Underwriter, to minimize future risk as well, both of which save us all money.
In discussing insurance fraud, Fitzgerald argues that
“Modeling based on historical claims is often effective, but it is not good at predicting future trends and can’t adapt to real-time changes in fraudster tactics.”
That kind of real-time adaptation depends on innovative data usage and is becoming imperative for organizations (whether insurance, financial, retail or government) as criminals grow increasingly crafty in their use of technology. Bad guys want to get the data; businesses need the data to head off the bad guys.
A post on the sp!ked blog offers another interesting viewpoint on the debate. In it, Alex Taylor, a member of Microsoft’s Socio-digital Systems Group, suggests that privacy debates, and what he calls the “paranoia” surrounding them, often make concerned individuals seem like “cultural dopes scarcely capable of managing our self-presentation in public and private.”
He uses Microsoft’s Whereabouts Clock as an example of a technology that intruded on individual privacy to some degree but that people were happy to use for the sake of convenience, and because they could make smart decisions based on their understanding of how their data was being used.
Taylor says that those decisions are going to be the basis for real privacy protection. He argues that the trick is in
“designing technologies that reveal enough of their workings to allow people to make intelligent choices about when they might want to share and when they might want to keep things to themselves.”
Similarly, Fitzgerald claims that “the buildup of data across silos and applications” in business sometimes makes data usage difficult. However, his answer to Taylor’s design challenge hinges on data-searching tools that return risk scores instead of revealing potentially sensitive data. That way, individuals can be confident that personal data isn’t simply passed around the business world, and businesses can’t improperly use (or even access) data that might compromise personal privacy.
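To make the idea concrete, here is a minimal sketch of the risk-score pattern in Python. Everything in it — the field names, the watchlist, and the weights — is hypothetical and purely illustrative; the point is only that the caller receives a score, never the sensitive records that produced it.

```python
# Illustrative sketch of scoring instead of exposing raw data.
# All names, fields, and weights below are hypothetical.

from dataclasses import dataclass


@dataclass
class Claim:
    claimant_name: str
    address: str
    prior_claim_count: int


def risk_score(claim: Claim, watchlist_addresses: set) -> float:
    """Return a 0-to-1 fraud-risk score.

    The caller never sees which records matched, only the
    aggregate score — sensitive data stays behind the interface.
    """
    score = 0.0
    if claim.address in watchlist_addresses:
        score += 0.6  # hypothetical weight: address tied to suspicious claims
    score += min(claim.prior_claim_count, 4) * 0.1  # claim-frequency signal
    return min(score, 1.0)


# The investigator sees only the number, not the matched records.
watchlist = {"12 Elm St"}
print(round(risk_score(Claim("A. Example", "12 Elm St", 3), watchlist), 2))
```

The design choice mirrors Fitzgerald’s point: the scoring function can consult data “across silos,” but the interface it exposes reveals only how risky a claim looks, not the personal details behind that judgment.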
What do our readers think? Who should be accountable in the privacy-vs-security debate? Leave a comment and let us know.