How smart will it really be to let your smart watch give away your privacy?
At CES 2015 (the Consumer Electronics Show), the chair of the US Federal Trade Commission warned that the sensor-laden gadgets and devices associated with the Internet of Things – the smart watches and other gadgets we’ll wear or that will surround us – could collect data about us with far-reaching consequences for our personal and professional lives.
Edith Ramirez claimed that such devices pose a serious threat to privacy. The threat goes further, though: distorted pictures could be built about us that are then used by corporations and governments to make unfair and limiting decisions about us. Information about our “credit history, our health, our family and social connections”, as well as many other indicators, could end up in the hands of financial, insurance and medical institutions, to name but a few – not to mention the usual efforts to target us with advertising.
It’s an intriguing and disturbing combination – innovations that offer us better health even as they snoop on us in order to raise our insurance premiums, or deny us insurance altogether. These fears are not new.
What is new are the implications of the clumsy algorithms that providers use to mine our data and present it either back to us or to other organisations. Inadvertent Algorithmic Cruelty recently resulted in Facebook users being faced with distressing images of deceased loved ones in their “Year in Review” feature. If similarly clunky algorithms inform decisions about how healthy we are, how fit for credit, or whether we should be allowed access to a building, then troublesome times lie ahead.
Corporations are not averse to refining their algorithms through the distress of their consumers. Beta testing in public really amounts to waiting for the “exceptions” to report their pain and then adjusting accordingly.
This is one of the fatal flaws at the heart of evolution in the digital space. For algorithms to refine and improve, they need data, and learning tends to arise from processing the bad stories as well as the good. There is a strong likelihood that early versions of smart wearables will fall foul of clumsiness. The public will be the live testing ground for the development process, because there is no way of fully proofing against failure. Why? Because the development of the Internet of Things requires the world itself to offer it real-time feedback. We are the meat. We are the playground. It will sink its teeth into us.
As the experiences of Inadvertent Algorithmic Cruelty show, it is already happening. Other examples have existed for years. One family discovered it had been refused a loan because it had moved into a house whose previous occupants left with a poor credit rating. That particular credit-scoring algorithm was based on address, not person. It took numerous complaints, and much stress for the family, before the “rules” were refined.
We might decide we do want health data fed directly to our doctor. There are obvious benefits: it can save time, and it can improve care and diagnosis. But do we want the same data used to build a distorted picture of us and put into the hands of insurance providers or future employers? As the Internet of Things rushes from the future to meet us, and with the same providers of digital gadgetry and social media buying up the tech startups, the perfect storm may well be brewing in which the corporation becomes the guardian of our moment-by-moment physical (and even mental) behaviour.
This might sound like scare-mongering. It isn’t. It is another, perhaps sober, call to beware of being “locked in” without your agreement, to make sure that, in the clamour for wearable miracles, you don’t give up your privacy and freedom in the process.
Paul Levy is the author of Digital Inferno