Could your beloved robotic lawnmower or sweet smoke detector turn out to be two-faced? In light of recent news, our Copenhagen-based Design Director, Mark Weedon, got to thinking about the amount of trust we give connected products and the responsibility brands have to consumers to keep that trust.
If you’re reading this article, then you’re most likely a bit ahead of the curve when it comes to technology. Maybe your house is lit up with smart lighting and you control your heating through an app on the go, arriving home to a warm and cozy house. And even if these items are still on your wish list, you’re probably aware of the growing number of so-called ‘smart home’ devices and what they can do.
It might shock you to hear, then, that these “sweet little helpers” have the potential to be controlled by others, and for very sinister ends. Last week, the first real large-scale use of an IoT botnet was exposed, with hackers using an army of vulnerable or unprotected ‘connected’ home devices to swamp some of the world’s largest websites and take them offline. To most of us consumers, this feels like a problem for other people to worry about, but the reality is that any of our connected home products can be enslaved and instructed to do things without our permission.
In light of this recent event, the following question begs to be asked: what kind of relationship do we as humans want to have with smart technology in the future?
Personification of tech really isn’t anything new. We all know someone who talks to their computer, phone or TV in soothing tones to get it to “play nice”. Most of us also seem pretty comfortable with having Fido, the robot vacuum cleaner, roam around the house, curling up under the sofa to charge and “rest”. What is new, however, is the ability our products and services have to respond and, in essence, talk back to us. The result is that we’re increasingly seeing tech as something near-human. But does a highly personified piece of tech (think Siri, Google Now or Alexa) command more trust? And is that trust duly justified or fitting to the situation it serves? The extreme of this dilemma is the AI self-driving car. Only last week we were presented with the ethics of these responsibilities by Mercedes with its take on the ‘trolley problem’ – prioritising the driver’s life over that of pedestrians.
Smart products now control access to our homes via physical locks and alarms, and manage our lighting, utilities, comfort and anything else we may be able to think up in the future. The American retailer Target has even released a range of connected toys aimed at helping parents feel secure that their newborn babies are healthy and stimulated. As consumers, we need to feel comfortable that these products can be trusted, in the same way we wouldn’t put a stuffed toy in the arms of a child without knowing what it’s made of. Why should technology be any different?
So should we worry about how harmless our connected products are? Well, perhaps not quite yet. The point, however, is that the trust and responsibility we give these connected and increasingly personified objects needs to be reciprocated by the brands that provide them. Without that trust we risk ending up in a situation where our homes aren’t the safe and relaxing places we have come to expect. If these devices can somehow be commanded to form a ‘zombie army’ of products with access to the most intimate places in our homes, brands should be obligated to meet solid security requirements in their products.
The more the market becomes saturated with smart home products, the more we should be interested in seeing how brands can better build relationships through connected experiences and how a product’s personality will affect the relationships we as consumers build with technology.
What do you think? What would be a responsible level of ‘personification’ for connected products, and how can this be leveraged by brands to connect better with consumers? Let us know.