Mental models
In my exploration of the realm of privacy, usability and the Internet of things, I shall take a moment to discuss mental models.
They are formed as a result of one's interaction with a system and represent that person's belief about what is going on inside it.
For practical purposes, you have to be familiar with these concepts (others will be revealed later):
- system image - what the system does, de facto
- user's mental model - what the user thinks the system does
- conceptual model - what engineers want users to think the system does
When they are aligned, users are happy: the system is predictable and people get what they expect.
An inaccurate user model causes friction: the system doesn't do what users want, does it somewhat differently, or works but with undesired side effects.
User models throughout history
Take a few steps back and put yourself in the place of a prehistoric human (shoes were not invented yet, so come closer to the fire to keep your feet warm). Look around and make a note of the tools you see.
Let's take this instrument as an example - a stone axe. It fits comfortably in your hand and it is easy to use: no training is required to figure out how it works (though with more experience you learn how to use it more efficiently). In this case the instrument is the interface - there is nothing else.
If the stone breaks because you hit something harder with it, the interface breaks too; the reason for the failure is obvious and in a few moments you're looking for a replacement.
Fast-forward a bit and have a closer look at this item. The wheel was a disruptive technology for its time, it took several millennia(!) after pottery and agriculture to come up with this idea.
Despite the fact that the invention required a monumental leap in thinking and engineering skills, you have a clear model of this artifact. If it breaks - the failure is obvious and you're not left wondering about the cause of the defect.
This example illustrates that something as non-trivial as a wheel can still have a clear interface and facilitate the construction of adequate user models.
Come a few thousand years towards the present and examine this abacus. This technology requires the understanding of abstract concepts such as numbers and arithmetic.
However, even this complex instrument has a straightforward interface. Once you learn how to use it, there are no surprises. An abacus user will not end up "turning it off and on again" or "pressing harder" because of some unexpected error.
Fast forward to the present, look around you and find items that have a simple interface and for which you have a clear model in your head. Let me know what you've found.
Modern instruments
What does this artifact do?
Technological progress has forced us into a state of unawareness; the complexity of our instruments increases at the expense of our ability to comprehend them.
You're most likely reading this text off an electronic screen - do you know how it works? What about the infrastructure that was required to deliver the text to you over the network?
Many instruments we use on a daily basis are beyond our grasp. We don't know exactly what goes on inside, nor do we know how to fix them when they're broken.
The solution: designers have to wrap the complex system in a clear interface that helps users build the right mental model.
Conceptual models
If there is a click-pen somewhere around, you may have thought of it when I asked you to find instruments for which you have a clear mental model. The pen is a good example of something complex hidden under an easy interface.
A model of such a pen might be described as:
You click it and then you can write; you click it again and the tip retracts.
Reality is much more profound though - have a look at the "Engineer guy" explaining the inner workings of a retractable pen. It is a complex mechanism most of us are never aware of, and we don't even have to be - that does not preclude us from using these pens to write novels, take notes or sketch portraits.
This is what conceptual models are about. It is not what the system really does; it is what designers want people to imagine about the system's behaviour.
Why hide things from end-users? Sometimes too much information is not a good thing. When you drive a car, do you really need to understand the underlying chemical reactions and the Carnot cycle? And when you use a pen, do you really need to know about the clever patented mechanisms that make it work?
The Internet of things and mental model inertia
The IoT ecosystem is facing an uphill battle against existing mental models. Imagine a "smart scale" - you might think of it as "I step on it and see my mass", and you'd be only partially right. What the device really does might also involve logging the data, sending it to a server, posting a status on social networks on your behalf, etc.
Apply the same logic to a smart thermometer, fan or coffee maker. People buy them and plug them in, thinking that these new devices fully match their mental models, but that can be far from the truth.
Depending on how the "smart" device was implemented, it could actually be a full-blown computer with impressive number-crunching and storage capabilities. If you're dealing with something based on a Raspberry Pi, it is most likely running an actual Linux distro, so it can do whatever a conventional computer can do: mine bitcoins, send emails, host a website, make phone calls, etc.
This is one of the reasons why we're experiencing extremely large-scale distributed denial of service attacks powered by IoT devices - attackers are leveraging all that power that users are unaware of.
On the usability front, the bottom line is that an IoT device lacks several fundamental qualities our instruments have possessed since prehistoric times:
- You don't know how it works and what it really does inside
- It is difficult or impossible to distinguish the different states the device can be in (for example, if those microchips were powered up and doing something, would you be able to tell whether they're adding numbers, subtracting them or doing something completely different, just by looking at them? And if the device were experiencing its own version of a "blue screen of death", would you be aware of that?)
Possible research directions
Research can aid us in finding ways of overcoming the mental model inertia that plagues the IoT world. Here are some hypothetical solutions that might work.
"New" devices = new models
If new devices are not marketed as "smart" versions of their ancestors, people won't try to reuse the old model and will build a new one instead. When you start from scratch, data storage and sharing capabilities can be promoted as fundamental features - thus forming a model that is more consistent with the system image.
For example, instead of "smart thermometer", call it a "medical tricorder". Would this make people regard it as a new class of device and stop applying the thermometer model to it?
The "are your lights on?" effect
This refers to Gerald Weinberg's book of the same title, where the author tells the story of a team of tunnel engineers who addressed the problem of drivers forgetting to turn their lights on or off, depending on the time of day. The solution was a gentle "nudge" that delegated the problem to the drivers themselves in an elegant way.
For example, instead of giving users a link to the privacy settings, the interface could ask "who else can see your temperature readings?" or "continue sharing data with strangers?". Would this reminder be sufficient to convince people to adjust their settings according to their needs?
Other models
It is worth pointing out that there are other models too. For example, an engineer's model is what the system's author thinks the system is doing. When the engineer is worthy of their title, their model fully matches the system image; however, the two can diverge in some cases.
To illustrate that, think of someone who designed a website that displays "Hello world" on the landing page. They understand HTML and CSS, but that does not imply that they understand how the OS instructs the video card to render those pixels, or how the browser connects to the web server to retrieve the code of the page over HTTP.
In this case the designer's model diverges from the system image because they were operating at a high level of abstraction, without having to go "under the hood".
Divergence can be the effect of incompetence as well: a programmer slaps together a few instructions copy/pasted from a forum, the program compiles, and they are satisfied. Superficially the program works well, but it may have side effects the programmer is unaware of.
Sometimes divergence is caused by slips - the programmer knew what they had to do, but they accidentally wrote `=` instead of `==`. In this case they think the system behaves in one way, though in reality it does something else.
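To make the slip concrete, here is a minimal, hypothetical C sketch (the variable names and the scenario are invented for illustration). The code compiles and appears to work, yet it behaves differently from what its author believes:

```c
#include <stdio.h>

int main(void)
{
    int is_admin = 0;               /* an ordinary, non-privileged user */

    /* The intent was `is_admin == 1` (a comparison). The slip below
     * assigns 1 to is_admin instead; the assignment evaluates to 1,
     * so the condition is always true and every user is greeted
     * as an administrator. */
    if (is_admin = 1) {
        printf("access granted\n");
    } else {
        printf("access denied\n");  /* never reached */
    }

    return 0;
}
```

Most compilers will flag this with a warning (e.g. GCC with -Wall), but the program still builds and runs - which is exactly how the engineer's model and the system image quietly drift apart.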
Thus, the engineer's model is far more detailed than the user's model; it can be fully in tune with the system image, but sometimes it diverges as a result of human error, incompetence or limited knowledge. Such distorted models might be referred to as the "sloppy engineer model". This is not official terminology, so take it with a grain of salt and focus on the essence.
Straightforward instruments
Perhaps you saw a pair of scissors or a knife, or maybe an eraser or a ruler. Pencils are a good example too - you know that it is graphite in a wooden cylinder, and the length of the pencil tells you how much "fuel" is left (unless you're in the joke in which a manager decided to optimize pencil production by reducing the length of the graphite, because "nobody actually writes with a pencil that short").
Which ones did you find?