Artificial Intelligence tends to be about creating an autonomous consciousness. But the only way to make this technology relevant for people is to design artificial 'smartness' instead of intelligence. The challenge is not autonomy; it's relevance. The technology should be focused, measurable, and specific, enabling the user to think more clearly. We need Artificial Smartness to overcome the complexity of technology, so we can be human again.
“I’m sorry, Dave. I’m afraid I can’t do that.”
If you’ve ever seen Stanley Kubrick’s 2001: A Space Odyssey, you might have become a little skeptical about Artificial Intelligence (AI). The computer of the future, named ‘HAL 9000’, was integrated into a spacecraft and controlled all of the systems aboard the vessel. HAL’s voice interaction, as well as its ability to learn and make ethical choices, made it very human.
Unfortunately, the system was a little too conscious, and it did not end well for the astronauts on board. If a ‘HAL’-like intelligence could be programmed into our cars, we would feel very insecure. It could control the door locks and prevent us from exiting the vehicle when we chose to. Or worse.
This is, of course, a sci-fi/horror scenario, but the truth is that we are now at the point where it is technically possible. Now is the time to think about how we want this type of ‘intelligence’ to enhance our lives, and in what direction we are leading the technology. Or is it already leading us...?
Teaching tech to learn
In What Technology Wants, Kevin Kelly says that technology, in a way, chooses its own direction and will continue to evolve, whether we like it or not (1). If we want to understand why tech is leading us, we must zoom out to its origin. The first game-changing influence of technology was the invention of language, some 50,000 years ago. Our Homo sapiens ancestors were pretty smart, but with the growing ability to communicate, they could now pass their knowledge on to others. That gave evolution a big boost. This ‘learning tool’, the technology of communication, enabled them to learn a lot faster than other species did, and that great advantage accelerated human evolution.
We have now arrived at a moment in which technology will greatly influence our lives. The next big step for evolution is to extend our ability to learn to machines. Just like our previous learning enhancement, this will boost tech development in a big way. This is a crucial moment in evolution, because it’s our responsibility to steer technology in the right direction. Before tech gains the power to become smarter on its own, we have to decide what course it should follow. We cannot stop the process. We can only be aware of the route it is taking, and teach it to be nice.
We’re not ready for this
We’re putting more and more intelligence into all the objects around us, and that is great. It allows us to be more efficient and might be the next step in our human evolution. The only problem is that it’s coming into our lives much, much faster than we can handle. The plough and the steam train were pretty scary when they were introduced, but they entered our world at a slow pace. Because new technologies like AI evolve so rapidly, our ability to adapt becomes an issue. Mankind cannot cope with evolution at this scale and speed. Once humans develop intelligence that is truly artificial, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, will not be able to compete and will be superseded.
Control, acquire and apply
Because the future of AI technology lies in our own hands, this is a good time to define what it is, or what it must become. ‘Artificial’ means ‘man-made’, or, more precisely, something we create and control. Looking into the future, its definition could be extended to ‘man-controlled’, which means we allow it to be created and to exist. We don’t define every little detail; instead, we allow it to learn, make mistakes and adapt. It’s similar to the way we co-exist with nature: we help it grow, and control it in the environments we own.
‘Intelligence’ means ‘the ability to acquire and apply knowledge and skills’. That all sounds pretty harmless, but I can’t help but think about robots taking over the world. When tech has this ability, it can also survive on its own. It doesn’t need us humans. It becomes the ultimate ‘super being’ that can decide for itself. When the evolution of technology is pushing towards that level of intelligence, the crucial question is, do we allow robots to have a conscience? Can machines learn to experience true emotions and feel the impact of their behavior?
The line between man and machine
‘David’, from the movie Prometheus, is not just a passive computer. He is a proactive digital assistant and has a mind of his own. He is more than just a digital chess opponent. He suggests starting a game and has even developed the desire to win. He was built with a ‘boredom feature’ and can get really upset when he loses. If something displays these kinds of emotions, we will arrive at a point where we need to acknowledge its intelligence. Will we see ‘him’ as ‘one of us’, or should we treat ‘it’ as a computer program doing work for us, entertaining us, serving us?
In the movie, David is considered ‘one of us’ and has a great responsibility to protect the crew aboard the spaceship. On behalf of the human crew, he can decide what is right or wrong. He can choose to accept collateral damage for his programmed version of the greater good. He is a robot who develops his own set of rules, and kills people when they get in his way.
Again, this is fantasy, but we do seem to find it intriguing. In almost all movies about futuristic robots, the antagonist is a computer system that is ‘intelligent’ and makes decisions that we humans would not make. Both in the movies and in real life, it turns out that the intelligence we build doesn’t always meet our expectations, even if the system does what it’s programmed to do. This actually means the technology is not broken; our design process is.
Following the ‘rules’
When designing an intelligent system, we are also challenged to resolve certain ethical questions that are related to AI. An example of this is the ‘KARR’ character you might know from the TV series Knight Rider. Michael Knight drove ‘KITT’, the smart car that was programmed to protect humans. His opponent, ‘KARR’, was programmed differently: for self-preservation. This critical nuance illustrates the problem that can occur when a machine gets to decide what is wrong and what is right. You can teach the computer to ‘understand’ a set of rules, but the strict interpretation of these rules might get people into trouble.
The programmed set of rules often does not apply to unfamiliar situations. The system is designed for a restricted scenario and has simply not been taught how to deal with new variables. The guiding principle in the evolution of humans was to survive and to develop robustness when it came to unexpected situations. Although we can provide some guidelines to a technological system, the set of rules will not always apply, because we cannot anticipate every possible situation in which the system may find itself.
There are two things we understand about unexpected things: they happen all the time, and when they happen, it’s unexpected. Because a machine is not very good at dealing with unpredictable things, it might be better not to trust a computer with tasks that require a human brain. We can design a system that automates a lot of the predictable stuff, but it’s very hard to predict all of the variables that will cross its path, and that’s why humans should still be in control.
When errors become fatalities
A lot of AI scenarios are science fiction, but with the arrival of the Tesla car, many of these Hollywood examples suddenly became a reality. It was all over the news when a self-driving car misjudged a traffic situation, resulting in the first casualty caused by an improperly programmed car. The software interpreted the situation according to the programmed rules, so you could conclude that the car did exactly what it was instructed to do. And that is actually the core of the problem: a human did not design a solution for this specific situation. A human error was the cause of the accident. Not the driver’s error, but the designer’s.
Design researcher Donald Norman (2) puts it this way: “The real ‘intelligence’ is in the designer’s head, but he simply cannot think of everything.” The specific context of the accident was simply not factored into the design process. When quality and safety statistics of a design are at an acceptable level (self-driving cars are involved in fewer accidents than human drivers), we tend to direct our attention towards new, sexy features. The development backlog is probably still filled with unique, unexpected scenarios that should be solved, but the priority has shifted over time. Other features, like automatic parking and animated turn signals, now simply have a higher priority.
Forget HAL, we need a pal
I’m confident that, over time, self-driving cars will be safer than ones with humans at the wheel. The challenge is that we don’t have the time to develop the robustness we desire. It took humans millions of years to become agile and to adapt to situations we had not encountered before. With computers, we don’t have that time. Instead, it would be a whole lot more efficient, from an evolutionary perspective, if we used computer systems to complement us instead of trying to outsmart us. We don’t need a replacement driver. We need a safer experience.
Machines should be our companions and extend our abilities instead of replacing them. From that perspective, the ‘intelligence’ element in AI should be mainly about how to appropriately apply knowledge and skills, and less about acquiring knowledge and skills. The evolution of this technology should be directed primarily towards becoming ‘Smart’, instead of trying to be ‘Intelligent’. We must design the technology to be symbiotic with its users.
Finding the symbiosis
Technology has to support the user and function as a smart companion. It must be in balance with the one using it. Smartness is all about collaboration and communication, and its efficiency is all based on trust.
The relationship can be compared with a jockey riding his horse. The jockey is an experienced rider and trusts his companion. The horse is smart and has a mind of its own, but listens to what the jockey wants. The jockey is in control, but also listens to the horse. They form a good team, and both have the ability to override and convince each other when needed. This is exactly how the relationship with machines should be. We should not strive for robot autonomy; we should invest in making machines smart, so we can benefit from them. There are already a number of examples in which human players combined with bot technology outperformed a team of just human players or a bot-only team. Humans aided by collaborative algorithms are the centaurs of the future: both horse (the smart system) and jockey (the human) in one powerful entity.
How do you...?
The presence of a smart system will influence how we interact with the world. If we map Smart technology onto Bill Verplank’s Interaction Design model (3), we can conclude that a Smart machine will enhance the “How do you know?” part. It will present us with additional data and options, with which we can choose to interact. It helps us to understand the world and provides insights we could not acquire by ourselves. Once we start to trust the suggestions that are presented, our perception of the system will change and start to influence how we feel about the world. We will adjust our behavior.
The key thing here is still the relationship. Artificial Smartness has the opportunity to make the technology around us more personal; it’s contextual design on steroids. People can already use big data to assist them in their daily lives, but understanding the context of the use and providing the right information at the right time on a relationship level is the ultimate form of contextual design. It is beyond profile context and big data; it’s not just calculating what most people would do; it knows you. The interface adapts to your needs because you are partners; you have a relationship with it.
The birth of Artificial Smartness
While we were dreaming of robots taking over the world, Douglas Engelbart (4) had already proposed ‘augmented intelligence’ back in 1962. This was not an attempt to replicate or replace people. He was thinking of the computer as a ‘tool’ that extends and empowers us. We became ‘users’, not just programmers or operators. And our own ‘intelligence’ was there to help us.
With the re-emergence of AI we are currently experiencing, the question of ‘doing something for me’ vs. ‘allowing me to do even more’ is once again a hot topic. The first one will take over our evolution, while the second one will enrich our lives.
The benefit of Artificial Smartness is that it gives us more processing power than our minds can handle. The goal is not to make anything self-driving, but to assist us in driving more safely. The focus for further development of this technology is to find common-ground challenges that require both the skills of human adaptability and raw computer power. We should invest in the relationship with tech and apply Artificial Smartness to make us more human again.
1. Kelly, K. (2011), “What Technology Wants”. Penguin Putnam Inc.
2. Norman, D.A. (2013), “The Design of Everyday Things”. MIT Press, Massachusetts.
3. Moggridge, B. (2006), “Designing Interactions”. MIT Press, Massachusetts.
4. Engelbart, D. (1962), “Augmenting Human Intellect: A Conceptual Framework”. Stanford Research Institute.
Dynamic Design Magazine
This article is part of our Dynamic Design Magazine, Spring 2019. Do you want to continue the conversation about this article, or are you interested in receiving a print edition of the magazine? Reach out to Mirabeau's Creative Director Henk Haaima.
About the author
Paul Versteeg introduced the concept of Dynamic Design: the design should always reflect traces of the user and their context. The way people behave and interact with a service is the key inspiration for the interface. Paul Versteeg is UX Director at Mirabeau, a Cognizant Digital Business.