I think it goes without saying that humans are flawed.
<pause for dramatic effect>
I know. It’s a shocker.
So when we think of the things humans are capable of
doing, there’s always a margin of error, isn’t there? There’s always some
bell curve that factors in because we know that we’ve got our shortcomings.
We’re not perfect.
Which is weird, since we are constantly asking the
machines we build to be. Especially the ones that awe and fascinate us
the most.
Yeah, I’m talking about robots.
Up until now, the concept of a perfectly constructed
robot was just that: a concept.

Such a KIND face… (Cyberman, Copyright BBC)
Our movies and video games tend to depict robots as
these goofy, inhuman types, or seething, wrathful, intrinsically flawed things
that either take on far too many human traits (Cylons) or not NEARLY enough (Cybermen). In any case, our creative little minds tend
to presume that robots are going to lean to the extremes.
But that’s just fun fiction.
However…
What if I told you that there was a process being
developed that allowed scientists to implant a very human like thinking process
into a very non-human robot brain? Would you panic? Because if so,
I’d stop reading now. And maybe seek out some calming tea. Or
professional help, depending on the severity.
Because it’s really happening, folks, and it’s going
to change the way we think about Artificial Intelligence in a number
of ways.
It’s called the Adaptive Character of Thought-Rational architecture, or ACT-R, and the Naval Research Laboratory (NRL) has been adapting it for robots. So what does it do?
According to the recently released NRL white paper, a cognitive architecture is a set of computational
modules that, working together, strive to produce human-level intelligence.
I’m just going to let that sink in for a minute there.
“But wait,” I hear you saying, “didn’t you start off
this blog talking about how humans are flawed?”
Yes. I did. That’s what makes this
all the more exciting. They’re not trying to create the perfect,
godlike deathbots of SciFi lore and repute. Rather, they’re creating a
synthetic version of people, so to speak.
No, wait! Don’t panic. Let me explain.
Thinking like a person means thinking
imperfectly. We remember things strangely. Our memories
degrade over time. We let our emotions guide us. Humans
are driven by patterns, associations, and experience more than by facts and deductive reasoning.
As it turns out, that’s exactly what these scientists
are trying to capture.
Take the Soar architecture, another well-known example: it uses a modest set of building
blocks to achieve human intelligence, including different types of memories
(procedural, semantic, episodic) and different types of learning
(reinforcement, chunking, semantic learning, episodic learning).
Learning is the key point there. Not retaining
information in a database, but actually learning.
These scientists are using ACT-R and its embodied extension, ACT-R/E (Adaptive Character of Thought-Rational/Embodied), to build better,
more comprehensive models of human cognition and leverage these models to
improve the robot’s ability to interact with humans. So why is this
architecture so unique?
Because it’s designed to model human mentality by
placing an emphasis on the limitations of human cognition.
These robots are trying to “get us” down at our
level. Well, that’s an interesting idea. But before you
act insulted, consider this: the argument is that robots who understand people
are, ultimately, better teammates and more natural computational agents. I
guess they have to be able to think like us in order to be efficient and
productive for us.
Not to get too philosophical on you, but what does it
really mean to think like a human?
It all comes down to how we remember things.
For example, say you meet someone for the first time
at a party. They tell you their name, and if
you aren’t completely uninterested in them, you will likely try to
remember it. When a person remembers something, they do so by using a
series of patterns. Your mind will try to tie the new information (the
name) to defining factors (face, voice, clothing, etc.).
When you see this person again, you try to use certain
trigger cues. You see the hair or the face, or smell the perfume, and your
brain tries to tie the new information (the name) to those things. Priming
from contextual clues could provide the boost you need in memory activation,
and the earlier rehearsal of associating those things together would likely be
enough for you to remember the name of this person. Ideally.
Then again, we’re not perfect. You might end up
calling them by the wrong name a few more times before it sticks.
Anyway, ACT-R works on much the same principle. When the robot (or model, as they call it) is introduced to new
information like this, it uses a similar structured pattern to remember
things. So this information is not just being dumped into a memory bank
as raw data to be regurgitated on command.
Rather, it becomes a piece of information that’s
associated with other things.
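If you want a feel for the math under the hood, here’s a rough sketch of the classic ACT-R activation idea in Python. The names, numbers, and the party scenario are mine, not NRL’s, and the real architecture is far more elaborate. But the gist is the same: rehearsal and context cues raise a memory’s activation, and noise keeps retrieval charmingly imperfect.

```python
import math
import random

# A rough sketch of ACT-R-style memory activation (illustrative, not NRL's code).
# A remembered fact (a "chunk") earns an activation score from three parts:
#   1. base-level learning: recent and frequent rehearsals raise activation
#   2. spreading activation: cues in the current context (a face, a perfume)
#      boost the chunks associated with them
#   3. noise: retrieval is imperfect, just like ours

DECAY = 0.5                  # standard ACT-R base-level decay parameter (d)
RETRIEVAL_THRESHOLD = -2.0   # illustrative threshold for successful recall

def base_level(rehearsal_times, now):
    """B_i = ln( sum_j (now - t_j)^(-d) ) over past rehearsals."""
    return math.log(sum((now - t) ** -DECAY for t in rehearsal_times))

def spreading(cues, associations, chunk):
    """Sum of associative strengths from currently active cues to the chunk."""
    return sum(associations.get((cue, chunk), 0.0) for cue in cues)

def activation(chunk, rehearsal_times, cues, associations, now):
    noise = random.gauss(0.0, 0.3)  # transient activation noise
    return (base_level(rehearsal_times, now)
            + spreading(cues, associations, chunk) + noise)

# The party example: "Alice" was rehearsed three times at the party
# (minutes 0, 1, 2); a week later (minute 10080) her face primes the name.
associations = {("face:alice", "name:alice"): 1.5,
                ("perfume:rose", "name:alice"): 0.8}
a = activation("name:alice", [0, 1, 2], ["face:alice", "perfume:rose"],
               associations, now=10080.0)
print("retrieved!" if a > RETRIEVAL_THRESHOLD else "tip of the tongue...")
```

Without those context cues, the week-old memory usually falls below threshold, which is exactly the kind of very human failure the architecture is built to reproduce.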
Octavia and a human escort (photo courtesy of the
Naval Research Lab)
At a high level, ACT-R is a hybrid symbolic/subsymbolic production-based system. That means knowledge is stored as symbolic chunks, while subsymbolic activation values, spreading between associated chunks, determine which memory actually comes to mind.
How do they do this? By using a system called
Specialized Egocentrically Coordinated Spaces, or SECS. This enables
human-like, cognitively plausible spatial reasoning.
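The details of SECS are beyond this blog, but the “egocentric” part is easy to picture: locations get represented relative to the robot’s own body rather than in some fixed map. Here’s a toy illustration (my simplification, not NRL’s code):

```python
import math

# A toy illustration of the "egocentric" idea behind SECS (the real NRL
# system is far richer): a location is stored relative to the robot itself,
# as a distance and bearing, rather than in fixed world coordinates.

def to_egocentric(robot_x, robot_y, robot_heading, obj_x, obj_y):
    """Convert a world-frame object position into (distance, bearing)
    relative to the robot. Bearing 0 means 'straight ahead'."""
    dx, dy = obj_x - robot_x, obj_y - robot_y
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - robot_heading
    # normalize bearing into (-pi, pi]
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    return distance, bearing

# The mug is two meters north of a robot that is facing east:
d, b = to_egocentric(0.0, 0.0, 0.0, 0.0, 2.0)
print(f"{d:.1f} m away, {math.degrees(b):.0f} degrees to the left")
```

That “two meters away, ninety degrees to my left” framing is much closer to how people actually reason about space than a grid of map coordinates is.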
This architecture is more than just retaining
information as it comes in. As we all know, our bodies tend to function
as a whole; that is, memory retention is often a result of the sum of our
parts.
The ACT-R/E model is designed to act as a consumer of
visual information provided by external visual systems. Senses – like
sight, sound, environment – all play a part in how we absorb and interpret
information around us. This architecture wants the robot to get that full
memory-making experience as well.
One of the recent threads in cognitive science has
been embodied, or grounded, cognition.
The focus has been on showing that the body has a
major role in shaping the mind. When the motor and visual modules
participate fully in the spreading of contextual activation, it is possible for
a robot to learn which objects are best grasped with which motor commands.
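To make that concrete, here’s a hedged little sketch of the idea: learning which grasp goes with which object features from trial and error. The objects, grasps, and scoring here are invented for illustration; ACT-R/E does this through spreading activation between its visual and motor modules rather than my bare-bones bookkeeping.

```python
from collections import defaultdict

# A toy sketch of learning object -> grasp associations from experience.
# The feature and command names are hypothetical, purely for illustration.

strength = defaultdict(float)  # (object_feature, motor_command) -> association

def record_grasp(features, command, success):
    """Strengthen feature-command links on success, weaken them on failure."""
    for f in features:
        strength[(f, command)] += 1.0 if success else -0.5

def best_grasp(features, commands):
    """Pick the command with the strongest total association to what we see."""
    return max(commands, key=lambda c: sum(strength[(f, c)] for f in features))

# Experience: handles afford power grips, small rims afford pinches.
record_grasp(["handle", "cylindrical"], "power-grip", success=True)
record_grasp(["handle", "cylindrical"], "pinch", success=False)
record_grasp(["small", "rim"], "pinch", success=True)

print(best_grasp(["handle"], ["power-grip", "pinch"]))  # -> power-grip
```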
Basically, these robots have the capacity to
“understand” all their working parts, and those parts can work together to form
information. So even if it talks like a human and thinks like a human, that doesn’t mean it is a human.
Speaking of us living, breathing specimens…
There are some things about these robots that deviate
from the standard human procedure. Things like fatigue, emotional
instability, unpredictability, sleepiness, weepiness, derpiness; they’re all intrinsically human aspects. Aspects that robots
have no real reason to contend with, though that hasn’t stopped some SciFi
writers from exploring the possibility of having depressed, mopey robots.
Anyway, that doesn’t mean these robots cannot be
taught how to approach humans by understanding what makes them
so crazy *ahem* interesting. The high-level goal behind
this is to give robots a deep understanding of how people think at the process
level in order to make them better teammates.
They’re doing this by equipping robots with the
functionality to understand human behavior – like right vs wrong – and use that
information to act accordingly. Skeptical? Well, so was I. I
mean, how does a robot know the difference between right and wrong when
philosophers have been making a living debating that very idea for centuries?
Turns out, in this case it’s more of a holistic approach to situations and crises. Noticing how humans tend to make mistakes in predictable ways, for example, or watching how their eyes move when they retrieve memories, can set a standard for what to expect from us.
By developing robots that further understand how
people think – including errors – they can leverage these models as tools for
robots to use as they encounter humans in the world.
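For instance, a model of predictable forgetting can be turned around and used as a prediction tool. Here’s a rough sketch built on ACT-R’s standard recall-probability equation; the parameters and the reminder scenario are mine, purely for illustration.

```python
import math

# A hedged sketch: using ACT-R's recall-probability equation to predict
# when a human teammate has probably forgotten an instruction, so a robot
# could offer a reminder. All parameter values here are illustrative.

DECAY, THRESHOLD, NOISE_S = 0.5, -1.8, 0.4

def recall_probability(rehearsal_times, now):
    """P(recall) = 1 / (1 + exp((tau - A) / s)), A from base-level learning."""
    activation = math.log(sum((now - t) ** -DECAY for t in rehearsal_times))
    return 1.0 / (1.0 + math.exp((THRESHOLD - activation) / NOISE_S))

# The teammate heard the instruction once, five minutes in (times in minutes).
for minutes_later in (1, 10, 60, 480):
    p = recall_probability([5.0], 5.0 + minutes_later)
    print(f"{minutes_later:4d} min later: P(recall) = {p:.2f}")
```

Run it and you get a classic forgetting curve: near-certain recall after a minute, a coin flip after an hour, and a robot that knows a nudge might be welcome by the end of the shift.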
For example, the scientists put a robot to work on a
serious project: playing hide-and-seek. Given the fact that ACT-R is designed to learn and understand, the robot was able to grasp the concept of
the game fairly quickly. The model was in fact able
to mimic the outward behavior of the person, perfectly matching the
hiding behavior.
That sounds small, but it’s really a big, big
deal. The robot was also able to play a credible game of hide-and-seek against a human. Think about that.
Don’t believe me? See NRL’s demonstration video for yourself.
Just when you thought you’d seen it all,
eh? It’s like watching the early stages of robot evolution take
place.
This architecture is designed with a Theory of Mind (ToM) concept. That is, the ability to understand the beliefs, desires,
and intentions of others. So why give the robots
this empathetic concept? ToM is used to improve the
robot’s ability to interact with people. This is pertinent
because research in psychology has shown that without ToM, people can be
severely impaired in their abilities to interact naturally with others.
Apparently, the same goes for robots.
Simply put, robots are a little freaky when they disregard these things.
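If you want to see why that matters, here’s a toy example of the ToM idea (emphatically not NRL’s model): the robot tracks what its teammate has actually observed, separately from the true state of the world, so it can predict behavior driven by a false belief.

```python
# A toy illustration of Theory of Mind (hypothetical, not NRL's model):
# the robot keeps a separate record of what its human teammate has seen,
# so its predictions track the human's beliefs, not the true world state.

world = {"keys": "drawer"}            # ground truth the robot can see
human_belief = {"keys": "drawer"}     # what the human last observed

def human_observes(item):
    """The human sees the item's true location; their belief updates."""
    human_belief[item] = world[item]

def predict_search(item):
    """The human will look where THEY believe the item is."""
    return human_belief[item]

# The keys move while the human is out of the room:
world["keys"] = "backpack"
print(predict_search("keys"))   # -> drawer (the human's now-false belief)

human_observes("keys")
print(predict_search("keys"))   # -> backpack
```

A robot without that second dictionary would cheerfully assume everyone knows what it knows, which is precisely the freakiness in question.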
So why all of this, you wonder? Why give robots
the ability to think like humans, consider their intentions, and learn to play
well with us? Well, why else do you train? For the mission.
These robots are being designed to be good teammates to people. To help
them. To perform missions. Just like us, they are given a task –
like fighting fires for example – and they need to be the best equipped to
complete that task to the best of their ability.
In this case, learning how to help humans means having
a better robo-understanding of them. The best part? This is only
the beginning. The road to good, embodied cognitive models has been and
continues to be long, but the scientists at NRL say it’s going to be well-worth
the effort.
I guess you know how the old saying goes…
To err is human. To learn how to err is
robot.
Jessica L. Tozer is a blogger for DoDLive and Armed
With Science. She is an Army veteran and an avid science fiction fan,
both of which contribute to her enthusiasm for technology in the military.
Special thanks to the Naval Research Laboratory
for providing the information and general awesomeness factor needed for this
story.
