Knives, Robots and the Pentagon

Telepresence and Following Yourself

The Pentagon
(Touch Of Light, CC BY-SA 4.0 via Wikimedia Commons)

So there I was at the initial outdoor security check at the Pentagon.

They asked each person to open any laptop computers and press a button, and to show them any mobile phones and press a button on those as well. Apparently this was in case you were going to trigger a bomb. It’s an interesting tactic, since if one of those devices really were a detonator, the bomb would still go off…just at an earlier time than planned. Perhaps it’s more a psychological test: somebody malicious would look nervous and/or try running away once they saw the procedure?

A manager and I—both from a robot company called, appropriately, iRobot—continued to another security check, which was like a standard airport scanner. We had one or two radio-controlled military robots in big Otterbox cases that could be rolled around like luggage.

And I had a couple of tools with me just in case, since these were prototypes and some of the wiring had been done by an intern whose strong suit was not wiring. Later I did in fact have to fix a wire, after we left the Pentagon for nearby Washington, D.C. (the Pentagon is not actually in D.C., despite what some movies tell you); I did this hackish repair in a large, posh government office—kind of surreal. In case you’re wondering, the offices in the Pentagon, at least the ones I saw, were not large or posh at all, even for generals.

The main tool I had on hand was a Leatherman. If you’re not familiar with a Leatherman, it’s kind of like a Swiss Army knife but built around a pair of pliers. I quietly put it in the plastic bin and pushed it forward. The Pentagon security guy was like, excuse me?

He picked it up, and then slowly proceeded to unfold every knife, screwdriver, and other tool in the Leatherman. He called over another security guy and said, “Look at this.”

Leatherman knife

I calmly explained I needed that for maintaining the robots. Surprisingly, they seemed to be satisfied with this explanation and so I was allowed into the bowels of the Pentagon with my trusty Leatherman knife.

At some point I was also checked on a computer and they gave me a badge. I was probably supposed to give it back when I left, but I kept it as a memento.

Pentagon badge

Luckily I checked out OK. I say “luckily” because if they were checking for Secret clearance—which I did have—it could have caused delays, because the government messed up my clearance status for several months. My clearance had been perfectly fine since 2003 until this glitch, which resulted in some weird situations.

When I went to visit Boeing in Huntington Beach, they could not verify my clearance. They still let me in, but I literally had to carry a little lantern that flashed a red light. I tried to ditch it a few times, but this Boeing guy with an Alabama accent thicker than molasses wouldn’t let me. Later we got it resolved—somehow the government (some random data entry person?) had added a woman’s info to my file, and it messed up everything. I know this because eventually iRobot hired a security officer and he told me. Before that, I had tried calling various government agencies to figure out what was wrong with my clearance, and they just kept sending me in circles. The security officer had much better luck and awkwardly asked me one day if I had an alternate identity as a female. No, I said, I don’t, and the security officer was off on the case and got it resolved quickly.

Just for fun, here’s a photo of me at the Pentagon. We weren’t supposed to be in that room; we just snuck in to take a photo.

Pentagon Briefing Room

Tele-stuff

Before I continue the story, I want to talk about two words: teleoperated and telepresence.

Telepresence is a medium in which transducers, such as video cameras and microphones, substitute for the corresponding senses of the participant. The participant is able to remotely see and hear from the first person POV with the aid of sensing devices in a remote location.¹

With teleoperated robots it is relatively easy to experience “telepresence.” By “teleoperated” I mean some kind of mobile machine with at least a camera and a remote user interface. For instance: quadcopter video drones.

Or another example: you could attach a wireless camera to a radio-controlled truck and watch the video feed as you drive it, instead of looking directly at the truck itself. Telepresence makes you feel like you are viewing the world from the point of view of the remote-controlled machine.
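If you want to try that hobby version yourself, a few lines of Python with OpenCV are enough to watch such a feed on a laptop. This is just a minimal sketch; the stream URL below is a placeholder for whatever address your wireless camera actually serves.

```python
# Minimal sketch: viewing a remote camera feed for hobby-grade telepresence.
# The stream URL is hypothetical -- substitute your camera's actual address.

import cv2  # pip install opencv-python

STREAM_URL = "rtsp://192.168.1.50:8554/live"  # placeholder address for the truck's camera

def main() -> None:
    cap = cv2.VideoCapture(STREAM_URL)
    if not cap.isOpened():
        raise RuntimeError("Could not open the video stream")
    while True:
        ok, frame = cap.read()
        if not ok:
            break                      # stream dropped
        cv2.imshow("Truck camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break                      # press q to quit
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```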

For at least two decades, every couple of years a company has come out with a telepresence-specific robot, which is often some kind of stick on wheels that’s basically a somewhat-mobile Zoom station.

There’s a great scene in the James Bond movie Tomorrow Never Dies where he wirelessly controls a BMW—nothing amazing, especially for a Bond film, but the twist is that he’s in the backseat of the car while he does this. I found it realistic in that he is totally focused on the telepresence via his cell phone to remotely drive the car, with only a few brief local interruptions. It’s also interesting that the local and remote physical spaces intersect, yet he is still telepresenced to the car’s point of view.

Attention

SUGV robot prototype
(Public Domain)

For the most part, humans cannot consciously handle more than one task simultaneously—but they can quickly switch between tasks.² Humans can also execute a learned “script” in the background while focusing on a task—for instance, driving (the script) while texting (the focus).

Unfortunately, the script cannot handle unexpected problems, like a large ladder falling off of a van in front of you on the highway, which happened to me once. To this day I always steer clear of vans with ladders on the road.

In the military, historically, one or more people would be dedicated to operating a single robot. The robot operator would be in a control station, in a Hummer, or at a suitcase-style control system set up near a Hummer with somebody guarding them. You can’t operate the robot and effectively observe your own situation at the same time. If somebody shoots at you, it might be too late to task-switch.

Also, people under stress usually can’t handle as much cognitive load.

But what if you want to operate a military robot while dismounted (not in a Hummer) and mobile (walking or running around)?

Well…my human-robot interface for the Small Unmanned Ground Vehicle (SUGV) enabled exactly that. (The SUGV was the type of robot we brought to the Pentagon and D.C.) The human constraints are still there, of course, so the user will never have complete awareness of their immediate surroundings while simultaneously operating the robot—but the user can switch between those situations almost instantly.

I’m not getting into the details here, but basically the interface was a wearable system composed of a game-inspired, HUD-like GUI with full-screen video shown on a head-mounted display, plus a game-style hand controller (or sometimes an actual Xbox 360 controller).
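To make “game-style hand controller” a bit more concrete, here is a minimal sketch of the kind of mapping such an interface has to perform: converting gamepad stick positions into left/right track speeds for a skid-steer robot. This is not iRobot’s actual code; the speed limits, track width, and function names are all assumptions for illustration.

```python
# Minimal sketch: mapping gamepad stick input to skid-steer track speeds.
# Hypothetical values and names -- not the actual SUGV interface code.

MAX_SPEED = 1.5      # m/s, assumed top track speed
TRACK_WIDTH = 0.4    # m, assumed distance between the tracks

def stick_to_track_speeds(stick_x: float, stick_y: float) -> tuple[float, float]:
    """Convert stick axes in [-1, 1] (x = turn, y = forward/back, pushed up = -1)
    into (left, right) track speeds in m/s."""
    linear = -stick_y * MAX_SPEED                         # forward/backward component
    angular = stick_x * (MAX_SPEED / (TRACK_WIDTH / 2))   # turn rate, rad/s
    left = linear - angular * TRACK_WIDTH / 2
    right = linear + angular * TRACK_WIDTH / 2
    # Scale down so a hard diagonal never exceeds the motors' limits.
    scale = max(1.0, abs(left) / MAX_SPEED, abs(right) / MAX_SPEED)
    return left / scale, right / scale

if __name__ == "__main__":
    # Stick pushed fully forward with a slight right turn.
    print(stick_to_track_speeds(stick_x=0.2, stick_y=-1.0))
```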

There is a fascinating usage in which you can see yourself from the point of view of a robot. I realized this when I started following myself with one of the SUGV robots.

An Army warfighter testing out the wearable robot control system I designed.

Following Myself

One effective method I noticed while operating the robot at the Pentagon was to follow myself. This allowed me to be in telepresence and still walk relatively safely and quickly. Since I could see myself from the point of view of the robot, I would see any obvious dangers near my body. It was quite easy to get into this out-of-body mode of monitoring myself.

Unfortunately, this usage is not appropriate for many scenarios. Oftentimes you want the robot to be ahead of you, hopefully keeping you out of peril. In many cases neither you nor the robot will be in line of sight of each other.

As interaction design and autonomy improve for robots, they will more often than not autonomously follow their leaders, so a human will not have to manually drive them. However, keeping yourself in the view of the robot’s cameras (or other sensors) could still be useful—you might be cognitively loaded with other tasks, such as controlling arms attached to the robot, high-level planning for robots, or viewing information, all while being mobile yourself.
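To give a flavor of what even a basic follow-the-leader behavior involves, here is a toy sketch of a proportional controller that turns toward a detected person and tries to hold them at a constant apparent size in the camera image. This is not any deployed product’s algorithm; the detector is assumed to exist elsewhere, and every tuning number below is made up.

```python
# Toy sketch of a follow-the-leader behavior: proportional control on where a
# person detection appears in the camera frame.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    center_x: float  # horizontal center of the person's bounding box, 0..1
    height: float    # bounding-box height as a fraction of image height, 0..1

TARGET_HEIGHT = 0.45  # apparent size corresponding to the desired following distance
K_TURN = 2.0          # rad/s per unit of horizontal offset
K_SPEED = 3.0         # m/s per unit of size error
MAX_SPEED = 1.2       # m/s
MAX_TURN = 1.0        # rad/s (positive = counterclockwise)

def follow_step(det: Optional[Detection]) -> Tuple[float, float]:
    """Return (linear m/s, angular rad/s) that keeps the person centered in the
    image and at roughly constant apparent size. Stops if nobody is detected."""
    if det is None:
        return 0.0, 0.0  # lost the leader: stop (a real system would search or alert)
    turn = -K_TURN * (det.center_x - 0.5)            # person to the right -> turn right
    speed = K_SPEED * (TARGET_HEIGHT - det.height)   # person looks small -> too far -> speed up
    return (max(-MAX_SPEED, min(MAX_SPEED, speed)),
            max(-MAX_TURN, min(MAX_TURN, turn)))

if __name__ == "__main__":
    # Leader slightly to the right of center and a bit too far away.
    print(follow_step(Detection(center_x=0.6, height=0.35)))
```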

This is just one of many strange new interaction territories brought about by mobile robots. Intelligent software and new interfaces will make some of the interactions easier/better, but they will be constrained by human factors.

These next couple of photos were taken during the Pentagon trip, in which we went to congressional and Senate budget hearings in D.C. and I demonstrated the SUGV prototype robot in front of congresspeople, senators, and several Army generals, and on C-SPAN.

With robot at Senate Budget Hearing

I probably wasn’t supposed to be touching the gavel before everybody arrived but it seemed like a funny thing to do for a photo.

Holding the gavel before a Senate Budget Hearing

Afterwards the Chief of Staff of the Army and the Under Secretary of the Army did special handshakes with me where they transferred military coins to me as a sign of thanks.

Army coins

AI for Following

I find it disappointing that we don’t have much follow-a-human AI in deployed products yet. I thought the tech would be robust enough by now, since it has been roughly there for at least a decade. Perhaps part of the problem is simply not enough long-lasting lines of mobile robot products outside of vacuum cleaners and industrial systems.

Maybe Amazon Astro will finally be the one to survive more than a couple of years? We’ll see. Amazon has also moved to acquire the aforementioned iRobot, so hopefully they’ll let iRobot keep using its tech and expand it to more products. I say “hopefully” knowing full well that when a big company buys a small one, the small one often dissolves into nothing after a couple of years. Yet iRobot’s spirit is strong, so we’ll see what happens. iRobot’s former military tech continues on as part of Teledyne FLIR.

Amazon Astro
(courtesy of Amazon)

Not far from iRobot’s headquarters in Bedford, Massachusetts, is Piaggio Fast Forward in Boston, the skunk works of Piaggio, the famous Italian maker of the Vespa scooter. Piaggio Fast Forward has made perhaps the most distilled form of the follow-a-human behavior: autonomous luggage.

(Courtesy of Piaggio Fast Forward https://www.piaggiofastforward.com/presskit)

First announced about five years ago, they now have two products, the Gita and the Gita Mini, which you can buy now (at least in the United States). They are essentially containers that can follow their owner around at up to 6 mph. They also have temporary autonomous behaviors: apparently the only one released so far is the ability to “decouple” from the user and go through a doorway, presumably then recoupling to the user.

I, for one, welcome our robot followers.


1. Sherman, W. & Craig, A. (2018). Understanding Virtual Reality: Interface, Application, and Design. Morgan Kaufmann.
2. Kenyon, S. (2021). “Multitasking and Cognitive Architecture.” MetaDevo blog. https://metadevo.com/multitasking/