On Strong AI & Robotics

Learning How to Make Robots in 2001

No, that’s not a typo—I don’t mean 2021, I really mean the year 2001 as in 2001: A Space Odyssey.

This is the story of how I learned to make robots.

I think it will be most interesting to those who have never made physical robots before. But even if you are a roboticist, this adventure might be at least amusing.

In The Lost Worlds of 2001… “The Birth of Hal,” I reveal that the ship’s computer was originally named Socrates…and was conceived of as a fully mobile robot.
—Arthur C. Clarke, HAL’s Legacy

There are three main disciplines, and three corresponding ways to get into making robots:

  1. electronics
  2. mechanics
  3. software

Personally, I’ve always been weak in electronics and mechanics but stronger in software, and this became very evident in middle school and high school.

I had a lot of fun writing code (mostly BASIC) as a kid. But also for my entire childhood I loved taking apart junked electronic equipment. I would hook batteries directly to electric motors I’d extracted from junk to make them spin.

I also caused some short circuits—even in college, when I should have known better, I managed to fry a wire inside the walls of my dorm room—and electro-shocked myself numerous times. One of those times wasn’t even from hacking: I was simply holding onto a metal microphone hooked to an old tube amp that went wonky for a minute and shocked me through the handle of the mic.

Anyway, I was better at code.

And even though I had played with motors and stuff I was missing a key thing…

That key was how to hook code up to physical objects: how could I use my computer to make a motor move? This simple little interface is really the one hurdle to get through to get into making robots.

If you were to start searching for academic ways to “learn robotics,” those sources would say you have to learn a whole bunch of math, physics, and computer science tied to the history of industrial robots (kinematics and so on)…but you can ignore all of that.

If you want to make a little robot that runs around and has some kind of computer code running on it—it’s a lot easier.

The easiest way would be to buy an existing robot kit. LEGO had been working on those for a long time but those easy kits didn’t really become a big thing until I was at the end of my college years. And Arduinos didn’t even exist yet.

So I had to do some extra steps.

First, how to achieve that first critical key interface—computer code telling a motor what to do?

Enter the microcontrollers.

What is a Microcontroller?

As you can imagine, if the computer can control a motor, then you can hook that motor up to wheels or an arm or whatever and make it interact with the world. Sure, you’ll want sensors so that the robot knows, for example, that it just touched something or found what it’s looking for, but a great first step is just to get a motor under some kind of control, no matter how loose and open-ended it may be.

A type of computer that is great for this is called a microcontroller.

A microcontroller is basically a computer on a chip. It contains a processor and some memory and some input/output circuitry. The microcontroller processor is typically a lot less powerful (and cheaper) than a CPU in a desktop or phone.

If you want to get started fast, a microcontroller development board is easiest. It has power circuitry and easy to use physical interfaces so you can hook stuff up to the inputs and outputs. And that is the critical thing we need to make a motor move from code—the outputs.

There are lots of general-purpose microcontroller boards, and also things like Raspberry Pis, boards specialized for robots, and boards with GPUs (Graphics Processing Units) such as Nvidia’s Jetson line.

This is a classic microcontroller—the Motorola 68HC11:

Zooming out, this is the development board (and a little manual that came with it):

Even though the 68HC11 was getting kind of old, they still made them in 2001 and it’s what my professor chose for us to learn assembly language and machine language on. The professor was a wonderful old man from New Hampshire who would say things like he’d just as well ride a horse as drive a car.

Machine language is basically the language of the processor, and it is a series of numbers. Everything that a computer runs is in machine language. All other programming languages are compiled down to machine code (or are interpreted by something else that was compiled down to machine code).

Assembly language is a bare bones one-to-one mapping from machine codes to symbols that are human-readable.
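To make that mapping concrete: the HC11 instruction LDAA #$40 (load accumulator A with hex 40) assembles to the two bytes $86 $40, and a JMP to an address assembles to $7E followed by the two address bytes (that $7E opcode shows up in the code later in this article). Here is a toy assembler sketch in Python covering just those two instructions:

```python
# Toy illustration of the one-to-one assembly <-> machine code mapping
# for two 68HC11 instructions. Opcodes: JMP extended = $7E (used later
# in this article), LDAA immediate = $86 per the HC11 instruction set.
OPCODES = {
    ("LDAA", "imm"): 0x86,  # load accumulator A, immediate operand
    ("JMP",  "ext"): 0x7E,  # jump, extended (16-bit) address
}

def assemble(line):
    """Assemble one instruction line into a list of machine-code bytes."""
    mnemonic, operand = line.split()
    if operand.startswith("#$"):          # immediate operand, e.g. #$40
        return [OPCODES[(mnemonic, "imm")], int(operand[2:], 16)]
    value = int(operand.lstrip("$"), 16)  # extended address, e.g. $B600
    return [OPCODES[(mnemonic, "ext")], value >> 8, value & 0xFF]

print([f"{b:02X}" for b in assemble("LDAA #$40")])  # ['86', '40']
print([f"{b:02X}" for b in assemble("JMP $B600")])  # ['7E', 'B6', '00']
```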

Education-wise, I suspect that learning assembly and machine language on a relatively simple architecture, like an 8-bit microcontroller such as the HC11, is much easier and better than learning on a more complicated architecture like Intel x86 (which is, for instance, what you’ve got in your desktop Mac or PC). I tutored students for both the 68HC11 and x86. I found that either way, assembly language was quite difficult for a lot of students to learn—and even comprehend. But it was far worse with x86.

And, with an embedded platform like the board, the labs can be set up so you actually make lights blink and motors move. That’s way more awesome than just seeing some debug text print out on a console. It also seems to force people into the mindset of understanding what layers they are dealing with in computer architecture.

Making Motors Move

One of the labs the professor had us do involved hooking a stepper motor to the microcontroller board. Stepper motors are special in that they can maintain a position, but in this experiment that didn’t really matter.

Normally you don’t just hook a motor straight into your computer board—you need some extra circuitry or else you might fry the whole thing. But in this case it was a very low power motor and relatively safe.

The code took an input from a function generator to define the frequency, which determined the speed of the motor. But we’re just talking about outputs right now, so you can ignore that—I could have had no inputs and still been able to move the motor.
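For the curious, the general stepper idea is easy to sketch: you energize the motor’s coils in a repeating sequence, and the rate at which you step through the sequence sets the speed. A generic full-step pattern in Python (illustrative only, not the exact wiring from that lab):

```python
# Generic full-step drive sequence for a 4-coil stepper, one coil
# energized at a time. This is an illustrative pattern, not the exact
# setup from the lab described above.
FULL_STEP = [0b1000, 0b0100, 0b0010, 0b0001]

def step_pattern(step_index):
    """Coil pattern for a given step. The motor 'holds' whatever
    pattern is currently applied, which is why steppers can maintain
    a position."""
    return FULL_STEP[step_index % len(FULL_STEP)]

# Stepping through the sequence faster = higher motor speed. E.g. a
# 200-step/revolution motor stepped at 400 Hz turns at 2 rev/s.
steps_per_rev = 200
step_rate_hz = 400
print(step_rate_hz / steps_per_rev)  # 2.0 revolutions per second
```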

In summary, once that motor started moving, it changed my entire outlook.

It made me realize that I had bridged the gap between computer code and physical reality—maybe I really could get into robotics…

Making Motors Move with PWM

One of the most popular ways to control motors and other analog devices is with PWM (Pulse Width Modulation).

Here is an example of PWM motor control that I wrote in 68HC11 assembly, in case you’re curious what HC11 assembly code looks like (skip over this for an explanation of PWM):

* Data Segment			
* Constants:
TMSK1	EQU	$1022
TMSK2	EQU	$1024
TFLG1	EQU	$1023
TFLG2	EQU	$1025
TOC2	EQU	$1018	* Timer Output Compare 2 Register Pair (2 bytes)
TCTL1	EQU	$1020	
APORT	EQU	$1000	* output port
EPORT	EQU	$100A	* input port
PACTL	EQU	$1026	* Port A Control
PVTOF	EQU	$00D0	* Pseudo Vector address
PVOC2	EQU	$00DC	* Pseudo Vector address
* Variables:
        ORG     $180
Toggle	FDB	$7FFF
* Code Segment			
	ORG	$B600
INIT	LDS	#$0041
* Pseudo Vectors (3 bytes each):
	LDAA	#$7E		* JMP opcode
	STAA	PVTOF		* TOF PV byte 1
	STAA	PVOC2		* OC2 PV byte 1
	LDX	#TOF_ISR	* TOF ISR address
	STX	PVTOF+1		* TOF PV bytes 2,3
	LDX	#OC2_ISR	* OC2 ISR address
	STX	PVOC2+1		* OC2 PV bytes 2,3
	CLR	TCTL1		* timer disconnected from output
	LDAA	#$40
	STAA	TMSK1		* OC2 enable
	LDAA	#$80
	STAA	TMSK2		* overflow enable
	STAA	PACTL		* PA7 = output
	CLI			* enable interrupts
MAIN	BRA	MAIN		* idle loop; all the work happens in the ISRs

* Subroutine: TOF_ISR
* Desc: Timer Overflow ISR
* Stack Size: 0 bytes
TOF_ISR	LDAA	#$FF
	STAA	APORT		* high output
	LDD	Toggle
	LDAA	EPORT		* input new duty cycle
	STD	Toggle		* ->Toggle_MSB
	STD	TOC2		* new duty cycle affects how often OC2 interrupts
	LDAA	#$80
	STAA	TFLG2		* reset flag
	RTI			* return from service routine

* Subroutine: OC2_ISR
* Desc: Output Compare 2 ISR
* Stack Size: 0 bytes
OC2_ISR	CLR	APORT		* low output
	LDAA	#$40
	STAA	TFLG1		* reset flag
	RTI			* return from service routine
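In rough terms: the output goes high at every timer overflow and low again when the free-running 16-bit counter reaches the value in TOC2, so TOC2 sets the duty cycle. A Python sketch of that timing scheme (simplified, ignoring interrupt latency and register details):

```python
def simulate_period(toc2, counter_max=0x10000):
    """One overflow-to-overflow period of a free-running 16-bit timer:
    the output is high from overflow (count 0) until the output-compare
    match at `toc2`, then low for the rest of the period. Returns the
    resulting duty cycle as a fraction."""
    high_ticks = toc2                  # ticks with the pin high
    low_ticks = counter_max - toc2     # ticks with the pin low
    return high_ticks / (high_ticks + low_ticks)

# The $7FFF initial value of Toggle in the listing is roughly half of
# the 16-bit counter range, i.e. about a 50% duty cycle.
print(simulate_period(0x7FFF))  # ~0.5
print(simulate_period(0xC000))  # 0.75 -> 75% duty cycle
```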

What is PWM?

Imagine flipping a light switch on and off a thousand times per second. But sometimes you leave it on longer or off longer. For example, suppose that out of those thousand flips in one second, it’s on for 750 and off for 250. The overall effect is that the light is on at 75% brightness during that second.

PWM is like that. Imagine hooking your switch to a motor. If you switch it with 75% ONs, the motor will spin at roughly 75% of full speed.
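The arithmetic behind both examples is just: average output equals supply level times duty cycle. A quick check in Python:

```python
def average_level(supply, on_count, off_count):
    """Average output of a fast-switched supply: the fraction of time
    spent ON (the duty cycle) times the supply level."""
    duty = on_count / (on_count + off_count)
    return supply * duty

# The light-switch example: 750 ONs and 250 OFFs out of 1000 flips.
print(average_level(1.0, 750, 250))  # 0.75 -> 75% brightness
# The same idea driving a motor from 5 V logic-level PWM:
print(average_level(5.0, 750, 250))  # 3.75 V effective
```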

(diagram from Wikipedia)

The other essential concept is that PWM is typically used as an interface between the digital world of the computer and the analog world of motors. A computer processor deals with ones and zeroes—ONs and OFFs. Those are outputs on computers—for instance, on a microcontroller like the 68HC11 in this article. The ones and zeroes are represented on those outputs with HIGH and LOW voltages. All the other devices connected will use the same standard for what is HIGH and what is LOW, e.g. 5 volts as HIGH and 0 volts as LOW.

The problem is how to get those HIGHs and LOWs to control a motor, which just wants voltage. If it’s a tiny motor it can run directly off of the logic voltage (e.g. 5 V), but that’s not generally a good option. In fact, you usually want the motor power lines isolated from the computer lines.

So PWM (remember, switching really fast) is used. The computer outputs a square wave (HIGHs and LOWs) to a switch, which switches the power from another source (e.g. a battery) into the motor. The higher the percentage of ones (HIGHs) coming out of the computer, the faster the motor goes. Obviously if there is something heavy attached to the motor rotor, it won’t actually go faster, but it will try.

The “switch” interface can be as simple as a few transistors, or it could be something like an H-bridge which is usually another kind of chip. Don’t worry about what those terms mean—the idea is you need something to allow the computer to connect to the motor via information (how fast to go) but the actual power for the motor is isolated from the computer. So the computer doesn’t blow up.
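If you do want a rough feel for it, here is a toy model of generic dual-input H-bridge behavior in Python (a simplification for illustration, not any particular chip’s datasheet):

```python
def h_bridge(in_a, in_b, enable_pwm_duty):
    """Toy model of a generic dual-input H-bridge: two logic inputs
    pick the current direction through the motor, and PWM on the
    enable pin scales the effective power. Returns signed motor drive
    as a fraction of full battery power. (Real chips differ in the
    details, e.g. brake vs. coast when both inputs match.)"""
    if not (in_a ^ in_b):        # both high or both low: no rotation
        return 0.0
    direction = 1 if in_a else -1
    return direction * enable_pwm_duty

print(h_bridge(1, 0, 0.75))  # 0.75  -> forward at 75% power
print(h_bridge(0, 1, 0.75))  # -0.75 -> reverse at 75% power
print(h_bridge(0, 0, 0.75))  # 0.0   -> stopped
```

The point of the model: the computer only ever outputs logic-level signals (the inputs and the PWM), while the bridge routes battery power through the motor.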

Why was this Exciting?

This was really awesome when I learned about it, because up until then my programming had no way to interface with motors.

Once you start learning how to get motors spinning from computer code, and also get inputs from sensors into the code, you start realizing that even this low level near-the-metal basic robotics is not as difficult as you might think…

The Mini-Me Robot

This is a robot I threw together in about an hour back in 2003, still in college.

This was made out of an Innovation First educational robot kit, which came with the official FIRST robotics kit at the time. This small edu kit later evolved into the VEX robotics kit. These kits come with the PWM interfaces already built in, and the electronics and wires are already done for you, so you can just plug things together and get going fast.

FIRST is a high school education program for making competition robots, which are largely radio controlled but can have autonomy as well. I had been spending a lot of time in a basement laboratory at Northeastern University, partly to mentor the FIRST team hosted there but also because I wanted to learn more about robotics myself.

This little robot had the same computer as the real competition robot, so it was useful as a programming testbed. Eventually it was dubbed “Mini-Me” after the second Austin Powers movie.

The photo of the Mini-Me robot only shows the original configuration. Later on I turned the infrared sensors on the front downward and I programmed it to be a simple line follower (as in driving itself along a line made of tape on the floor). This program was primarily a FSM (Finite State Machine).

If you just looked up FSM and are thinking this looks like a bunch of math bullshit, don’t worry—in this context it’s basically just a behavior the robot does, and the robot will continue to do that behavior until it’s triggered to do another behavior, and it will keep doing that one until it’s triggered and so on…
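As a sketch of what a line-follower FSM can look like (the state and sensor names here are hypothetical; the actual Mini-Me code is long gone):

```python
# Hypothetical line-follower FSM: two downward-pointing IR sensors,
# three behaviors. The robot keeps doing the current behavior until a
# sensor reading triggers a transition to another one.
def next_state(state, left_on_line, right_on_line):
    if left_on_line and right_on_line:
        return "DRIVE_STRAIGHT"
    if left_on_line:        # line is drifting left -> steer left
        return "TURN_LEFT"
    if right_on_line:       # line is drifting right -> steer right
        return "TURN_RIGHT"
    return state            # lost the line: keep doing the last behavior

state = "DRIVE_STRAIGHT"
for reading in [(1, 1), (1, 0), (0, 0), (0, 1)]:
    state = next_state(state, *reading)
    print(state)
# DRIVE_STRAIGHT, TURN_LEFT, TURN_LEFT, TURN_RIGHT
```

Note the third reading: with both sensors off the line, the robot just keeps turning until it reacquires it, which is exactly the kind of chunky-but-effective behavior shown in the curve diagram below.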

This next diagram shows a top view of the robot’s path (in red), an example of how its behavior around a curve is chunky but still effective.

I also designed a slightly larger program of which the line tracker was one component.

Behavior diagram
Flow chart

Aside from learning more about programming robots, working with that FIRST team was also a good lesson in systems—the amount of time needed for testing and integration is massive. With robots, most people never get to the interesting programming because it takes so long to make anything work at all. These robot kits help though, at least for programmers, because you don’t have to waste as much time reinventing the wheel.

Later on I used some of those Innovation First edu kit parts for my MicroMouse maze solving robot (unfortunately I don’t have any photos of that — too poor to own a camera at the time, and cheap mobile phones didn’t have cameras yet, and the robot itself was not saved.)

Towards Robot Scripting

During 2003 and 2004, I also started making my own robots outside of the FIRST program. Eventually I made my own microcontroller based board which I attached to an RC truck chassis.

The truck chassis was originally a RadioShack Black Phantom II. It has a drive motor and a steering servo motor.

The Board

This board was intended for a spherical robot for my college final project; however, I also used the aforementioned RC truck chassis for testing. The board was fairly generic—there wasn’t anything specific to spherical robots in its design or programming, with the exception of the size and shape of the board, which was made to fit in the sphere shell.

This robot used an 8-bit microcontroller (Microchip PIC18LF458) board that I hacked together myself. All of the robot code, including comms (communications), the script engine, sensor interaction, and motor control ran on the microcontroller.

The main components of the board are:

  1. the H-Bridge chip: an SN754410, which can control up to two motors (I was planning to add another H-Bridge chip but never got to it)
  2. the oscillator: a CRY 20 MHz crystal
  3. the comms interface: a simple circuit connecting a wireless transceiver to the PIC’s RX and TX (Receive and Transmit) pins using resistors
  4. the voltage regulator: an LM317

The board is about 10 cm x 10 cm (not including the wireless transceiver, which is the metal rectangular box in the photo). I was powering the system off of two 9V batteries (one for the microcontroller and one for the motors).

The Breadboard

Breadboards allow you to hack on circuits quickly and without soldering.

Before soldering together the board shown above, I did initial testing for this project on a breadboard and used an ICL232 chip (RS-232 transmitter/receiver) to do direct serial comms—which means I could hook it to a desktop or laptop PC and communicate with it for development purposes.

As one typically does, once wired comms works, you then try out wireless…

After that I soldered together the actual board.


You can skip this section unless you want to know some more technical details.

For wireless, I used Cirronet WIT2410 2.4 GHz frequency hopping spread spectrum serial transceiver modules (Cirronet doesn’t exist anymore, I think at some point these devices transferred into the Murata brand). The WIT2410 module is that metal rectangular box in the photos above. The antenna is the little tan rectangular object on the edge of it.

Eventually I had another Cirronet device, the SNAP 2410, running as the base station. The SNAP 2410 was an access point that interfaces Ethernet to wireless serial. The base station is what’s connected to the operator’s computer. I threw together a TCP server for initial testing of comms interfaces: PC <-> SNAP 2410 <-> WIT2410 radio modem <-> robot.
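A minimal Python sketch of that kind of test server (an echo loop standing in for the robot end of the link; the real comms details aren’t reproduced here):

```python
import socket
import threading

def run_echo_server(host="127.0.0.1"):
    """Minimal TCP server for comms testing: echoes whatever the client
    sends, the way you'd loop back a serial link to verify it works."""
    srv = socket.create_server((host, 0))   # port 0: pick any free port
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(1024):
                conn.sendall(data)          # echo back to the client
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

# Loop-back test: send a fake robot command, expect it echoed back.
port = run_echo_server()
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"DRIVE 50\n")
    print(c.recv(1024))   # b'DRIVE 50\n'
```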


In all of these cases, I started to realize that a lot of things that we wanted these robots to do could be implemented with simple scripts.

Why hard-code every new behavior or experiment or whatever? A script would make programming the robot a lot easier. It would be better than the typical hobby/amateur robot programming procedure, which is:

  1. Modify the code (usually hastily).
  2. Recompile.
  3. Fix compilation errors.
  4. Download to the robot computer.
  5. Test empirically.
  6. Fix whatever broke.

Rinse, repeat. Indeed, repeat when you realize the robot didn’t do what you wanted. Repeat every time your one small code change introduced a major bug. Et cetera, ad nauseam.

But if the control code is well-tested and proven to work, a script being interpreted by that code will not break the code. You may still create undesirable robot activity, but it will be a bit more obvious why something unexpected happened, and easier to fix.

Script Engines for Robots

The key idea is that you have a robust hard-coded core (the engine) which can be used in many ways just by changing the input data.

The concept is kind of like video game programming. You have an engine, for instance written in a compiled language like C++, where all the core stuff lives. And then you have scripts which are fast and lightweight and can be changed and tested super-fast that run on the engine.

Ideally a script can be changed to completely redefine what the robot will do without needing to modify the embedded code (embedded means the compiled code running on the robot).


In 2004 I hacked together a simple scripting framework for mobile robots (called SIRCS [Scripted Intelligent Real-time Control System]). It didn’t remove the necessity to develop basic low-level behaviors, but was meant to take advantage of those.

For instance, a typical low-level robot behavior might be to drive. And then you will want to be able to turn, to drive or turn for a certain distance, and so on. The scripting framework doesn’t specify feedback loops or how anything gets done. And there was the possibility that even if any given low-level behavior was not great, a script composed of many badly working behaviors might still be good enough.

The script engine is basically a couple of state machines, and of course the code to achieve those states which is specific to the robot.
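A minimal sketch of the engine idea in Python (the command names here are hypothetical; SIRCS itself was compiled code running on the PIC):

```python
# Minimal script-engine sketch: a fixed, well-tested core dispatches on
# script commands, so changing the robot's behavior means changing the
# script, not recompiling the engine. Command names are hypothetical.
def run_script(script, robot_log):
    handlers = {
        "DRIVE": lambda dist: robot_log.append(f"drive {dist} cm"),
        "TURN":  lambda deg:  robot_log.append(f"turn {deg} deg"),
    }
    for line in script.strip().splitlines():
        cmd, arg = line.split()
        if cmd in handlers:
            handlers[cmd](int(arg))
        else:
            # A bad script line can't crash the engine; it's just noted.
            robot_log.append(f"ignored: {line}")

log = []
run_script("""
DRIVE 100
TURN 90
DRIVE 50
""", log)
print(log)   # ['drive 100 cm', 'turn 90 deg', 'drive 50 cm']
```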

My design also had meta-states:

Interfacing the Framework to Higher Levels

I envisioned interfacing this framework with higher level abilities such as missions and a high-level arbitrator to choose when multiple internal commands were attempted simultaneously.

I came up with this design of a semi-autonomous framework. The diagram is arranged so that the topmost boxes are remote (at the user) and as you go downwards, the boxes become more robot embedded. The Arbitrator is a module to decide how to deal with conflicting commands, and could be remote or embedded.

Human-Robot Interaction

Scripting also enables certain graphical interfaces for easy robot autonomy configuration. For instance, my original interface for this was a path editor in which a user just drew a path overlaid on a map image—and this was supposed to automatically create the correct robot control script. I made this prototype GUI for that using VisualBasic:
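The core of such a path editor is converting the drawn polyline into drive and turn commands. A hedged sketch of that conversion in Python (the command format is hypothetical; the original VB tool is long gone):

```python
import math

def path_to_script(points):
    """Convert a drawn path (a list of (x, y) map points) into a list
    of turn/drive commands. Command names are illustrative only."""
    script, heading = [], 0.0           # assume the robot starts facing +x
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        target = math.degrees(math.atan2(y1 - y0, x1 - x0))
        turn = (target - heading + 180) % 360 - 180   # shortest turn
        if abs(turn) > 1e-9:
            script.append(f"TURN {turn:.0f}")
        script.append(f"DRIVE {math.hypot(x1 - x0, y1 - y0):.0f}")
        heading = target
    return script

# A simple L-shaped path: forward 100, then left 90 degrees, forward 50.
print(path_to_script([(0, 0), (100, 0), (100, 50)]))
# ['DRIVE 100', 'TURN 90', 'DRIVE 50']
```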

Later I made an application that moved the map concept to a tab, and included other tabs for the programmer (me) to make new scripts or control the robot in Immediate mode. This was also my first—and last—GUI made using GTK.

Immediate mode basically means manual real-time control of basic functions like movement (as with an RC vehicle). Clicking GUI buttons to control a robot is a horrible method—this was mostly intended to evolve into a script-recording (aka macros) type of thing. The cam tab, if it had worked, would have shown video from the robot’s camera (video was transmitted through its own separate analog radio link and had to be re-digitized with a video input box on the PC—not a recommended method).
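The macro idea is simple to sketch: record each immediate-mode command as it’s issued, then dump the recording as a script (command names hypothetical):

```python
class MacroRecorder:
    """Record immediate-mode commands as they are issued, so a manual
    drive-through can later be replayed as a script. Command names are
    hypothetical."""
    def __init__(self):
        self.recording = []

    def immediate(self, cmd):
        self.recording.append(cmd)   # record the command...
        return f"sent: {cmd}"        # ...and pass it through to the robot

    def to_script(self):
        return "\n".join(self.recording)

rec = MacroRecorder()
rec.immediate("DRIVE 100")
rec.immediate("TURN -45")
print(rec.to_script())
# DRIVE 100
# TURN -45
```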


As a poor undergraduate, one might wonder how I was acquiring hardware for making robots. And how did I afford travel to various conferences?

Well, by being crafty that’s how.

For the last robot, the PIC microcontrollers were free samples from Microchip. I convinced a company called Cirronet to donate the wireless equipment.  A lot of miscellaneous small components like resistors and capacitors came from a Northeastern University (NU) lab that gave out small amounts of those kinds of components to engineering students. The truck chassis was donated by my fellow NU student Rob Borgeson.

And I got $1000 by applying for a little-known NU Undergraduate research grant. I bucketed some of that for my senior project team—which was making spherical robots—for mechanical parts and used a large chunk of it to go to an SPIE conference in Orlando where I presented a paper.

And that was not the only conference I did: for the first IEEE student conference I went to, it was right in Boston where I lived…but for the next one held in Long Island, I basically stowed away with a Boston University team and didn’t pay for any of the travel/lodging. And later I also got a tiny amount of funds from NU via Textron Systems to attend a local robotics conference in Boston.