Assistant Professor, Penn State University
Department of Human Development and Family Studies

Older research projects

Pennie: Educational and Affective Co-robotics

Co-robots are robots that work side-by-side with humans, assisting them and adapting to their needs rather than operating as isolated entities. I'm collaborating with Dr. Conrad Tucker to develop Pennie, an affective educational co-robotic system.

Emotional states such as frustration and engagement play a constant part in our performance of everyday tasks, and a vital part in the learning process. A student in an engineering class might be doing well, but might feel so uncertain about their work, or so overwhelmed by it, that they leave STEM for a different field. Pennie is designed to understand and adapt to the emotional state of the humans it interacts with. For example, Pennie might watch a student do a shoddy job on a task simply because they were bored with it, and recommend a more challenging problem to approach next. Or she might see someone perform a task well but feel very anxious about it, and recommend a second task at about the same level of difficulty, letting the student get comfortable with their work before moving up.
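To give a flavor of that kind of adaptation rule, here's a minimal sketch in Java. The class name, thresholds, and two affect dimensions are illustrative assumptions, not Pennie's actual implementation.

/**
 * A hypothetical sketch of an affect-aware task recommender like the one
 * described above. The thresholds, names, and two affect dimensions are
 * illustrative assumptions, not Pennie's actual design.
 */
public class AffectiveTaskRecommender {

    /** Recommend the difficulty of the next task from performance and affect (all in 0..1). */
    public int nextDifficulty(int currentDifficulty, double performance,
                              double boredom, double anxiety) {
        // Poor performance driven by boredom: the task was too easy, so step up.
        if (performance < 0.5 && boredom > 0.7) {
            return currentDifficulty + 1;
        }
        // Good performance but high anxiety: repeat this level to build confidence.
        if (performance >= 0.8 && anxiety > 0.7) {
            return currentDifficulty;
        }
        // Good, confident performance: move on to something harder.
        if (performance >= 0.8) {
            return currentDifficulty + 1;
        }
        // Otherwise stay put, or ease off if performance is very poor.
        return Math.max(1, currentDifficulty - (performance < 0.3 ? 1 : 0));
    }
}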

Parallax and Videoconference

One of the problems with videoconferencing is that it feels like the other person is far away. Part of the reason is that people on video don't make and break eye contact the way they would in person. Another part is that when you move around in the environment, everything around you shifts (for example, if you lean to the left, you'll see more of that side of your monitor), but the image on the screen doesn't. I'm working on a project to help fix both of those things, to see what I can do to make videoconferencing feel more natural.
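Here's a rough sketch of the parallax half of the problem, assuming simple pinhole geometry; it's only an illustration of the math, not the patented system's rendering code.

/**
 * A rough sketch of motion-parallax correction: given the local viewer's
 * tracked head offset, compute how far a layer of the remote scene at a
 * given virtual depth should shift on screen so the display behaves like
 * a window. Distances are in the same units (e.g., cm).
 */
public final class ParallaxShift {

    /**
     * @param headOffset lateral head displacement from screen center
     * @param viewerDist distance from the viewer to the screen
     * @param layerDepth virtual depth of the scene layer behind the screen
     * @return lateral shift to apply to that layer's on-screen image
     */
    public static double layerShift(double headOffset, double viewerDist, double layerDepth) {
        // Pinhole geometry: a point at depth d behind a screen viewed from
        // distance D projects with offset h * d / (D + d) when the head moves
        // laterally by h, so nearer layers shift less than farther ones.
        return headOffset * layerDepth / (viewerDist + layerDepth);
    }

    public static void main(String[] args) {
        // Leaning 10 cm to the left from 60 cm away shifts a layer 40 cm
        // "behind" the screen by 4 cm, revealing a bit more of that side.
        System.out.println(layerShift(-10.0, 60.0, 40.0));
    }
}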

The paper is also available. For this experiment, I'm collaborating with Dr. Steven Boker (my advisor) and Jeffrey R. Spies. The system is patented, thanks to UVa Innovation, but you should be able to get a free research license.

Separation of Speech and Affect

Have you ever come across a photo of yourself where the expression on your face is something completely awkward and strange--like you couldn't possibly have ever made that expression yourself? I have. What's happening is that there are speech movements and expression movements that are mapped over top of each other. I think that in conversation or in watching video, we filter the faster speech movements out of the slower emotional expressions, so we never see the combination, really. But in a still frame, you can't see the speeds of things, so you can't separate them. My dissertation is focused on a technique for separating emotional expression and speech movements from facial movement data. I'm not interested in cleaning up pictures, of course. I'm interested in being able to analyze (and classify) emotion separately in conversational video. But it might help with the picture thing, too.
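As a crude illustration of the timescale intuition (not the dissertation's actual technique), one could split a single facial-landmark signal into a slow, expression-like component and a fast, speech-like residual with a simple moving-average filter:

import java.util.Arrays;

/**
 * A crude illustration of the timescale idea above, not the actual
 * dissertation method: split one facial-landmark coordinate over time into
 * a slow component (expression-like) and a fast residual (speech-like)
 * using a centered moving average as a low-pass filter.
 */
public class TimescaleSplit {

    /** Centered moving average with the given half-window, edges clamped. */
    public static double[] slowComponent(double[] signal, int halfWindow) {
        double[] slow = new double[signal.length];
        for (int i = 0; i < signal.length; i++) {
            int lo = Math.max(0, i - halfWindow);
            int hi = Math.min(signal.length - 1, i + halfWindow);
            double sum = 0.0;
            for (int j = lo; j <= hi; j++) {
                sum += signal[j];
            }
            slow[i] = sum / (hi - lo + 1);
        }
        return slow;
    }

    public static void main(String[] args) {
        // One landmark's vertical position over a few frames (made-up numbers).
        double[] lipY = {0.0, 0.8, 0.1, 0.9, 0.2, 1.0, 0.3};
        double[] slow = slowComponent(lipY, 2);   // expression-like drift
        double[] fast = new double[lipY.length];  // speech-like wiggle
        for (int i = 0; i < lipY.length; i++) {
            fast[i] = lipY[i] - slow[i];
        }
        System.out.println("slow: " + Arrays.toString(slow));
        System.out.println("fast: " + Arrays.toString(fast));
    }
}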

Synchronization in Dance

To test out some of the techniques we're using for the conversation studies, the lab first looked at dance. Dance is like conversation, but much more predictable: the semantic structure is simpler and the rhythm is more regular. I've done some work on the influences of ambiguity on the dynamics of dance.

Incremental Natural Language Processor for ND's Rudy the Robot

Knock knock.
Who's there?
The Interrupting Cow.
The Interru-
Moo.

Most modern natural language processing engines take a serial, sequential approach, in which an utterance is sent through several passes, one after another. First, it's interpreted phonetically and turned into words. Then it's parsed syntactically, and the various bindings and sentence structures are extracted. Finally, it's understood semantically, and the meaning is made clear.

The problem with this approach is that you have to wait until the person has finished their sentence before you can even start to process it, much less understand it. So how do people manage to tell such wonderful things as the Interrupting Cow joke?

I spent much of my master's working with Dr. Matthias Scheutz (now heading the Indiana University HRI Lab) and the Notre Dame Artificial Intelligence and Robotics Laboratory to build an incremental natural language (especially speech) processing engine. The result was a system called Tide: A Timing Sensitive Incremental Discourse Engine. It became the basis of my master's thesis.

It's done mostly in Java.
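As a toy illustration of what "incremental" buys you (this is not Tide's actual API), a listener can consume one recognized word at a time and respond as soon as it has heard enough, rather than waiting for the sentence to end:

import java.util.ArrayList;
import java.util.List;

/**
 * A toy illustration of incremental processing, not Tide's actual API:
 * instead of waiting for a complete utterance, the listener consumes one
 * word at a time and can commit to a response as soon as it has heard enough.
 */
public class IncrementalListener {

    private final List<String> wordsSoFar = new ArrayList<>();

    /** Feed one recognized word; return a response as soon as one is warranted. */
    public String hear(String word) {
        wordsSoFar.add(word.toLowerCase());
        // A real incremental parser would update syntax and semantics here;
        // this toy version just recognizes a partial phrase early.
        if (String.join(" ", wordsSoFar).startsWith("the interrupting cow")) {
            return "Moo.";  // no need to wait for the rest of the sentence
        }
        return null;  // keep listening
    }

    public static void main(String[] args) {
        IncrementalListener listener = new IncrementalListener();
        for (String word : new String[]{"The", "Interrupting", "Cow", "wh-"}) {
            String reply = listener.hear(word);
            if (reply != null) {
                System.out.println(reply);  // "Moo." arrives before "who?" ever does
                break;
            }
        }
    }
}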

TIGNA

One major part of my research as an undergraduate was devoted to TIGNA: The Infamous Gender Neutral Avatar.

TIGNA is a part of ongoing research about nonverbal communication and postural control with Dr. Steven Boker at the University of Notre Dame Laboratory for the Quantitative Investigation of Human Dynamics (now the Human Dynamics Lab at the University of Virginia).

The primary purpose of TIGNA, initially, was to reconstruct people's joint angles and body orientations from the motion-tracking apparatuses in the Bokerlab, so that we could represent people's motion in a subject-independent format and provide an easy means of visualization.

More recently, joint angle extraction has been moved into a separate utility, TIJAE: The Infamous Joint Angle Extractor (also mine), and TIGNA's secondary purpose, visualization, has become its primary goal.

TIGNA uses data captured from eight points on the human body to construct its avatars, using a variety of algorithms and ratios gathered primarily from psychology, art, computer simulation, and robotic inverse kinematics. The end result is one or two computer-generated avatars that match the size and shape of the person being captured and mimic his or her behavior exactly. TIGNA also features an option to record motion-capture data and play it back later from a file.

TIGNA is written in C and OpenGL.
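For a sense of the underlying geometry, here's a minimal sketch of joint-angle recovery from three tracked markers. It's in Java for consistency with the other examples on this page, and the specific marker layout is just an illustration, not TIGNA's or TIJAE's actual code.

/**
 * A minimal sketch of the kind of geometry a joint-angle extractor performs:
 * recover the elbow angle from three tracked markers.
 */
public class JointAngle {

    /** Angle in degrees at vertex B formed by markers A-B-C, each given as {x, y, z}. */
    public static double angleAt(double[] a, double[] b, double[] c) {
        double[] u = {a[0] - b[0], a[1] - b[1], a[2] - b[2]};  // vector B -> A
        double[] v = {c[0] - b[0], c[1] - b[1], c[2] - b[2]};  // vector B -> C
        double dot = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
        double lenU = Math.sqrt(u[0] * u[0] + u[1] * u[1] + u[2] * u[2]);
        double lenV = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return Math.toDegrees(Math.acos(dot / (lenU * lenV)));
    }

    public static void main(String[] args) {
        // Shoulder, elbow, and wrist markers (made-up coordinates, in meters).
        double[] shoulder = {0.0, 1.4, 0.0};
        double[] elbow    = {0.3, 1.4, 0.0};
        double[] wrist    = {0.3, 1.1, 0.0};
        System.out.println(angleAt(shoulder, elbow, wrist));  // approximately 90 degrees
    }
}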

Moving Room Control

A more recent addition to my projects was the moving room control system. Also part of the postural control studies, the moving room was constructed out of oak, steel, and foam. The room is approximately five feet square by seven feet tall, just large enough for a single person to stand comfortably inside.

The control system consists of a single PC running LabVIEW software, an interface box, and the two servo motors that control the room.

The software again interfaces with the motion-capture apparatuses in the room, and is designed to capture motion control codes and motion-capture data in real time. While it does this, it can control the motion of the room either independently of the motion of the individual inside, or in response to it.

The software is built for quick experiment design and fitted with a GUI that allows a series of experiments to be run in succession in any of several counterbalanced orders. It also provides the experimenters with several key items: the ability to begin each trial on command, a quick display showing the status of the current trial, any instructions that need to be given to the participant during the trial, and the ubiquitous abort switch should anything go wrong.

The control system software is written in LabVIEW.
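Since LabVIEW programs are graphical, here's a small Java illustration of one way to generate the counterbalanced orders mentioned above, using a cyclic Latin square; the actual system's counterbalancing scheme may well differ.

/**
 * An illustration of cyclic Latin-square counterbalancing: each condition
 * appears exactly once in each serial position across the set of orders.
 */
public class Counterbalance {

    /** Returns n orders of n conditions; order i is the cyclic rotation starting at condition i. */
    public static int[][] latinSquare(int n) {
        int[][] orders = new int[n][n];
        for (int row = 0; row < n; row++) {
            for (int col = 0; col < n; col++) {
                orders[row][col] = (row + col) % n;
            }
        }
        return orders;
    }

    public static void main(String[] args) {
        // E.g., four (hypothetical) room-motion conditions -> four counterbalanced session orders.
        for (int[] order : latinSquare(4)) {
            System.out.println(java.util.Arrays.toString(order));
        }
    }
}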

Trilobot Control System

During my one summer of undergraduate research, I worked with Dr. Matthias Scheutz in the AI/Robotics Laboratory at Notre Dame.

Our primary robot at the time was the Trilobot, a small robot with limited sensory abilities.

As part of a class the preceding semester, I worked on several teams, each designing a discrete behavioral module. Each module was built to control a discrete behavior, such as exploring and mapping, searching for "food" (represented by objects placed around the experiment area), finding and tending its "children" (represented by autonomous LEGO Mindstorms robots with very simple control systems), or returning home to sleep.

Once the behavioral modules were completed (at the end of the semester), my undergraduate research project began. The goal was simple: work out a method to interface cleanly between the several modules.

The final project is an affective subsumption architecture; that is, the control system uses a rough estimation of affective (emotional) states to determine which single behavior module will control the motors at any given time.

Several affective states, such as worry (for the children) and boredom (when not doing anything), were recorded as values in memory indicating the strength of that affect. Depending on the relative strengths of the various affects, a module would be chosen to control the motors. Each module would satiate or enhance those affective states depending on the Trilobot's success at achieving its task and the goals it was able to complete.

For example, if the Trilobot were to become worried about its children, the child-finding module would gain control of the motors, and the Trilobot would begin searching for its children. As it searched, its worry would continue to increase. Once it found its children, its worry would decrease, and once it had tended them appropriately (bringing them back to the "home" area of the map), its worry would be almost entirely diminished.

A perseverance factor allowed the Trilobot to finish tasks where it was showing success, so that unless it were, for example, immensely hungry, it would finish tending its children before moving on to another task.

The Trilobot's perseverance was not absolute: frustration with a task that brought repeated failure would cause the Trilobot to change tasks, unless extreme levels of an affect kept it on task.

For example, if the Trilobot were only a little hungry, and had a long and fruitless search for food, it would become frustrated with food-finding, and might instead seek to expand its map, or return home and sleep.

An interesting side effect of this system was breakdown. When several affective levels were relatively high, with no one far outdistancing the others, and the Trilobot was unable to see any progress in satiating any of its needs, it could become so frustrated that it would be unable to settle on a single task. Instead it would "break down": unable to work toward any task, because it was too frustrated and had too many important things to do.
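Here is a simplified sketch of the arbitration idea, with illustrative names and constants rather than the original Trilobot code: each behavior module is keyed by the affect that drives it, and the module whose perseverance-weighted drive is strongest gets the motors.

import java.util.Map;

/**
 * A simplified sketch of affect-based arbitration; the names and constants
 * are illustrative, not the original Trilobot implementation.
 */
public class AffectiveArbiter {

    private String currentModule = null;

    // Illustrative constants: how much the currently running module is favored,
    // and how much accumulated frustration erodes that advantage.
    private static final double PERSEVERANCE_BONUS = 0.2;
    private static final double FRUSTRATION_PENALTY = 0.3;

    /**
     * @param affects     current strength of each affect, e.g. "worry" -> 0.8
     * @param frustration frustration with the currently running module, in 0..1
     * @return the name of the module that should control the motors next
     */
    public String selectModule(Map<String, Double> affects, double frustration) {
        String best = null;
        double bestDrive = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Double> e : affects.entrySet()) {
            double drive = e.getValue();
            if (e.getKey().equals(currentModule)) {
                // Perseverance keeps a succeeding module in charge, but
                // repeated failure (frustration) wears that advantage away.
                drive += PERSEVERANCE_BONUS - FRUSTRATION_PENALTY * frustration;
            }
            if (drive > bestDrive) {
                bestDrive = drive;
                best = e.getKey();
            }
        }
        currentModule = best;
        return best;
    }

    public static void main(String[] args) {
        AffectiveArbiter arbiter = new AffectiveArbiter();
        // Mildly hungry but very worried: the worry-driven (child-finding) module wins.
        System.out.println(arbiter.selectModule(
                Map.of("hunger", 0.3, "worry", 0.8, "boredom", 0.1), 0.0));
    }
}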

While unintentional, this effect showed some biological and introspective accuracy.

The Trilobot Control System and each of its modules were written in Java.

Contact Info

Timothy R. Brick
Penn State University
231 Health & Human Development
University Park, PA 16802, USA
Office: (+1) (814) 865-4868
email: tbrick at psu dot edu