Timothy R. Brick, PhD
Assistant Professor, Penn State University
Department of Human Development and Family Studies
My lab focuses on modeling and intervening in the dynamical systems that underlie our day-to-day lives. We use passive data-collection methods like wearables and computer vision, along with real-time modeling and intervention, to understand and shape the way people interact with their environment and with each other. We have a special focus on dyadic situations, like conversations with other people or interactions with agents like educational co-robots; on affect/emotion and the ways it can lead to better learning or smoother interaction; and on the broader changes in measures like stress, craving, and emotion that occur outside the lab in day-to-day life, especially in cases like addiction, autism, or PTSD. Specifically, I'm interested in:
Facial expression and Rapport
I'm interested in the way that people work together in conversation. We do this all the time; people lead and follow each other. One person will smile or nod or frown or scowl, and the other person will respond to that. I'm interested in learning how that works, why it works, and (importantly) what's going wrong when it doesn't work.
To that end, I'm involved in several research projects in which participants hold an unstructured videoconference conversation with someone they don't know. From there, we use image-processing techniques to measure the amount of synchronization and symmetry in the conversation.
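One common way to quantify synchrony between two behavioral signals (a sketch of the general idea, not necessarily the exact pipeline we use) is windowed cross-correlation: slide a window along the two time series, correlate one against lagged copies of the other, and record the peak correlation and the lag at which it occurs. The sign of the peak lag suggests who is leading whom in that stretch of the conversation.

```python
import numpy as np

def windowed_xcorr(x, y, win=30, max_lag=10):
    """For each window, find the lag (in samples) at which x and y
    correlate most strongly, and the correlation at that lag.
    A positive lag means y trails x (x is "leading")."""
    peaks = []
    for start in range(max_lag, len(x) - win - max_lag, win):
        xs = x[start:start + win]
        best_r, best_lag = 0.0, 0
        for lag in range(-max_lag, max_lag + 1):
            ys = y[start + lag:start + lag + win]
            r = np.corrcoef(xs, ys)[0, 1]
            if abs(r) > abs(best_r):
                best_r, best_lag = r, lag
        peaks.append((best_r, best_lag))
    return peaks

# Toy example: y is x delayed by 3 samples, so x leads y.
t = np.arange(300)
x = np.sin(2 * np.pi * t / 40)
y = np.roll(x, 3)
lags = [lag for r, lag in windowed_xcorr(x, y)]
```

In real data the peak lag drifts over windows, which is exactly the interesting part: it traces the back-and-forth of leading and following over the course of a conversation.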
The Rapport Project is an extension of this work to examine how interactions work when they work well. We're using a variety of contexts, such as conversation studies, common-task studies, studies of robotics, and rating studies to determine what this thing we like to call "Rapport" means in real life. Some conversations go well. They just seem to flow right, and everybody seems to understand each other. Other conversations seem awkward, stilted, and strange, and seem to have more misunderstandings. The Rapport Project is about learning why. My primary collaborator on this project is student Allison Gray.
One of the fun parts about this whole project is that we have the technology to modify the video stream in real time. We can change things like the apparent sex and apparent identity of the other participant. And the conversation can go on, with neither person realizing the modification is happening.
I've also been studying the structure of emotional labels for facial expression. In an ongoing study with Angela Staples (now at IU) and Steven Boker (UVa), we're working to figure out what that structure is. It really looks like facial expression can't be easily understood without context. For example, we already showed in one paper that dynamics can help to identify facial actions. Now it looks like facial expression in conversation carries a lot of additional information: some about the person's internal state, but very much also about the dyadic context.
Wearables and Real-time intervention
Wearables seem to be the new hotness. Lots of people wear these little wristbands that tell them all kinds of things about themselves. But there's more to it than that. Wearables give us a window into our everyday physiology. They tell us something about the way we move, the way we respond to things, the way we react to stimuli in our environment. Sensors on a wearable can get us everything from heart rate to movement to body temperature, and along the way tell us something about things like our stress level, sleep quality, and emotional responsiveness.
The WEAR-IT project, in collaboration with Dr. Zita Oravecz, James Mundie, Dr. Saida Heshmati, Allison Gray, and Dr. Joshua Smyth, is about taking that to the next level. We are social creatures, and much of the way our physiology responds depends on the other folks in the room. So we're looking to find out if playing with your kids really does relax you, or if your heart really does skip a beat when that special someone walks into the room, and who it is in your life that helps you to calm down and focus. More than that, we're looking to intervene for people who have problems with stress: to detect when stress episodes are starting up, and provide helpful interventions in the actual moment.
Data, Analysis, and Privacy
With a background in computing, I'm very interested in data and I'm a bit paranoid about privacy. As a scientist, though, I ask people to trust me with data about them all the time. It's a lot of responsibility. With all this data from wearables and facial expression tracking, we can learn a whole lot about an individual.
Modern scientific practice requires that we share data with other scientists. But privacy concerns mean we need to keep this invasive data to ourselves, and not share it around. Current analytic practice means that the data are collected in one place, and we deal with the privacy concerns by not letting the data leave that place. But that's bad for science. Pitting privacy and scientific concerns against each other might be the wrong way to go about this.
The MID/DLE project is a proposal that aims to stop pitting science and privacy against each other. What if, instead of collecting data, we left our measurements in the care of the person we measured? Then they'd have access to their own data at all times, and the person who owned the data would be the person described in the data. So now your privacy is up to you. All we need is a privacy-preserving analysis method, and we'd be good to go. MID/DLE is the first step towards that method.
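To give a flavor of what "privacy-preserving analysis" can mean (this is a generic illustration of additive secret sharing, not the MID/DLE protocol itself), here is a sketch of how a group could compute an aggregate statistic, like a group total or mean, without any participant ever revealing their raw value. Each person splits their value into random shares that individually look like noise but sum back to the true value:

```python
import random

def secure_sum(values, modulus=2**31):
    """Compute the sum of private values via additive secret sharing.
    Each participant splits their value into n random shares; any single
    share (or any n-1 of them) reveals nothing about the owner's value."""
    n = len(values)
    all_shares = []
    for v in values:
        shares = [random.randrange(modulus) for _ in range(n - 1)]
        # Final share makes the shares sum to v (mod modulus).
        shares.append((v - sum(shares)) % modulus)
        all_shares.append(shares)
    # Aggregator j only ever sees share j from each participant...
    partials = [sum(all_shares[i][j] for i in range(n)) % modulus
                for j in range(n)]
    # ...yet the partial sums combine into the true total.
    return sum(partials) % modulus

daily_stress = [4, 7, 2, 9]            # stays on each person's device
group_mean = secure_sum(daily_stress) / len(daily_stress)
```

The point is conceptual: the analysis gets its answer while the raw measurements never leave their owners' hands.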
Data Mining, Simulation, and Statistical methods for behavioral science data
In order to analyze a lot of the data we get from things like facial expression and wearables, we need to be able to turn the stream of data we get into understandable information. This requires a wide range of know-how from the computer science literature, like computer vision models and data mining techniques, along with a good deal of substantive expertise from the human-behavior side of things and a lot of time-series modeling from engineering control theory and related fields.
I use models like Hidden Markov Models (HMMs), sequence learning techniques, and dynamical systems models to understand the processes at work, and we use simulation techniques to understand the limits and extremes of the models themselves. For example, Dr. Nilam Ram and I are building up an agent-based model of day-to-day emotion. My student Allison Hepworth and I have used association rules to examine the way that mothers use social media (e.g., Twitter, Facebook, Pinterest) to decide what and how to feed their infant children.
Pennie: Educational and Affective Co-robotics
Co-robots are robots that work side-by-side with humans, assisting them and adapting to their needs rather than operating as isolated entities. I'm collaborating with Dr. Conrad Tucker to develop Pennie, an affective educational co-robotic system.
Emotional states such as frustration and engagement play a constant part in our performance of everyday tasks, and a vital part in the learning process. A student in an engineering class might be doing well, but might feel so uncertain about their work or so overwhelmed by all of it that they move out of STEM into a different field. Pennie is designed to understand and adapt to the emotional state of the humans she interacts with. For example, Pennie might watch a student do a shoddy job on a task because they were just bored with it, and might recommend a more challenging problem to approach next. Or she might see someone perform a task well, but be very anxious about it, and might recommend a second task at about the same level of difficulty to let the student get comfortable with their work before moving up.
OpenMx: Free Statistical Software for Structural Equation Modeling
OpenMx is intended to be the statistical development platform for the next twenty years or more. It's designed to do a lot of the things that Structural Equation Modelers like to do. More than that, it's intended to be easy for upcoming researchers to use to develop and implement new methods.