Timothy R. Brick, PhD
Assistant Professor, Penn State University
Department of Human Development and Family Studies
About Me
My lab focuses on novel ways to collect and model data in order to understand and intervene in the dynamical systems that underlie our day-to-day lives. We use passive data collection methods like wearables and computer vision, along with real-time modeling and intervention, to understand and manipulate the way that people interact with their environment and with each other. We have a special focus on dyadic situations, like conversations with other people or interactions with agents like educational co-robots; on affect/emotion and the ways it can lead to better learning or smoother interaction; and on the broader changes in measures like stress, cravings, and emotion that occur outside the lab in day-to-day life, especially in cases like addiction, autism, or PTSD.
The eventual goal here is simple: use data and knowledge to help people to thrive.
Specifically, I'm interested in the research projects described below.
Research Projects
Wearables and Real-time Modeling
Wearables seem to be the new hotness. Lots of people wear these little wristbands that tell them all kinds of things about themselves. But there's more to it than that. Wearables give us a window into our everyday physiology: they tell us something about the way we move and the way we respond to the stimuli in our environment. Sensors on a wearable can capture everything from heart rate to movement to body temperature, and along the way tell us something about our stress level, sleep quality, and emotional responsiveness.
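As a toy illustration of the kind of derivation involved, here's a short R sketch that turns a stream of inter-beat intervals into RMSSD, a standard heart-rate-variability index often used as a rough stress proxy. The data and names here are simulated stand-ins, not real sensor output.

```r
# Toy sketch: derive RMSSD (root mean square of successive differences),
# a standard heart-rate-variability index, from inter-beat intervals.
# `ibi_ms` is simulated stand-in data, not a real wearable stream.
set.seed(42)
ibi_ms <- 800 + cumsum(rnorm(300, sd = 15))  # simulated inter-beat intervals (ms)

rmssd <- function(ibi) {
  diffs <- diff(ibi)   # beat-to-beat changes
  sqrt(mean(diffs^2))  # root mean square of those changes
}

rmssd(ibi_ms)
# A sustained drop in RMSSD is one common physiological marker of stress;
# higher values generally suggest more parasympathetic ("rest") activity.
```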
The Wear-IT project, in collaboration with Dr. Zita Oravecz's IMPEC lab, is about taking that to the next level. We are social creatures, and much of the way our physiology responds has to do with the other folks in the room. So we're looking to find out whether playing with your kids really does relax you, whether your heart really does skip a beat when that special someone walks into the room, and who it is in your life that helps you calm down and focus. More than that, we're looking to intervene for people who have problems with stress: to detect the moments when stress episodes are starting up, and to provide helpful interventions in the actual moment.
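The moment-to-moment detection logic can be sketched the same way. This is an illustrative toy, not the Wear-IT system; the function name, thresholds, and data are all made up for the example.

```r
# Toy sketch of in-the-moment detection (not Wear-IT's actual code):
# flag windows where a stress proxy drifts well below a person's baseline.
detect_stress_onset <- function(feature, baseline_mean, baseline_sd, k = 2) {
  # Flag any window more than k SDs below the person's resting baseline.
  which(feature < baseline_mean - k * baseline_sd)
}

# Example with made-up per-window RMSSD values:
windows <- c(42, 40, 41, 28, 25, 39, 38)
detect_stress_onset(windows, baseline_mean = 40, baseline_sd = 3)
# Windows 4 and 5 would trigger an in-the-moment intervention prompt.
```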
Data, Analysis, and Privacy
With a background in computing and a framework dedicated to learning about your life and activities, I'm very interested in data and I'm a bit paranoid about privacy. As a scientist, though, I ask people to trust me with data about them all the time. It's a lot of responsibility. With all this data from wearables and facial expression tracking, we can learn a whole lot about an individual.
Modern scientific practice requires that we share data with other scientists. But privacy concerns mean we need to keep such invasive data to ourselves and not pass it around. Current analytic practice deals with this by collecting the data in one place and never letting it leave. That's bad for science. Pitting privacy and scientific concerns against each other might be the wrong way to go about this.
The MIDDLE project is a proposal that aims to stop putting science and privacy in opposition. What if, instead of collecting data, we left our measurements in the care of the people we measured? They would have access to their own data at all times, and the person who owned the data would be the person described in the data. Your privacy would be up to you. All we need is a privacy-preserving analysis method, and MIDDLE is the first step toward that method.
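The core idea can be sketched in a few lines of R. In this toy version (an illustration, not the actual MIDDLE implementation), each participant's device evaluates the log-likelihood of its own data at a set of candidate parameter values, and only that single number travels back; the analyst sums the contributions and optimizes as usual.

```r
# Toy sketch of distributed likelihood estimation: raw data stay on each
# participant's device; only a log-likelihood value is ever shared.
local_loglik <- function(params, y) {
  # Runs on the participant's device; y never leaves it.
  sum(dnorm(y, mean = params[1], sd = exp(params[2]), log = TRUE))
}

# Simulated "devices", each holding private data:
devices <- list(rnorm(50, 5, 2), rnorm(80, 5, 2), rnorm(30, 5, 2))

total_negloglik <- function(params) {
  -sum(vapply(devices, function(y) local_loglik(params, y), numeric(1)))
}

fit <- optim(c(0, 0), total_negloglik)
c(mean = fit$par[1], sd = exp(fit$par[2]))  # recovered parameters
```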
The privacy-analytic segment of the MIDDLE project is being carried out in collaboration with Joshua Snoke (now at the RAND Corporation) and Dr. Aleksandra B. Slavković.
Data Mining, Simulation, and Statistical Methods for Behavioral Science Data
In order to analyze a lot of the data we get from things like facial expression and wearables, we need to be able to turn the raw data stream into understandable information. That requires a wide range of know-how from the computer science literature, like computer vision models and data mining techniques. It also needs a good deal of substantive expertise from the human behavior side of things, along with a number of time-series modeling techniques from engineering control theory and related fields.
I use models like Hidden Markov Models (HMMs), sequence learning techniques, and dynamical systems models to understand the processes at work, and we use simulation techniques to understand the limits and extremes of the models themselves. For example, Dr. Nilam Ram and I are building an agent-based model of day-to-day emotion. My student Allison Hepworth and I have used association rules to understand how mothers use social media (e.g. Twitter, Facebook, Pinterest) to decide what and how to feed their infant children.
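To give a flavor of the HMM piece, here's a minimal sketch using the depmixS4 R package (one common choice for this) to recover two latent regimes from a simulated feature stream. The data and setup are illustrative only, not one of our actual analyses.

```r
# Minimal sketch: fit a two-state Gaussian hidden Markov model to a
# simulated univariate feature stream using the depmixS4 package.
library(depmixS4)

set.seed(1)
y  <- c(rnorm(100, mean = 0), rnorm(100, mean = 3))  # two simulated regimes
df <- data.frame(y = y)

mod <- depmix(y ~ 1, data = df, nstates = 2, family = gaussian())
fm  <- fit(mod)            # EM estimation of states and transition matrix
head(posterior(fm)$state)  # most likely hidden state at each time point
```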
OpenMx: Free Statistical Software for Structural Equation Modeling
OpenMx is intended to be the statistical development platform for the next twenty years or more. It's designed to do a lot of the things that Structural Equation Modelers like to do. More than that, it's intended to make it easy for upcoming researchers to develop and implement new methods. I've used it to create tools to improve model understanding, enable many-multilevel SEM, and expand regularization approaches in SEM.
I'm one of the primary developers of OpenMx, and was the primary developer of its original computational kernel. Along with Steven Boker, Michael Neale, Joshua Pritikin, Rob Kirkpatrick, Michael Hunter, Ryne Estabrook, Tim Bates, and Hermine Maes, I'm still a member of the core development team.
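To give a flavor of the interface, here's the classic one-factor example, adapted from the OpenMx documentation, using the demoOneFactor dataset that ships with the package.

```r
# One common-factor model in OpenMx's path-style (RAM) interface,
# fit to the demoOneFactor dataset included with the package.
library(OpenMx)
data(demoOneFactor)
manifests <- names(demoOneFactor)

model <- mxModel("OneFactor", type = "RAM",
  manifestVars = manifests,
  latentVars   = "G",
  mxPath(from = "G", to = manifests, values = 0.8),          # factor loadings
  mxPath(from = manifests, arrows = 2, values = 1),          # residual variances
  mxPath(from = "G", arrows = 2, free = FALSE, values = 1),  # fix factor variance
  mxPath(from = "one", to = manifests),                      # manifest means
  mxData(observed = demoOneFactor, type = "raw"))

fit <- mxRun(model)
summary(fit)
```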
Facial Expression and Rapport
I'm interested in the way that people work together in conversation. We do this all the time; people lead and follow each other. One person will smile or nod or frown or scowl, and the other person will respond to that. I'm interested in learning how that works, why it works, and (importantly) what's going wrong when it doesn't work.
To that end, I'm involved in several research projects in which participants have an unstructured videoconference conversation with someone they don't know. From there, we use image-processing technologies to measure the amount of synchronization and symmetry in the conversation. NOTE: This project is mostly on hold right now due to construction delays in building up the lab to do it, and now also because of COVID-19. I'm hoping to kick it back off as soon as construction completes; hopefully we'll be able to make the jump to VR at that point. -tb
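The core measure is easy to sketch, though. Here's a simplified windowed cross-correlation between two expression-intensity series, one per partner, using simulated signals in place of the computer-vision output. It's an illustration of the idea, not the production pipeline.

```r
# Simplified sketch: windowed cross-correlation between two simulated
# smile-intensity series. High correlation at a nonzero lag suggests
# one partner leading and the other following.
windowed_xcorr <- function(a, b, width = 30, max_lag = 10) {
  starts <- seq(1, length(a) - width - max_lag)
  sapply(starts, function(s) {
    w <- s:(s + width - 1)
    # Best correlation over lags of b relative to a within this window
    max(sapply(0:max_lag, function(lag) cor(a[w], b[w + lag])))
  })
}

set.seed(7)
p1 <- sin(seq(0, 12, length.out = 300)) + rnorm(300, sd = 0.3)  # simulated signal
p2 <- c(rep(0, 5), p1[1:295]) + rnorm(300, sd = 0.3)            # partner lags by 5 frames
summary(windowed_xcorr(p1, p2))  # mostly high: strong lagged synchrony
```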
The Rapport Project is an extension of this work to examine how interactions work when they work well. We're using a variety of contexts, such as conversation studies, common-task studies, robotics studies, and rating studies, to determine what this thing we like to call "Rapport" means in real life. Some conversations go well: they just seem to flow, and everybody seems to understand each other. Other conversations seem awkward, stilted, and strange, and seem to have more misunderstandings. The Rapport Project is about learning why.
One of the fun parts about this whole project is that we have the technology to modify the video stream in real time. We can change things like the apparent sex and apparent identity of the other participant. And the conversation can go on, with neither person realizing the modification is happening.
I've also been studying the structure of emotional labels for facial expression. In an ongoing study with Angela Staples (now at EMU) and Steven Boker (at UVa), we're working to figure out what that structure is. It really looks like facial expression can't easily be understood without context. For example, we've already shown in one paper that dynamics can help to identify facial actions. Now it looks like facial expression in conversation carries a lot of additional information that's about the person's internal state, but very much also about the dyadic context.