Grad Students/Postdocs

Henny Admoni

PhD Candidate

Social Robotics Lab

Yale Department of Computer Science

My research seeks to improve human-robot interaction by computationally modeling the complex, dynamic nonverbal behaviors that occur during collaborative interactions. For example, when collaborating with a partner, it is critical to be able to recognize the location of that partner's attention as well as to direct the partner's attention to objects of interest. People's attention can be influenced by a variety of factors, including bottom-up visual saliency features such as color and orientation, top-down contextual features such as a verbal utterance about the task at hand, and nonverbal cues such as gaze and pointing. My research investigates the influence of these features in human-human collaborations by deriving computational models based on empirical data of dyadic interactions. These models can both interpret human nonverbal behaviors and generate robot behaviors that are effective, natural, and communicative. My work unifies attention research from psychology with machine learning techniques from computer science to develop embodied, communicative social robots for collaborative tasks.

Laura Boccanfuso, Ph.D.

Postdoctoral Associate

Technology and Innovation Laboratory (TIL)

Yale Child Study Center

I am interested in employing socially assistive robots to improve therapeutic outcomes for children with autism spectrum disorder (ASD) and exploring novel uses for sensors and technologies that advance the utility, safety and acceptance of robots working in close proximity to humans.  I think developing robots that are useful and sufficiently capable to deploy alongside humans is both technically challenging and highly rewarding.  I’m currently working on a novel, robot-assisted approach for measuring differences in emotion response and emotion recognition in very young children with ASD. By using non-humanoid robots to enact simple social scenarios, we eliminate the typical but complex social cues often used to teach the recognition of affect such as facial expression, body posture and speech.  Instead, we focus on the natural cause-and-effect construct of human interactions to emphasize the importance of less overt context cues in the proper interpretation of an individual’s affective state.

Caitlyn Clabaugh

PhD Candidate

Interaction Lab

University of Southern California

I am a Computer Science PhD student at the University of Southern California (USC), co-advised by Prof. Maja J Matarić and Prof. Fei Sha. My research focuses on the application of machine learning to enable long-term goal adaptation and personalization in Socially Assistive Robotics. Specifically, I am interested in differentiated STEM education through the personalization of SAR tutors' teaching styles over multiple, repeated interactions. Since receiving my B.A. in Computer Science from Bryn Mawr College in May 2013, I have designed, implemented, and piloted a SAR system to teach preschoolers number concepts. In addition to my ongoing research, I remain deeply involved in STEM outreach. I have participated in various K-12 assemblies on Socially Assistive Robotics and given invited talks on women in robotics at the national nonprofit Girls Who Code. I also co-organized and led a workshop on Expressive Robotics with an international audience at the Global Conference on Educational Robotics.

Goren Gordon

Postdoctoral Fellow

Personal Robots Group

MIT Media Lab

I am a postdoc at the MIT Media Lab in the Personal Robots Group with Dr. Cynthia Breazeal. My main research area is curiosity. In my (second) PhD, in computational neuroscience, I developed a mathematical model of curiosity, analyzed humans' and rodents' behavior with respect to that model, and implemented it in a very simple robot. I am currently conducting studies to assess and promote children's curiosity by having them interact and play with social robots that behave like curious children. I am also developing information-theoretic algorithms to optimally assess children's reading skills using the same curiosity and active-learning concepts, in order to personalize the interaction and curriculum to each specific child. My future research goal is to integrate the curiosity models I have developed into the behavior of social robots, such that they are autonomous, engaging, and develop along with the child.

Michal (Michelle) Gordon

Postdoctoral Associate

Personal Robots Group

MIT Media Lab

I am a postdoc at the MIT Media Lab, in the Personal Robots Group working with Dr. Cynthia Breazeal. I work in the area of computer programming languages and natural interfaces to programming. During my PhD I developed a natural language interface for scenario-based programming. My current research focuses on natural interfaces for programming social robots, aimed at preschool children. I am developing a tangible rule-based interface for teaching programming concepts to young children in the context of social interaction. My expertise also includes program visualization and computer vision. My future research will extend the tangible programming interface to include concepts of agency and to connect it with various autonomous sensing capabilities, such as semantic face and gesture recognition.

Jillian Greczek

PhD Candidate

Interaction Lab

University of Southern California

I am a 4th-year PhD student in the USC Interaction Lab. My thesis work is in creating a Self-Management Coach to promote patient autonomy and provide robot-mediated interventions to help individuals with chronic conditions incorporate new health behaviors into their everyday lives.  My work involves the modeling, implementation, and testing of curricular and emotional feedback using a physically embodied robot with real-world populations.

Corina Grigore

PhD Candidate

Social Robotics Lab

Yale Department of Computer Science

I am interested in developing adaptive human-robot interaction systems that interact with users over long periods of time in order to keep them engaged in high levels of physical activity. The key question I am researching within this context is how the system can create and maintain a user model of the person, based on physical activity data obtained from wearable sensors (such as wristband devices), that allows for the interpretation of why and how the user succeeded or failed at achieving a physical activity goal. In particular, my research concerns interpreting this data in the context of how much physical activity early teenagers engage in daily, in order to keep them on track for recommended levels of physical activity.

Brad Hayes

PhD Candidate

Social Robotics Lab

Yale Department of Computer Science

I am a PhD candidate in the Department of Computer Science at Yale University, where I am a member of the Social Robotics Laboratory advised by Professor Brian Scassellati. My dissertation research focuses on building autonomous robot collaborators that are capable of safe, adaptive, natural teamwork with humans. This work combines a variety of fields within Human-Robot Interaction, including learning from demonstration, multi-agent coordination, planning under uncertainty, and task and motion planning. It includes developing methods for autonomously generating hierarchical task representations and learning abstractions, as well as building algorithms that allow an agent to learn how to support teammates' activities, reducing their cognitive load during live task execution.

Jacqueline Kory

PhD Candidate

Personal Robots Group

MIT Media Lab

I am a Ph.D. student in Cynthia Breazeal's Personal Robots Group at the MIT Media Lab. My research focuses on educational technologies: how robots, tablet games, and other tech can help engage and support children in learning, reading, talking, and emoting. I ask questions about how and why people interact with robots as social others, how context affects interactions, and what implications these things may have for the use of technology in education or in society more generally. Motivating my work are broader questions about the nature of personal identity, how we understand and perceive others, and the importance of empathy. I was awarded an NSF Graduate Research Fellowship to support my research at the Media Lab. I hold a BA in cognitive science from Vassar College.

Jin Joo Lee

PhD Candidate

Personal Robots Group

MIT Media Lab

I am a graduate researcher at the MIT Media Lab in the Personal Robots Group with Dr. Cynthia Breazeal. My research interests include human-robot interaction, machine learning, and nonverbal behavior, especially modeling nonverbal communication using machine learning algorithms for socially intelligent robots. My research goal is to develop interactive robots capable of engaging in meaningful nonverbal communication, which has been argued to be at the core of social intelligence. By combining theories from human social psychology and methods from machine learning, my dissertation work investigates how to computationally model nonverbal inference as a process of belief attribution and nonverbal production as a process of belief shaping.

Nik Martelaro

Graduate Student

Stanford University

I am a PhD student at Stanford's Center for Design Research. My research focuses on interaction design for human-machine systems. I am interested in how robots can assist students learning hands-on engineering topics such as circuit building. Through emotion identification and regulation, this work aims to help students manage their emotions during learning activities to improve student experience and learning outcomes. In addition, by designing meaningful interactions between students and robots, I hope to promote student interest and creativity around engineering topics.

Ross Mead

PhD Candidate

Interaction Lab

University of Southern California

I am a Computer Science PhD student, a former NSF Graduate Research Fellow, and a former fellow of the USC Body Engineering Los Angeles program (part of the NSF GK-12 initiative) at the University of Southern California (USC). My research focuses on the principled design and modeling of fundamental social behaviors (such as social spacing, eye gaze, gesturing, turn-taking, and other nonverbal social cues) that serve as building blocks to facilitate natural face-to-face human-robot interactions. For over a decade, I have been involved with robotics outreach programs, such as Botball and FIRST, serving as an international program instructor, regional coordinator, competition designer, event host, technical mentor, and seasoned competitor.  My goal is to use sociable robotics topics to increase interest and self-efficacy of K-12 students underrepresented in STEM, such as females, African-Americans, Latinos/Latinas, and Native Americans.

Ahsan Nawroj

PhD Candidate

Social Robotics Lab

Yale Department of Computer Science and Department of Mechanical Engineering

I am a graduate student in Mechanical Engineering and Computer Science. My primary interest is in modular reconfigurable robots and the computational study of their structure. At the Yale Social Robotics Lab, I have studied how robot behavior is interpreted by human beings in the presence of conflicting social cues. I also design and augment robotic test platforms, such as the MyKeepon robots, to study how humans interpret social cues.


Aditi Ramachandran

PhD Candidate

Social Robotics Lab

Yale Department of Computer Science

I am interested in leveraging social robots as one-on-one tutoring agents, with the goal of using reinforcement learning techniques to provide personalization within robot-child tutoring interactions. I would like to create robotic systems that adapt to individuals by making use of both the user’s affective reactions and progress through a learning task. I am working towards building an adaptive tutoring system that learns over time which actions to take for a given individual, based on these two types of feedback from the user. As every learner is different, creating engaging robots that personalize behavior to individual users can lead to more effective learning. I am primarily interested in how social robots can foster learning gains for children in a tutoring scenario in which each child practices math problems with the robot.

Elaine Short

PhD Candidate

Interaction Lab

University of Southern California

I am a Computer Science PhD student at the University of Southern California, working in the Interaction Lab with Maja Matarić.  I am an NSF Graduate Research Fellow and former USC Provost’s Fellow, and the recipient of the USC Viterbi School of Engineering Merit Award.  I have previously worked in the Yale Social Robotics Lab under Brian Scassellati on a rock-paper-scissors playing robot, as well as several studies examining the interaction between children with autism and socially assistive robots.  Additionally, I spent a summer visiting the Healthcare Robotics Lab at Georgia Tech, assisting with a study exploring older adults’ acceptance of an in-home assistive robot.  My research focuses on understanding the dynamics of child-robot social interactions in order to address children’s educational needs in the general education classroom.  I am especially interested in using socially assistive robotics as a tool for improving educational outcomes for children with autism and other developmental challenges.  My current research is on modeling interaction dynamics in free play with a social robot and multiple children.

Sam Spaulding

Graduate Student

Personal Robots Group

MIT Media Lab

I am a second-year graduate student in the Personal Robots Group at the MIT Media Lab, advised by Dr. Cynthia Breazeal. My research focuses on developing the technology for perceptive, artificially intelligent robotic tutors, capable of recreating the perceptual, emotive, and empathetic abilities of good human teachers. I received a BS in Computer Science from Yale University, where I worked on research to isolate and establish the importance of personalization and physical embodiment for robot tutors. At MIT, I am developing robots that can sense affective and emotional data, as well as algorithms for integrating that data into semantically useful student models.

Sarah Strohkorb

1st Year PhD Student

Social Robotics Lab

Yale Department of Computer Science

I am interested in developing personalized motivational strategies to maintain user engagement in long-term human-robot interactions and maximize learning gains. My goal is to build a robust user model that detects the user's knowledge, effort, and affect in order to determine an optimal, personalized motivational strategy. This model will augment our ability to teach skills that require long-term practice or maintenance, such as nutrition or a foreign language. I am currently working on a project to automatically detect the social dominance of children in group interactions. When one robot is interacting with multiple children, it is important that the robot ensures children with lower social dominance are engaged and included. Thus, the ability to detect social dominance will enable a robot to ensure a higher level of engagement in groups.

Katherine Tsui

Postdoctoral Associate

Social Robotics Lab

Yale Department of Computer Science

My research interests have always stemmed from the intersection of three domains: assistive technology, human-robot interaction, and user experience. I am passionate about using assistive robotic devices to increase the quality of life for people whose needs fall outside those of the general populace. I am currently investigating how the simultaneous acquisition of signed and spoken language in infants can be facilitated by "signing creatures." Is it possible for an anthropomorphic robot, a "signing creature," to also take on the role of an interactive communication partner? Perhaps yes, if it can engage with an infant by responding with on-topic signed and spoken language and demonstrating appropriate social cues (e.g., eye contact, gaze following) and facial expressions. Pairs of signing creatures might act out topic-related skits together for an infant to observe. Alternatively, as a co-viewer, one signing creature might mediate the other's infant-directed monologue. Signing creatures have the potential to provide infants with consistent exposure to and repetition of signs and, further, to keep pace with an infant's rate of language acquisition by incorporating new signs.