I am currently applying to PhD programs in the 2021-2022 admissions cycle to further my research interests in human-AI collaboration. I am also applying to industry internship programs to gain more applied AI/ML experience that will guide my pursuits in a prospective PhD program.

Status: I am graduating in December 2020, so I will be available for an internship between January 2021 and August or September 2021, when the next academic year begins. I am open to relocation or remote work depending on developments with COVID-19.

Interests: I am interested in the engineering side of industry AI work, the research side, or a hybrid of the two, in order to see how my research can be applied in practice. I aim to jointly leverage my software engineering experience from industry and my AI/ML research experience from academia.

Check out my CV for a more detailed account of my industry and academic experience, but here’s a quick summary:

Professional Industry Experience

  • Amazon | Lab126
    • Position: Software Development and Engineering Intern
    • Project: Designed, implemented, tested, and internally deployed a fully independent project for Amazon Alexa. This proof-of-concept project extended my team’s work on location-based services to a new business-oriented use case.
    • Programming language: Java
    • Date: Summer 2018
  • Northrop Grumman Corporation | Autonomous Systems
    • Position: Software Engineering Intern
    • Project: Developed robust, fail-safe software for flight-critical systems in autonomous aircraft. This work included running real-time hardware simulations of analog devices and regression testing of all new behaviors.
    • Programming languages: C++ and Lua
    • Date: Summer 2017

Academic AI/ML Research Experience

  • Undergraduate Research Assistant | Stanford University
    • Title: Learning Object Representations with Predicate Functions: Enabling Few-Shot Scene Graph Prediction
    • Labs: Vision and Learning Lab, Human-Computer Interaction Group
    • Project: Researched a novel deep learning model combining few-shot learning and scene graph prediction for the first time. The architecture, based on graph convolutions, CNNs, and MLPs, defines a relationship-oriented embedding space for objects in a scene.
    • Date: April 2018 – May 2019
  • Undergraduate Research Assistant | Stanford University
    • Title: HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models
    • Labs: Vision and Learning Lab, Human-Computer Interaction Group
    • Project: Researched a novel crowdsourcing framework to scalably and accurately evaluate human perception of generative ML models. The framework is more direct than automated proxies, and cheaper and more consistent than other human evaluation methods.
    • Date: December 2018 – May 2019
  • Graduate Research Assistant | Stanford University
    • Title: Improving Human-AI Collaboration by Quickly Adapting to Diverse Human Collaboration Preferences
    • Labs: Vision and Learning Lab, Human-Computer Interaction Group
    • Project: Leading ongoing research on a novel self-supervised framework that learns a diverse set of user-independent, task-agnostic collaborative sub-goals within an environment, enabling more feasible interpretation of and response to diverse user actions.
    • Date: June 2019 – Present