News

About me

Hi! My name is Victoria Zhanqi Zhang, 张展旗. I am a Ph.D. student in Computer Science at UCSD, supported by the HDSI Ph.D. Fellowship. I am co-advised by Dr. Mikio Aoi and Dr. Gal Mishne. Previously, I studied Computer Science and Electrical Engineering at Washington University in St. Louis. As an undergraduate researcher, I was advised by Dr. Carlos Ponce.

I am driven to apply deep learning models as theories of neural computation to answer long-standing questions in science. Are there natural learning principles in brain computation? How can we use these principles to build sophisticated AI? Using tools from computer science, neuroscience, and signal processing, I construct behavioral models to understand psychiatric disorders. During my research scientist internship at Meta Reality Labs, I worked on multi-modal representation learning for hand recognition with neural-input wristbands and glasses.

Here you can find my ongoing work and publications. You can also view my CV. In this blog, I document essays as well as learning notes on deep learning, computational neuroscience, hacking tricks, and everything school DIDN'T teach you.

I grew up in Jinan, China, and moved to Beijing during my teens. I began my undergrad in St. Louis, MO in 2016, and I now live in San Diego, CA for grad school. If you want to learn more about me, here is my story and some fragments of my journey.

In my free time, I enjoy traveling, painting, hiking, and surfing. I live with happy, free-flying birds: cockatiels Ashe and Pearl, and parakeet Kiwi. I DIY swings and castles for them and teach them songs and cool tricks. Art is a big part of my life. Here is an art portfolio with some of my watercolor and colored-pencil drawings.

Graduate School Research

    Mishne Lab and Aoi Lab, co-advised by Dr. Gal Mishne and Dr. Mikio Aoi

    Computer Science | University of California, San Diego

    This interdisciplinary research project aims to build a framework for answering questions in computational psychiatry. It uses probabilistic reasoning to quantify hallmark features of bipolar disorder. By combining our methods with neural activity dynamics, we can associate behaviors with neural recordings and gain insight into how information processing maps onto behavior, putting us one step closer to understanding the link between the brain and the mind.

Undergrad Research

  • Ponce Lab, advised by Dr. Carlos Ponce

    Neurobiology | Harvard Medical School

    Automated visual recognition has the potential to change many facets of society, from biomedical imaging to security and transportation. Convolutional neural networks (CNNs) are the best models of visual recognition, and while they show enormous promise, they are notably vulnerable to "black-box attacks": malicious inputs designed to cause the networks to make mistakes. Because CNNs share many properties with the brain, we can learn which kinds of attacks are particularly effective against the most resilient CNNs by crafting attacks that manipulate activity in the brain. The Ponce lab has shown that it is possible to use generative adversarial networks (GANs) to maximize the activity of individual neurons in the brain through the synthesis of artificial images. I explored which types of GANs best achieve this goal, in both macaque brains and convolutional neural networks.
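
    Below is a minimal sketch of the core idea: gradient ascent on a GAN latent code so that the generated image maximally drives one unit of a CNN. The generator here is a stand-in stub so the code runs on its own; the lab's actual experiments use pretrained image GANs, and for real neurons (which provide no gradients) a gradient-free optimizer such as CMA-ES would replace the gradient step. All names and hyperparameters are illustrative assumptions.

        import torch
        import torch.nn as nn
        from torchvision.models import resnet18

        # Stand-in generator: latent (1, 128) -> image (1, 3, 64, 64). A real run
        # would swap in a pretrained GAN generator with the same z -> image interface.
        G = nn.Sequential(
            nn.Linear(128, 3 * 64 * 64),
            nn.Tanh(),
            nn.Unflatten(1, (3, 64, 64)),
        )

        cnn = resnet18(weights="IMAGENET1K_V1").eval()
        unit = 42                                # arbitrary output unit to maximize

        z = torch.randn(1, 128, requires_grad=True)
        opt = torch.optim.Adam([z], lr=0.05)

        for step in range(200):
            opt.zero_grad()
            img = nn.functional.interpolate(G(z), size=224, mode="bilinear")
            activation = cnn(img)[0, unit]       # the "neuron's" response to G(z)
            (-activation).backward()             # ascend the unit's activation
            opt.step()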

  • Aravamuthan Lab, advised by Dr. Bhooma Aravamuthan

    Neurology | Washington University in St. Louis School of Medicine

    I aimed to develop an open-field, video-based animal pose tracking framework using computer vision. Supervised machine learning tools analyze videos of animal behavior efficiently, but they are limited by the operator's labeling accuracy. To reduce operator dependency, I developed an unsupervised model based on optical flow to automatically detect and label mouse behaviors. I showed that it is possible to differentiate between local movements (e.g., rearing), running, and stationary positions while matching human labeling accuracy. Additionally, I combined deep network-guided pose estimation (DeepLabCut) and optical flow to design a clinically feasible video-based dystonia identification tool.
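
    Here is a toy version of the optical-flow labeling step, using OpenCV's dense (Farneback) flow to assign each frame one of the three movement classes. The file name, thresholds, and decision rule are illustrative assumptions, not the study's actual values.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("mouse_video.mp4")   # hypothetical input video
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        labels = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude
            moving = mag > 0.5                      # pixels with appreciable motion
            spread = moving.mean()                  # fraction of the frame in motion
            if spread < 0.01:
                labels.append("stationary")         # almost nothing moves
            elif spread < 0.05:
                labels.append("local movement")     # motion confined to a small area
            else:
                labels.append("running")            # widespread motion in the frame
            prev_gray = gray
        cap.release()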

  • AIM Lab, advised by Dr. Shantanu Chakrabartty

    Electrical Engineering | Washington University in St. Louis

    I investigated sonification techniques for representing high-dimensional data, such as images, as sound. The goal is to understand the benefits of sonification compared to visual representations, especially in the context of human-in-the-loop systems.
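
    For intuition, here is a toy image sonification: scan a grayscale image column by column and map each column's vertical intensity profile onto a mixture of sine tones, with rows near the top sounding higher. The placeholder image, frequency range, and scan rate are illustrative assumptions, not the project's actual mapping.

        import numpy as np
        from scipy.io import wavfile

        sr = 22050                                  # audio sample rate (Hz)
        img = np.random.rand(64, 64)                # placeholder grayscale image in [0, 1]
        # Map row index to frequency: top rows -> high pitch, on a log scale.
        freqs = np.logspace(np.log10(200), np.log10(4000), img.shape[0])[::-1]

        samples_per_col = sr // 20                  # each column sounds for 50 ms
        chunks = []
        for col in img.T:                           # scan the image left to right
            t = np.arange(samples_per_col) / sr
            # Each row contributes a sine at its frequency, weighted by intensity.
            tone = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
            chunks.append(tone)
        audio = np.concatenate(chunks)
        audio /= np.abs(audio).max()                # normalize to [-1, 1]
        wavfile.write("sonified.wav", sr, (audio * 32767).astype(np.int16))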

Teaching

Contact

Feel free to email me or DM me on Twitter if you want to get in touch!

You can also leave me a message using the form below: