Chao Feng

I am a graduate student at the University of Michigan (UMich).

Email: chfeng at umich dot edu

Email  /  CV  /  Google Scholar  /  Github

Research

I'm interested in computer vision and multimodal learning.

GPS-to-3D: Lifting Tourist Photos to 3D Using 2D GPS-Conditioned Diffusion
Chao Feng, Ziyang Chen, Aleksander Holynski, Alexei A. Efros, Andrew Owens
In submission

We produce 3D reconstructions of landmarks from unordered collections of tourist photos using a GPS-conditioned diffusion model and score distillation sampling.

Binding Touch to Everything: Learning Unified Multimodal Tactile Representations
Fengyu Yang*, Chao Feng*, Ziyang Chen*, Hyoungseob Park, Daniel Wang, Yiming Dou,
Ziyao Zeng, Xien Chen, Rit Gangopadhyay, Andrew Owens, Alex Wong
CVPR, 2024
project page / paper

We introduce UniTouch, a unified tactile representation for vision-based tactile sensors, aligned with multiple modalities. We show that powerful models trained on other modalities (e.g., CLIP, LLMs) can now perform tactile sensing tasks zero-shot.

Vision-Flan: Scaling Human-Labeled Tasks in Visual Instruction Tuning
Zhiyang Xu, Chao Feng, Rulin Shao, Trevor Ashby, Ying Shen, Di Jin,
Yu Cheng, Qifan Wang, Lifu Huang
ACL, 2024 (Findings)
project page / paper

We construct Vision-Flan, the most diverse publicly available visual instruction tuning dataset to date.

Self-Supervised Video Forensics by Audio-Visual Anomaly Detection
Chao Feng, Ziyang Chen, Andrew Owens
CVPR, 2023 (Highlight)
project page / arXiv / code

We learn several feature sets in a self-supervised manner using an audio-visual synchronization task, then fit an autoregressive model on each feature set to detect anomalies for video forensics.

AVA-AVD: Audio-Visual Speaker Diarization in the Wild
Eric Zhongcong Xu, Zeyang Song, Satoshi Tsutsui, Chao Feng, Mang Ye, Mike Zheng Shou
ACM Multimedia, 2022
project page / arXiv / code

We create the AVA Audio-Visual Diarization (AVA-AVD) dataset to develop diarization methods for in-the-wild videos.
