Shen Yan

yanshen6 at msu dot edu

My name is Shen Yan (严珅). I am a PhD candidate in the Computer Science Department at Michigan State University, where I work on unsupervised learning, AutoML, and their applications. I am advised by Mi Zhang.

I received my M.S. in Computer Engineering from RWTH Aachen University, where I worked with Hermann Ney on face recognition and with Jens-Rainer Ohm on image compression. I hold a B.S. in Telecommunications Engineering from Xidian University.

GitHub / Google Scholar / Twitter / LinkedIn / CV


Research

My general interests lie in machine learning and computer vision. Currently, I focus on unsupervised learning and AutoML, which fit squarely within the nascent and fast-evolving research field of neural architecture search (NAS). My work covers the entire spectrum of research in this domain, from single- and multi-fidelity optimization methods to understanding architecture encodings and their clustering effects through theoretical analysis and empirical evaluation. Representative papers are highlighted.

NAS-Bench-x11 and the Power of Learning Curves
Shen Yan*, Colin White*, Yash Savani, Frank Hutter
NeurIPS, 2021  
arXiv / code / bibtex

A surrogate method for creating multi-fidelity NAS benchmarks, demonstrating the power of learning curve extrapolation.
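For a general flavor of learning curve extrapolation, here is a minimal sketch that fits a simple power law to the first few epochs of a partial learning curve and predicts a later epoch. This is an illustrative assumption, not the surrogate model used in NAS-Bench-x11; the function name and synthetic data are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def extrapolate_learning_curve(epochs_seen, accs_seen, target_epoch):
    """Fit acc(t) = a - b * t**(-c) to a partial learning curve and
    predict accuracy at a later epoch. A generic sketch of learning
    curve extrapolation, not NAS-Bench-x11's surrogate model."""
    def power_law(t, a, b, c):
        return a - b * np.power(t, -c)

    params, _ = curve_fit(power_law, epochs_seen, accs_seen,
                          p0=(1.0, 1.0, 0.5), maxfev=10000)
    return power_law(target_epoch, *params)

# Predict the epoch-100 accuracy from the first 10 epochs (synthetic curve).
t = np.arange(1, 11, dtype=float)
acc = 0.9 - 0.5 * t ** -0.7
print(extrapolate_learning_curve(t, acc, 100))
```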

CATE: Computation-aware Neural Architecture Encoding with Transformers
Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang
ICML, 2021 (Long Presentation)
video (17 min) / arXiv / code / bibtex / poster

Learning the many-to-one mapping that arises from modeling the computational relationships between architectures helps build a flatter performance landscape.

Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?
Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, Mi Zhang
NeurIPS, 2020  
video (3 min) / arXiv / code / bibtex / poster

Pre-training architecture representations without using accuracies better preserves the local structural relationships of neural architectures in the latent space.

MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution
Taojiannan Yang, Sijie Zhu, Chen Chen, Shen Yan, Mi Zhang, Andrew Willis
ECCV, 2020 (Oral)
video (10 min) / arXiv / code / bibtex

The proposed mutual learning between network width and input resolution significantly improves the accuracy-efficiency trade-off over slimmable networks.

Improve Unsupervised Domain Adaptation with Mixup Training
Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, Liu Ren
arXiv, 2020
arXiv / code / bibtex

We enforce training constraints across domains via the mixup formulation to directly improve adaptation performance on unlabeled target data.
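As a reference point, below is a minimal PyTorch sketch of the mixup formulation applied across domains. The function name, the use of pseudo-labels for target samples, and the alpha value are assumptions for illustration, not the paper's exact training procedure.

```python
import torch

def cross_domain_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, alpha=0.2):
    """Mix labeled source samples with pseudo-labeled target samples.

    Hypothetical sketch: the names and the pseudo-labeling choice are
    assumptions, not the method as published."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_src + (1.0 - lam) * x_tgt          # mixed inputs
    y_mix = lam * y_src + (1.0 - lam) * y_tgt_pseudo   # soft mixed labels
    return x_mix, y_mix
```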

HM-NAS: Efficient Neural Architecture Search via Hierarchical Masking
Shen Yan, Biyi Fang, Faen Zhang, Yu Zheng, Xiao Zeng, Hui Xu, Mi Zhang
ICCV Neural Architects Workshop, 2019 (Best Paper Nomination)
arXiv / bibtex

Highlights the importance of topology learning in differentiable NAS.

Deep Fisher Faces
Harald Hanselmann, Shen Yan, Hermann Ney
BMVC, 2017
bibtex

In this work, we extend the center (intra-class) loss with an inter-class loss reminiscent of Fisherfaces, a popular early approach to face recognition.
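For illustration, here is a minimal PyTorch sketch of a center (intra-class) loss combined with an inter-class term that pushes class centers apart. The hinge margin and the exact inter-class formulation are assumptions, not the paper's loss.

```python
import torch

def fisher_face_loss(features, labels, centers, margin=1.0):
    """Intra-class pull toward class centers plus inter-class push
    between centers. A sketch under assumed formulations."""
    # Intra-class: squared distance of each feature to its class center.
    intra = ((features - centers[labels]) ** 2).sum(dim=1).mean()
    # Inter-class: hinge on pairwise distances between distinct centers.
    dists = torch.cdist(centers, centers)
    off_diag = ~torch.eye(len(centers), dtype=torch.bool)
    inter = torch.clamp(margin - dists[off_diag], min=0).mean()
    return intra + inter
```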

Experience

Research Intern (Summer 2021), Student Researcher (Fall 2021)
Google Research, Mountain View, USA
Hosts: Xuehan Xiong, Zhichao Lu, Chen Sun, Cordelia Schmid

Research on giant video Transformers.

Research Intern, Spring 2021
Abacus.AI, San Francisco, USA
Host: Colin White

Research on multi-fidelity AutoML.

Applied Machine Learning Intern, Summer 2020
ByteDance Inc., Mountain View, USA
Hosts: Ming Chen, Youlong Cheng

Neural architecture search for large-scale advertising models on TPU Pods.

Research Intern, Summer 2019
Bosch Research, Sunnyvale, USA
Hosts: Huan Song, Liu Ren

Unsupervised domain adaptation with image and time-series data.

Research Intern, Summer 2017
eBay Research, Aachen, Germany
Hosts: Shahram Khadivi, Evgeny Matusov

Adapted neural machine translation to e-commerce domains; published as an oral presentation at IWSLT 2018.

Service

PC member, AutoML Workshop, ICML 2021

PC member, NAS Workshop, ICLR 2021

Reviewer, ICML 2020, 2021

Reviewer, NeurIPS 2020, 2021

Reviewer, ICLR 2021, 2022

Reviewer, CVPR 2021, 2022

Reviewer, ICCV 2021

Teaching Assistant, Kinect Programming (Bachelor course), Fall 2015


Awards

Top Reviewer, ICML 2020

4th Place, Google MicroNet Challenge, NeurIPS 2019

Best Paper Award Nominee, ICCV Neural Architects Workshop, 2019

World Finalist, Kaggle Data Science Game, 2016

Summer School Exchange Student, Tsinghua University, 2015

Meritorious Winner, International Mathematical Contest In Modeling (MCM), 2014

First Prize Scholarship, Xidian University, 2012-2014

This guy makes a nice webpage.