Paper Readings (Spring 2022)
This assignment can be done in teams of 1 or 2 students. For each selected paper, your team is required to prepare and record a presentation of at most 30 minutes.
Your team will upload the recorded video to YouTube (it can be a private video), then include the link to the video in your presentation (after the title slide). You will submit your presentation in PDF format via Gradescope no later than April 24th at 11:59pm. Please make sure all team members' names are included.
List of Papers
- A New Perspective on "How Graph Neural Networks Go Beyond Weisfeiler-Lehman?" [PDF], Ray Turrisi
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [PDF], Christopher Woyak
- Deformable DETR: Deformable Transformers for End-to-End Object Detection [PDF], Travis Frink
- FastSpeech 2: Fast and High-Quality End-to-End Text to Speech [PDF], Lily Sisouvong and Emmely Trejo Alvarez
- Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution [PDF], Matt McAdams
- GIRAFFE: Representing Scenes As Compositional Generative Neural Feature Fields [PDF], Pedro Mesquita and Liyuan Gong
- GradMax: Growing Neural Networks using Gradient Information [PDF], Justin Maio
- Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels [PDF], Nicholas Clavette
- Language-Agnostic Representation Learning of Source Code from Structure and Context [PDF], Javier Vela
- Language Models are Few-Shot Learners [PDF], Brennan Richards
- Learning in High Dimension Always Amounts to Extrapolation [PDF], Daniel Marasco
- N-BEATS: Neural basis expansion analysis for interpretable time series forecasting [PDF], Shuang Wang and Erkan Karakus
- Open-Set Recognition: A Good Closed-Set Classifier is All You Need [PDF], James McIntyre and John McLinden
- ViTGAN: Training GANs with Vision Transformers [PDF], Maryam KafiKang
- When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations [PDF], Alfred Timperley and Bolaji Oladipo