Week Ending 10.18.2020
RESEARCH WATCH: 10.18.2020
This week was very active for "Computer Science - Artificial Intelligence", with 221 new papers.
The paper discussed most in the news over the past week was "Its Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners" by Timo Schick et al (Sep 2020), which was referenced 8 times, including in the article BERT, GPT-x, and XLNet: AE, AR, and the Best of Both Worlds in Medium.com. Anna Rogers (University of Massachusetts Lowell), who is not part of the study, said "More data & compute = SOTA". The paper got social media traction with 359 shares. The researchers show that performance similar to GPT-3 can be obtained with language models whose parameter count is several orders of magnitude smaller. A Twitter user, @timo_schick, said "🎉 New paper 🎉 We show that language models are few-shot learners even if they have far less than 175B parameters. Our method performs similar to GPT-3 on SuperGLUE after training on 32 examples with just 0.1% of its parameter count: #NLProc".
The paper shared the most on social media this week is by a team at University of North Carolina at Chapel Hill: "Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision" by Hao Tan et al (Oct 2020).
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 268 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "VIVO: Surpassing Human Performance in Novel Object Captioning with Visual Vocabulary Pre-Training" by Xiaowei Hu et al (Sep 2020), which was referenced 12 times, including in the article Microsoft’s new image-captioning AI will help accessibility in Word, Outlook, and beyond in The Verge. The paper author, Lijuan Wang (Microsoft), was quoted saying "The nocaps challenge is really how are you able to describe those novel objects that you haven’t seen in your training data?". The paper got social media traction with 12 shares. The investigators present VIsual VOcabulary pre-training (VIVO), which performs pre-training in the absence of caption annotations.
Leading researcher Dhruv Batra (Georgia Institute of Technology) published "Contrast and Classify: Alternate Training for Robust VQA".
The paper shared the most on social media this week is by a team at University of North Carolina at Chapel Hill: "Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision" by Hao Tan et al (Oct 2020).
This week was active for "Computer Science - Computers and Society", with 30 new papers.
The paper discussed most in the news over the past week was by a team at Indiana University: "Integrating Machine Learning with HPC-driven Simulations for Enhanced Student Learning" by Vikram Jadhao et al (Aug 2020), which was referenced once, in the article What’s New in HPC Research: Underwater Robots, Cervical Cancer, Renewable Energy & More in HPC Wire. The paper got social media traction with 5 shares.
The paper shared the most on social media this week is "Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI" by Alon Jacovi et al (Oct 2020) with 76 shares. @yoavgo ((((ل()(ل() 'yoav))))) tweeted "Alon continues his foundational and heroic quest of properly defining the terms we are talking about. Here: What does it mean to "trust" an AI? Detailed thread about a detailed paper, which we hope will appeal to both the technical and non-technical crowds".
This week was active for "Computer Science - Human-Computer Interaction", with 31 new papers.
The paper discussed most in the news over the past week was "Effective Favor Exchange for Human-Agent Negotiation Challenge at IJCAI 2020" by Kushal Chawla et al (Sep 2020), which was referenced 1 time, including in the article Pilot: A virtual agent that can negotiate with humans in Tech Xplore. The paper author, Kushal Chawla (Adobe), was quoted saying "Previous editions of the ANAC competition have seen agents struggle with the trade-off between the total number of points scored and the perception of the agent in the eyes of the opponent, both of which are important metrics in a negotiation". The paper was shared 4 times in social media.
This week was extremely active for "Computer Science - Learning", with 490 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "VIVO: Surpassing Human Performance in Novel Object Captioning with Visual Vocabulary Pre-Training" by Xiaowei Hu et al (Sep 2020).
Leading researcher Yoshua Bengio (Université de Montréal) published "Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers".
The paper shared the most on social media this week is by a team at University of North Carolina at Chapel Hill: "Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision" by Hao Tan et al (Oct 2020).
Over the past week, 8 new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 30 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was by a team at Google: "Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves" by Luke Metz et al (Sep 2020), which was referenced once, in the article Artificial general intelligence: Are we close, and does it even make sense to try? in Technology Review. The paper also got the most social media traction with 583 shares. The researchers focus on general-purpose learned optimizers capable of training a wide variety of problems with no user-specified hyperparameters. On Twitter, @Luke_Metz commented "We have a new paper on learned optimizers! We used thousands of tasks (and a lot of compute 😬) to train general purpose learned optimizers that perform well on never-before-seen tasks, and can even train new versions of themselves. 1/8".
This week was very active for "Computer Science - Robotics", with 83 new papers.
The paper discussed most in the news over the past week was by a team at Google: "TNT: Target-driveN Trajectory Prediction" by Hang Zhao et al (Aug 2020), which was referenced 4 times, including in the article 3 Autonomous Vehicle Stocks That Are Changing the World in InvestorPlace.com. The paper got social media traction with 13 shares. On Twitter, @yuning_chai observed "New Waymo papers! Very lucky to be part of these amazing works for tracking and prediction in autonomous driving: SoDA: Multi-Object Tracking with Soft Data Association: TNT: Target-driveN Trajectory Prediction".
The paper shared the most on social media this week is by a team at Stanford University: "Learning Adaptive Language Interfaces through Decomposition" by Siddharth Karamcheti et al (Oct 2020) with 63 shares. @AnasPwnapple (Anas Abou Allaban) tweeted "Would be interesting to see something like this tied directly to robot actions. Maybe through latent representations or just simple RL?".