Week Ending 4.12.2020

 

RESEARCH WATCH: 4.12.2020

 

Over the past week, 81 new papers were published in "Computer Science - Artificial Intelligence".

  • The paper discussed most in the news over the past week was by a team at Google: "Placement Optimization with Deep Reinforcement Learning" by Anna Goldie et al. (Mar 2020), which was referenced 8 times, including in the article Google Proposes AI as Solution for Speedier AI Chip Design in All About Circuits. The paper author, Azalia Mirhoseini (Google), was quoted saying "We have already seen that there are algorithms or neural network architectures that… don't perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn't exist". The paper got social media traction with 26 shares. The researchers start by motivating reinforcement learning as a solution to the placement problem. On Twitter, @ogawa_tter commented "> Google, Patent Appl Device placement optimization with reinforcement learning, Oct 3 2019 Hierarchical device placement with reinforcement learning, Dec 26 2019 J. Dean, ISSCC 2020".

  • Leading researcher Ruslan Salakhutdinov (Carnegie Mellon University) published "Learning to Explore using Active Neural SLAM". This paper was also shared the most on social media with 179 tweets.
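The placement problem that the Google paper above frames as reinforcement learning can be illustrated with a toy environment: a policy places netlist blocks onto grid cells one at a time and receives the negative total wirelength as the episode reward. The sketch below shows only this framing, not the paper's method; the block names, grid size, Manhattan-distance cost, and random rollout policy are all illustrative assumptions.

```python
import itertools
import random

class ToyPlacement:
    """Toy sequential-placement environment (illustrative assumption,
    not the paper's setup): blocks are placed one at a time onto free
    grid cells; the final reward is negative total wirelength."""

    def __init__(self, nets, grid=(2, 2)):
        self.nets = nets                      # list of (block_a, block_b) edges
        self.cells = list(itertools.product(range(grid[0]), range(grid[1])))
        self.reset()

    def reset(self):
        self.placed = {}                      # block -> (row, col)
        return self.placed

    def step(self, block, cell):
        assert cell not in self.placed.values(), "cell already occupied"
        self.placed[block] = cell
        done = len(self.placed) == len(self.cells)
        reward = -self.wirelength() if done else 0.0
        return self.placed, reward, done

    def wirelength(self):
        # Manhattan distance per two-pin net (a simple stand-in cost)
        return sum(abs(self.placed[a][0] - self.placed[b][0]) +
                   abs(self.placed[a][1] - self.placed[b][1])
                   for a, b in self.nets)

def rollout(env, rng):
    """Random policy rollout; an RL agent would instead learn which
    free cell to pick at each step to maximize the final reward."""
    env.reset()
    free = list(env.cells)
    rng.shuffle(free)
    reward = 0.0
    for block, cell in enumerate(free):
        _, reward, _ = env.step(block, cell)
    return reward
```

A learned policy would replace the shuffle with per-step choices conditioned on the partial placement, which is where the reinforcement learning in the paper comes in.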

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 284 new papers.

Over the past week, 23 new papers were published in "Computer Science - Computers and Society".

Over the past week, 22 new papers were published in "Computer Science - Human-Computer Interaction".

  • The paper discussed most in the news over the past week was "Interactive Neural Style Transfer with Artists" by Thomas Kerdreux et al. (Mar 2020), which was referenced once, in the article Psychedelic Style Transfer in Medium.com. The paper got social media traction with 20 shares. On Twitter, @gastronomy observed "> We present interactive painting processes in which a painter and various neural style transfer algorithms interact on a real canvas. Understanding what these algori…".

This week was very active for "Computer Science - Learning", with 383 new papers.

Over the past week, 15 new papers were published in "Computer Science - Multiagent Systems".

  • The paper discussed most in the news over the past week was by a team at The University of Sydney: "Modelling transmission and control of the COVID-19 pandemic in Australia" by Sheryl L. Chang et al. (Mar 2020), which was referenced 47 times, including in the article COVID-19: Isolation Is A Marathon, Not A Sprint in PopularResistance.Org. The paper author, Mikhail Prokopenko (The University of Sydney), was quoted saying "If we want to control the spread of COVID-19 – rather than letting the disease control us – at least eighty per cent of the Australian population must comply with strict social distancing measures for at least four months". The paper also got the most social media traction with 632 shares. The authors develop an agent-based model for a fine-grained computational simulation of the ongoing COVID-19 pandemic in Australia. A Twitter user, @arthaey, posted "This paper models 80-90% social distancing compliance is needed, & only works while we KEEP doing it, until a vaccine: (blue line is 70% compliance, red 80%, yellow 90%; spikes later are when social distancing is lifted)".
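A minimal agent-based S/I/R sketch conveys the idea behind the paper's compliance threshold, though the paper's model is far more detailed and calibrated to Australian census and mobility data. Here a `compliance` fraction of agents distance, which cuts both their daily contacts and their chance of being infected on contact; every parameter value is an illustrative assumption.

```python
import random

def simulate(n_agents=2000, days=100, compliance=0.0, seed=1):
    """Toy agent-based epidemic: returns the peak number of
    simultaneously infected agents (all parameters assumed)."""
    rng = random.Random(seed)
    p_base = 0.04                 # per-contact transmission probability (assumed)
    contacts = 10                 # daily contacts without distancing (assumed)
    state = ['S'] * n_agents      # susceptible / infected / recovered
    sick_days = [0] * n_agents
    for i in range(10):           # seed a few initial infections
        state[i] = 'I'
    complies = [rng.random() < compliance for _ in range(n_agents)]
    peak = 0
    for _ in range(days):
        newly = []
        for i in range(n_agents):
            if state[i] != 'I':
                continue
            k = 2 if complies[i] else contacts      # distancing cuts contacts
            for _ in range(k):
                j = rng.randrange(n_agents)
                # a distancing contact is also harder to infect
                p = p_base * (0.2 if complies[j] else 1.0)
                if state[j] == 'S' and rng.random() < p:
                    newly.append(j)
        for j in newly:
            state[j] = 'I'
        for i in range(n_agents):
            if state[i] == 'I':
                sick_days[i] += 1
                if sick_days[i] >= 14:              # recover after two weeks
                    state[i] = 'R'
        peak = max(peak, state.count('I'))
    return peak
```

With these assumed parameters, raising `compliance` sharply lowers the epidemic peak, mirroring the paper's qualitative finding that high compliance is what keeps transmission suppressed.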

This week was active for "Computer Science - Neural and Evolutionary Computing", with 41 new papers.

  • The paper discussed most in the news over the past week was by a team at Google: "AutoML-Zero: Evolving Machine Learning Algorithms From Scratch" by Esteban Real et al. (Mar 2020), which was referenced 4 times, including in the article Artificial intelligence is evolving all by itself in Science Magazine. The paper also got the most social media traction with 1321 shares. A user, @tomvarsavsky, tweeted "One of the most interesting results I've seen in ML in the last 5 years. Evolving programs using a generic search space and generic mutations leads to the discovery of not only SGD and two layer NNs but also rand init, ReLU, Grad Norm. Can someone find a hidden inductive bias?".

  • Leading researcher Quoc V. Le (Google) came out with "Evolving Normalization-Activation Layers", which had 50 shares over the past 4 days and was also the paper shared most on social media, with 556 tweets in total. @hardmaru tweeted "List of papers about automating improvements for deep learning: • better architectures from known building blocks • better activation functions • better learning rules than sgd/adam • better data augmentation strategies • better loss functions • better normalization layers".
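The AutoML-Zero item above describes evolving programs from primitive operations with generic mutations. The spirit of that search can be sketched with a toy regularized-evolution loop: programs are short sequences of register ops, and fitness measures how well a program maps x to x² + x. The op set, register file, fitness task, and population settings are all illustrative assumptions, far smaller than the paper's search space.

```python
import random

OPS = ['add', 'mul', 'copy']   # tiny primitive-op vocabulary (assumed)

def random_instr(rng, n_regs=4):
    # An instruction is (op, src_a, src_b, dst) over a 4-register file
    return (rng.choice(OPS), rng.randrange(n_regs), rng.randrange(n_regs),
            rng.randrange(n_regs))

def run(program, x, n_regs=4):
    regs = [0.0] * n_regs
    regs[0] = x                            # r0 holds the input; r1 is the output
    for op, a, b, dst in program:
        if op == 'add':
            regs[dst] = regs[a] + regs[b]
        elif op == 'mul':
            regs[dst] = regs[a] * regs[b]
        else:                              # copy
            regs[dst] = regs[a]
    return regs[1]

def fitness(program):
    # Negative squared error against the toy target x^2 + x
    xs = [-2.0, -1.0, 0.5, 1.0, 2.0]
    return -sum((run(program, x) - (x * x + x)) ** 2 for x in xs)

def evolve(steps=2000, pop_size=50, prog_len=5, seed=0):
    """Regularized evolution: tournament-select a parent, mutate one
    instruction, and retire the oldest individual; track the best ever."""
    rng = random.Random(seed)
    pop = [[random_instr(rng) for _ in range(prog_len)] for _ in range(pop_size)]
    fits = [fitness(p) for p in pop]
    best_i = max(range(pop_size), key=lambda i: fits[i])
    best, best_f = list(pop[best_i]), fits[best_i]
    for _ in range(steps):
        sample = rng.sample(range(pop_size), 10)
        parent = max(sample, key=lambda i: fits[i])
        child = list(pop[parent])
        child[rng.randrange(prog_len)] = random_instr(rng)
        pop.pop(0); fits.pop(0)            # age out the oldest individual
        pop.append(child); fits.append(fitness(child))
        if fits[-1] > best_f:
            best, best_f = list(child), fits[-1]
    return best
```

The aging step is what makes the evolution "regularized": individuals survive by being repeatedly re-selected, not by sitting in the population forever.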

Over the past week, 43 new papers were published in "Computer Science - Robotics".


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.