Week Ending 7.12.2020

 

RESEARCH WATCH: 7.12.2020

 

This week was active for "Computer Science - Artificial Intelligence", with 131 new papers.

  • The paper discussed most in the news over the past week was "Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides" by Akshat Pandey et al (Jun 2020), which was referenced 11 times, including in the article Uber and Lyft overcharge riders going to and from disadvantaged areas in Tech Xplore. The paper author, Aylin Caliskan (George Washington University), was quoted saying "When machine learning is applied to social data, the algorithms learn the statistical regularities of the historical injustices and social biases embedded in these data sets". The paper got social media traction with 74 shares. The authors develop a random-effects-based metric for the analysis of social bias in supervised machine learning prediction models where model outputs depend on U.S. locations; a toy illustration of random-effects pooling appears after this list. A Twitter user, @DavidZipper, said "Analyzing 100 million Chicago ride hail trips, researchers found significant evidence of bias. Algorithms used by Uber/Lyft/Via led to higher fares for those going to neighborhoods with a high share of minority or older residents, for example. DL link".

  • Leading researcher Oriol Vinyals (DeepMind) published "Strong Generalization and Efficiency in Neural Programs". @hardmaru tweeted "Their learned programs can outperform hand-coded programs in terms of efficiency on several algorithmic tasks, such as sorting, searching in ordered lists and a version of the 0/1 knapsack problem, while also generalizing to instances of arbitrary length".

  • The paper shared the most on social media this week is "Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence" by Shakir Mohamed et al (Jul 2020) with 497 shares. The authors explore the important role of critical science, and in particular of post-colonial and decolonial theories, in understanding and shaping the ongoing advances in artificial intelligence. @natematias (J. Nathan Matias) tweeted "Encouraged to see more scholars connect postcolonial theory to tech. Nine years ago when I started my PhD, I quickly learned that CS wasn't interested in or even able to see that part of my expertise. Now that's changing".
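
For readers curious what a random-effects analysis looks like in practice, here is a minimal sketch of DerSimonian-Laird pooling of per-group effect sizes, in the spirit of the ridehailing paper above. The function name, the neighborhood effect sizes, and the variances are hypothetical placeholders, not the paper's actual metric or data.

    # A minimal sketch (not the authors' code) of a random-effects pooled effect
    # size: each "study" is a neighborhood group with an observed effect size
    # (e.g., a fare difference) and a sampling variance.
    import numpy as np

    def random_effects_pooled(effects, variances):
        """DerSimonian-Laird random-effects pooling of per-group effect sizes."""
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w_fixed = 1.0 / variances                        # inverse-variance (fixed-effect) weights
        mu_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
        q = np.sum(w_fixed * (effects - mu_fixed) ** 2)  # Cochran's Q heterogeneity statistic
        df = len(effects) - 1
        c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
        tau2 = max(0.0, (q - df) / c)                    # between-group variance estimate
        w_random = 1.0 / (variances + tau2)              # random-effects weights
        mu_random = np.sum(w_random * effects) / np.sum(w_random)
        se = np.sqrt(1.0 / np.sum(w_random))
        return mu_random, se, tau2

    # Hypothetical per-neighborhood fare effects and sampling variances.
    effects = [0.12, 0.08, 0.15, 0.02]
    variances = [0.001, 0.002, 0.0015, 0.001]
    mu, se, tau2 = random_effects_pooled(effects, variances)
    print(f"pooled effect = {mu:.3f} +/- {1.96 * se:.3f}, tau^2 = {tau2:.4f}")

The point of the random-effects weighting is that groups are allowed to have genuinely different underlying effects; tau^2 captures that between-group spread rather than treating it as noise.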

This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 313 new papers.

  • The paper discussed most in the news over the past week was by a team at Facebook AI: "End-to-End Object Detection with Transformers" by Nicolas Carion et al (May 2020), which was referenced 6 times, including in the article Running Deep Learning Models is complicated and here is why in Towards Data Science. The paper also got the most social media traction with 639 shares; a minimal sketch of the DETR architecture appears after this list. A Twitter user, @theairbend3r, said "End-to-End Object Detection with Transformers by Facebook AI DETR uses a conventional CNN backbone to learn a 2D representation of an input image. #deeplearning #machinelearning #PyTorch #research", while @KrAbhinavGupta said "Very interesting".

  • Leading researcher Pieter Abbeel (UC Berkeley) published "Self-Supervised Policy Adaptation during Deployment". @AravSrinivas tweeted "Cool paper by et al that proposes to use self-supervised updates like CURL/IDM and data-augmentations at test-time in order to adjust to new unseen test environments".

  • The paper shared the most on social media this week is "NVAE: A Deep Hierarchical Variational Autoencoder" by Arash Vahdat et al (Jul 2020) with 532 shares. @EtherealEq (Eleanor Q) tweeted "I love working with VAEs and I'm really glad that we're taking another step forward. These are lovely results! Please release good, usable code! Help me cite you! tl;dr SOTA is cool, but easily usable is cooler (1/n)".
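
To make the DETR item above concrete, the following is a minimal PyTorch sketch of the architecture the tweet describes: a CNN backbone producing a 2D feature map that feeds a transformer encoder-decoder with learned object queries. It assumes torch and torchvision are installed; the class name, layer sizes, and positional-encoding scheme are simplifications rather than the paper's exact implementation, and the bipartite-matching loss that actually trains DETR is omitted.

    # A minimal DETR-style sketch: resnet backbone -> transformer -> per-query heads.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class MiniDETR(nn.Module):
        def __init__(self, num_classes, hidden_dim=256, nheads=8,
                     num_encoder_layers=6, num_decoder_layers=6, num_queries=100):
            super().__init__()
            # CNN backbone: keep everything up to the last conv stage (2048 channels).
            backbone = resnet50()
            self.backbone = nn.Sequential(*list(backbone.children())[:-2])
            self.conv = nn.Conv2d(2048, hidden_dim, kernel_size=1)   # project to transformer width
            self.transformer = nn.Transformer(hidden_dim, nheads,
                                              num_encoder_layers, num_decoder_layers)
            self.query_embed = nn.Parameter(torch.rand(num_queries, hidden_dim))  # learned object queries
            self.row_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))        # simple learned 2D pos. enc.
            self.col_embed = nn.Parameter(torch.rand(50, hidden_dim // 2))
            self.class_head = nn.Linear(hidden_dim, num_classes + 1)  # +1 for the "no object" class
            self.bbox_head = nn.Linear(hidden_dim, 4)                 # (cx, cy, w, h), normalized

        def forward(self, images):
            feats = self.conv(self.backbone(images))                  # (B, D, H, W)
            B, D, H, W = feats.shape
            pos = torch.cat([
                self.col_embed[:W].unsqueeze(0).repeat(H, 1, 1),
                self.row_embed[:H].unsqueeze(1).repeat(1, W, 1),
            ], dim=-1).flatten(0, 1).unsqueeze(1)                      # (H*W, 1, D)
            src = pos + feats.flatten(2).permute(2, 0, 1)              # (H*W, B, D)
            tgt = self.query_embed.unsqueeze(1).repeat(1, B, 1)        # (num_queries, B, D)
            hs = self.transformer(src, tgt)                            # (num_queries, B, D)
            return self.class_head(hs), self.bbox_head(hs).sigmoid()

    logits, boxes = MiniDETR(num_classes=91)(torch.rand(1, 3, 800, 800))

Each of the 100 query slots predicts either one object (class plus box) or "no object", which is what lets the model skip anchors and non-maximum suppression entirely.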

This week was very active for "Computer Science - Computers and Society", with 59 new papers.

This week was active for "Computer Science - Human-Computer Interaction", with 32 new papers.

  • The paper discussed most in the news over the past week was by a team at IBM: "A Methodology for Creating AI FactSheets" by John Richards et al (Jun 2020), which was referenced 1 time, including in the article IBM FactSheets Further Advances Trust in AI in IBM Research. The paper got social media traction with 12 shares. A Twitter user, @makedatahealthy, said "team created FactSheets as a form of AI documentation tailored to the AI model and the needs of its target audience. Thanks for citing DNP and we're excited to see the project develop!".

This week was extremely active for "Computer Science - Learning", with 467 new papers.

Over the past week, 18 new papers were published in "Computer Science - Multiagent Systems".

  • The paper discussed most in the news over the past week was "Simulating COVID-19 in a University Environment" by Philip T. Gressman et al (Jun 2020), which was referenced 98 times, including in the article Model Simulates How Quickly COVID-19 Can Spread On A College Campus in Northern Public Radio. The paper author, Jennifer Peck, was quoted saying "What we’ve shown is that if the social side is under control, you can manage the spread through academic contacts. So having said that, can you get the social side of the contacts under control? At this point, that’s a first-order question". The paper also got the most social media traction with 153 shares. A user, @DCBPhDV2, tweeted "Y'all. "In the absence of any intervention, all scenarios end with effectively all susceptible community members developing #COVID19 by the end of the semester, with peak infection rates reached between 20 and 40 days into the semester." #HigherEd". A toy compartmental-model sketch of this kind of no-intervention scenario appears after this list.

  • Leading researcher Sergey Levine (University of California, Berkeley) published "Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions", which had 22 shares over the past 5 days and was also the most shared paper in this category with 117 tweets. The investigators seek to establish a framework for directing a society of simple, specialized, self-interested agents to solve what are traditionally posed as monolithic single-agent sequential decision problems. @kovasb (Kovas Boguta) tweeted "Love this. Tried to get a similar model working 15 years ago with cellular automata, but could never figure out a reasonable currency".
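
The campus-simulation item above reports outcomes rather than mechanics. As a rough intuition pump only, the toy discrete-time SIR loop below shows how, under a well-mixed no-intervention assumption, nearly all susceptible members of a closed population can be infected within a semester. The population size, transmission rate, and recovery time are invented placeholders; this is not the authors' agent-based contact model.

    # A toy discrete-time SIR sketch (not the authors' model) of a "no intervention"
    # semester: susceptible (s), infected (i), recovered (r) counts per day.
    def simulate_semester(population=20000, initially_infected=10,
                          beta=0.4, recovery_days=7, semester_days=100):
        s, i, r = population - initially_infected, initially_infected, 0
        gamma = 1.0 / recovery_days
        history = []
        for day in range(semester_days):
            new_infections = beta * s * i / population   # well-mixed contact assumption
            recoveries = gamma * i
            s -= new_infections
            i += new_infections - recoveries
            r += recoveries
            history.append((day, s, i, r))
        return history

    history = simulate_semester()
    peak_day, _, peak_i, _ = max(history, key=lambda row: row[2])
    print(f"peak infections ~{peak_i:.0f} on day {peak_day}; "
          f"still susceptible at semester end: {history[-1][1]:.0f}")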

Over the past week, 32 new papers were published in "Computer Science - Neural and Evolutionary Computing".

  • The paper discussed most in the news over the past week was "An Astrocyte-Modulated Neuromorphic Central Pattern Generator for Hexapod Robot Locomotion on Intel's Loihi" by Ioannis Polykretis et al (Jun 2020), which was referenced 1 time, including in the article Using astrocytes to change the behavior of robots controlled by neuromorphic chips in Tech Xplore. The paper author, Michmizos, was quoted saying "As we continue to increase our understanding of how astrocytes work in brain networks, we find new ways to harness the computational power of these non-neuronal cells in our neuromorphic models of brain intelligence, and make our in-house robots behave more like humans". The paper got social media traction with 7 shares. The researchers propose a brain-morphic CPG controller based on a comprehensive spiking neural-astrocytic network that generates two gait patterns for a hexapod robot. A Twitter user, @Olumide_jfST, commented "Seeing this is so satisfying 😭Neuro peeps who know me personally KNOW I’ve always wondered why astrocytes haven’t been included in typical paradigms for modeling network activity. Perhaps it begs the question, will they ever be included in CONVNETS? 🤔". A toy coupled-oscillator illustration of a CPG appears after this list.

  • Leading researcher Oriol Vinyals (DeepMind) came out with "Strong Generalization and Efficiency in Neural Programs", which was also the most shared paper in this category with 399 tweets. @hardmaru (hardmaru) tweeted "Their learned programs can outperform hand-coded programs in terms of efficiency on several algorithmic tasks, such as sorting, searching in ordered lists and a version of the 0/1 knapsack problem, while also generalizing to instances of arbitrary length".
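
The central pattern generator item above can be illustrated, very loosely, with coupled phase oscillators: six leg oscillators lock onto target phase offsets, and the choice of offsets selects a gait. This is a toy sketch only; the paper's controller is a spiking neural-astrocytic network running on Intel's Loihi, and the function names, gains, and gait offsets below are assumptions for illustration.

    # A toy phase-oscillator CPG sketch (not the paper's spiking model): six coupled
    # oscillators whose relative phase offsets select a hexapod gait.
    import math

    def cpg_step(phases, targets, freq=1.0, coupling=2.0, dt=0.01):
        """Advance six leg oscillators; each is pulled toward its target phase offset."""
        new = []
        for k, phi in enumerate(phases):
            # Couple each leg to leg 0 so the whole pattern locks to the desired offsets.
            error = (phases[0] + targets[k]) - phi
            new.append(phi + dt * (2 * math.pi * freq + coupling * math.sin(error)))
        return new

    # Target phase offsets (radians) for two hexapod gaits.
    TRIPOD = [0, math.pi, 0, math.pi, 0, math.pi]           # alternating tripods
    WAVE = [k * math.pi / 3 for k in range(6)]               # back-to-front metachronal wave

    phases = [0.0] * 6
    for _ in range(2000):                                    # settle into the tripod gait
        phases = cpg_step(phases, TRIPOD)
    duty = [math.sin(p) > 0 for p in phases]                 # stance/swing flag per leg
    print(duty)

Switching the targets from TRIPOD to WAVE at runtime re-locks the oscillators into the second gait, which mirrors the two-gait behavior the item describes.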

This week was active for "Computer Science - Robotics", with 56 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.