Week Ending 08.31.2018

 

RESEARCH WATCH: 08.31.2018

 
 

OVER THE PAST WEEK, 33 NEW PAPERS WERE PUBLISHED IN "COMPUTER SCIENCE - ARTIFICIAL INTELLIGENCE".
 
The paper discussed most in the news over the past week was by a team at Columbia University: "Neural Network Quine" by Oscar Chang et al (Mar 2018), which was referenced 5 times, including in the article Researchers Selected to Develop Novel Approaches to Lifelong Machine Learning in DARPA. 

→ The paper got social media traction with 233 shares. 
→ The investigators describe how to build and train self-replicating neural networks. 


Over the past month, 144 new articles were published, 15% higher than the average monthly rate. The new articles cover all 32 topics tracked in this space of roughly 2,000 articles.



→ Program & Answer Set Programming was the topic with the largest increase in research activity this month, at about twice its average rate of 5 papers/month.

→ Research on Recognition & Text and Deep Reinforcement Learning & Autonomy remained strong.

→ Meanwhile, there was a significant drop-off in research on Machine Learning & Prediction, which had been averaging 12 papers/month but saw only 4 new papers over the past month.


The paper discussed most in the news over the past 12 months was "When Will AI Exceed Human Performance? Evidence from AI Experts" by Katja Grace et al (May 2017), which was referenced 234 times, including in the article Experts Predict When Artificial Intelligence Will Exceed Human Performance in Technology Review. 



→ The paper got social media traction with 2893 shares.

→ The investigators report the results from a large survey of machine learning researchers on their beliefs about progress in AI.

→ On Twitter, @wimrampen commented "I use this research in my keynote slides to highlight that: 1. Many simple tasks will be automated in the 10 years 2. There's a 50% chance that in 50 years AI will take over ALL human tasks 3. There's huge difference of opinions on these predictions".


The most shared paper on social media over the past 12 months was by a team at DeepMind: "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by David Silver et al (Dec 2017), which was shared 3842 times.


QUINE: 
SINE QUA NON?
 

This is the kind of result that makes headlines because “self-replicating neural network” sounds intriguing, but there is not much substance to back it up.


It also doesn't exactly work (the network can only copy itself approximately), and it was rejected from the 2018 ICLR workshops.

The application suggested in the paper (evolutionary algorithms for generating other neural network architectures) is better explored in
https://arxiv.org/abs/1712.06567 or https://arxiv.org/abs/1707.07012


FACTS

→ Quine is a technical term for a software program that outputs its own source code.
→ The degenerate version is a zero quine: a blank program that also produces a blank program as output.
→ An example in Python 2 is “_='_=%r;print _%%_';print _%_”, which prints itself exactly (a Python 3 version follows this list).
→ It is an interesting concept philosophically (read Douglas Hofstadter's Gödel, Escher, Bach for more fun).
→ Practically, though, its uses seem more limited.
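
A quick aside for anyone who wants to try it: the snippet above uses Python 2's print statement. An equivalent Python 3 one-liner is below; pasting it into an interpreter prints the line back exactly.

_='_=%r;print(_%%_)';print(_%_)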


PROBLEM

The number of parameters that make up a neural network greatly exceeds the number of outputs it produces, so the network cannot simply print out all of its own weights in one pass.


SOLUTION

→ Encode the neural network parameters indirectly using a concept called HyperNEAT, which allows the network to implicitly store and output its own structure.
→ Using HyperNEAT, the quine network takes a specific coordinate (a location in its own structure) as input and outputs the parameter value stored at that location.
→ To reconstruct the full neural network, feed the quine network every coordinate and assemble the weights from its outputs (a minimal sketch of this loop follows the list).
→ The vanilla quine is trained only to replicate itself, while the auxiliary quine is trained to classify handwritten digits from the MNIST dataset while preserving the self-replication property.
→ The vanilla quine was able to largely replicate itself within a small margin of error, but the auxiliary quine was not able to do both tasks well.
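
To make the coordinate-in, parameter-out loop concrete, here is a minimal NumPy sketch. It is not the authors' code: the network size, the random coordinate embeddings, and the helper names are illustrative, and the training step that actually shrinks the self-replication gap is omitted.

import numpy as np

rng = np.random.default_rng(0)

# A tiny one-hidden-layer "quine": its parameters are W1 and W2.
H = 16
W1 = rng.normal(size=(H, H))   # coordinate embedding -> hidden
W2 = rng.normal(size=(1, H))   # hidden -> predicted parameter value
params = [W1, W2]
n_params = sum(w.size for w in params)

# Each parameter gets a fixed random embedding of its coordinate
# (which matrix, row, and column it lives at): the indirect,
# HyperNEAT-style encoding described above.
coords = rng.normal(size=(n_params, H))

def predict_param(coord):
    # Given one coordinate embedding, predict the parameter stored there.
    return (W2 @ np.tanh(W1 @ coord)).item()

def reconstruct():
    # Rebuild the full weight set by querying the network at every coordinate.
    flat = np.array([predict_param(c) for c in coords])
    out, i = [], 0
    for w in params:
        out.append(flat[i:i + w.size].reshape(w.shape))
        i += w.size
    return out

# Training a vanilla quine means driving this gap toward zero; the auxiliary
# quine adds an MNIST classification loss on top of the same objective.
gap = sum(np.abs(r - w).sum() for r, w in zip(reconstruct(), params))
print(f"self-replication gap (untrained): {gap:.2f}")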

- Hugh Zhang, Stanford


GAMES COMPUTERS PLAY
 

This was a huge paper when it came out ~6 months ago as the final paper in the AlphaGo saga.




BACKGROUND

→ In March 2016, Google DeepMind's AlphaGo became the first computer program to defeat one of the world's best players, Lee Sedol, at the board game Go.
→ Building a stronger-than-human computer Go agent had long been a major AI benchmark. Earlier methods that relied on exhaustive search, in the style of IBM's Deep Blue for chess, had failed at Go because of the game's enormous complexity.
→ It was long hypothesized that some form of human “intuition” was needed to crack Go, and when deep learning exploded into the field in 2013, all eyes turned to see whether it would conquer Go as it had computer vision.
→ Over a year later, DeepMind updated AlphaGo into a new form: AlphaGo Zero.
→ While the original AlphaGo relied heavily on human data, AlphaGo Zero learned entirely from scratch.


ALPHAZERO

Itʼs a “more generic version of the AlphaGo Zero algorithm.”


SIMILARITIES

→ The approach combines deep learning with Monte Carlo Tree Search.
→ Monte Carlo Tree Search plays out millions of games using rough heuristics and uses the results of those playouts to determine the most effective move (a minimal sketch follows this list).
→ The network is trained through reinforcement learning: it plays against itself iteratively and learns from its own mistakes.
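
For readers who have not seen it, the sketch below is a generic UCT-style Monte Carlo Tree Search in Python. It is not DeepMind's code: the game object (with legal_moves, play, is_terminal, and result methods) is an assumed interface, the rollout is purely random, and the two-player sign flip during backpropagation is omitted for brevity. AlphaZero replaces the random rollout with a learned value and policy network, but the select / expand / simulate / backpropagate loop is the same basic idea.

import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # move -> child Node
        self.visits, self.value = 0, 0.0

def uct_search(root_state, game, n_simulations=10_000, c=1.4):
    root = Node(root_state)
    for _ in range(n_simulations):
        node = root
        # 1. Selection: descend the tree, picking the child that maximizes UCT.
        while node.children and not game.is_terminal(node.state):
            node = max(
                node.children.values(),
                key=lambda ch: ch.value / (ch.visits + 1e-9)
                + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)),
            )
        # 2. Expansion: add a child for every legal move from this state.
        if not game.is_terminal(node.state):
            for move in game.legal_moves(node.state):
                node.children[move] = Node(game.play(node.state, move), parent=node)
            node = random.choice(list(node.children.values()))
        # 3. Simulation (the "rough heuristic"): play random moves to the end.
        state = node.state
        while not game.is_terminal(state):
            state = game.play(state, random.choice(game.legal_moves(state)))
        reward = game.result(state)   # assumed score from the root player's view
        # 4. Backpropagation: update statistics along the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Play the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]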


DIFFERENCES FROM ALPHAGO TO ALPHAGO ZERO

→ AlphaGo Zero borrowed a technique from computer vision called residual connections, which allowed it to train a much more powerful model (a toy example follows this list).
→ AlphaGo Zero was only given knowledge of the game it was playing and did not use any other information (AlphaGo originally was fed information on a type of strategy called ladders).
→ AlphaZero learned from scratch without using any sort of human data to augment the training process (same as AlphaGo Zero, different from original AlphaGo).
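
As an aside on the residual connections mentioned above, the toy fully connected block below shows the core trick of adding a block's input back onto its output. It is a sketch only: AlphaGo Zero's actual residual blocks are convolutional with batch normalization, and the sizes here are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # width of the block (illustrative)
W1, b1 = rng.normal(size=(d, d)), np.zeros(d)
W2, b2 = rng.normal(size=(d, d)), np.zeros(d)

def residual_block(x):
    # An ordinary two-layer transformation f(x)...
    f = W2 @ np.maximum(0.0, W1 @ x + b1) + b2
    # ...plus the skip connection that adds the input back on: y = x + f(x).
    # The identity path gives gradients a direct route through deep stacks,
    # which is what makes much deeper (and stronger) networks trainable.
    return np.maximum(0.0, x + f)

x = rng.normal(size=d)
print(residual_block(x).shape)           # (8,): same shape in, same shape out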


DIFFERENCES FROM ALPHAGO ZERO TO ALPHAZERO

→ AlphaZero is fully generic and can play any perfect information board game, not just Go.
→ AlphaZero does not use any Go-specific tricks, e.g. taking advantage of symmetries, while training.


RESULTS

→ AlphaZero surpassed Stockfish (chess) after just 4 hours of training; shogi took even less time (under 2 hours).
→ Outperforming its predecessor AlphaGo Lee took the longest, at 8 hours.


SIGNIFICANCE

→ DeepMind's original AlphaGo drew a familiar criticism: it was an algorithm that could play Go and nothing else.
→ AlphaZero shatters that criticism.
→ To be sure, AlphaZero can still only play a specific type of board game: one without hidden information or random chance. It is still far from a general-purpose AI algorithm. But the fact that the exact same algorithm can learn to be world class at three exceptionally challenging games is a major milestone on the road to general AI.

- Hugh Zhang, Stanford


THE FUTURE OF EVERYTHING

My gut feeling is that the near-term predictions are too pessimistic while the long-term ones are extraordinarily optimistic. That said, remember the oft-cited joke that “nuclear fusion is perpetually 20 years away.”




FACTS

→ The survey was led by the Future of Humanity Institute at Oxford University, founded by philosopher Nick Bostrom.
→ It asked all researchers who published at NIPS and ICML 2015 (the top two machine learning conferences) when they expected AI to achieve certain milestones.
→ The forecasts below are the average estimates across all surveyed researchers.
→ AI takes over all jobs from humans: 122 years.
→ < 4 years - Algorithm skilled enough to win the World Series of Poker (This was actually accomplished by Libratus shortly before this survey was published).
→ < 6 years - Robots can fold your laundry.
→ < 8 years - Human-level translation and transcription of audio data.
→ < 9 years - Computers can perfectly mimic the human voice (Googleʼs recently released Duplex canʼt quite do this yet, but itʼs pretty darn close.)
→ < 10 years - AI writes essays better than the average high schooler.
→ < 12 years - A two-legged robot beats the best humans at a 5k race.
→ Algorithm generates a US Top 40 pop song with no human assistance.
→ 30+ years - An algorithm writes a New York Times bestseller.
→ 40+ years - AI can do cutting-edge research in mathematics.
→ 80+ years - AI can do cutting-edge research on AI (hilarious that AI researchers think their own job will be the very last to be automated).
→ AI research is improving at an accelerating rate, according to respondents.
→ Asian researchers estimate even faster AI progress than their North American counterparts.

- Hugh Zhang, Stanford

 

EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.