NIPS 2017: 1st day
Here I am going to give an overview of what interesting things happened at NIPS 2017, from my point of view.
Since I am mostly a computer vision guy, my baseline is CVPR, and I look for computer-vision-applicable papers as well as inspirational ones. E.g., I am not interested in binarization and that kind of thing.
So, let's go.
The opening talk about NIPS paper statistics was very interesting. Take-away message:
arXiv correlates with acceptance rate. The average acceptance rate is 21%. For papers posted online, it is 29%. For papers posted online and seen by reviewers, it is 35%. Of course, this doesn't automatically mean that arXiv is the cause; it might be that, on average, better papers tend to be posted online.
The next cool thing was the invited talk by John Platt: Energy Strategies to Decrease CO2 Emissions.
Messages:
- Solar and wind energy become much more expensive after reaching a certain share of the economy. The reason is their peaky nature: the larger the share, the more energy is wasted.
- That is why Google is working with TAE on nuclear fusion energy. TAE runs the experiments, and Google builds ML models to model plasma behavior and to recommend the next experiments to try. "Debugging plasma", as John said :)
Afterwards, there was a poster session. We presented the poster "Working hard to know your neighbor's margins: Local descriptor learning loss".
Everything went well, except that the poster was too CV-ish and most people were not familiar with the task we are solving. Probably we should formulate it in a more ML style next time. Or submit to CVPR :)
Popular questions about the paper:
Q: It is not fair to compare a deep CNN with a lightweight SIFT. Is there a graph showing performance per FLOP?
A: Not yet, but we will do it for the journal version. It would also be interesting to apply NASNet, i.e., neural architecture search.
Q: Have you tried to replace the first, computationally heavy layers with something like the DCT, which is already encoded in JPEG?
A: No, but this is an interesting idea.
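To make the question concrete: JPEG already stores each image as 8×8 blocks of DCT coefficients, so in principle those coefficients could be fed to the network directly, skipping the first learned layers. A minimal sketch of that idea (my own illustration, not anything from the paper — all names here are made up):

```python
# Sketch: use the 8x8 block DCT (the transform JPEG stores) as a fixed,
# parameter-free "first layer" instead of learned convolutions.
import numpy as np
from scipy.fftpack import dct

def block_dct(img, block=8):
    """2-D DCT-II applied to non-overlapping 8x8 blocks of a grayscale image."""
    h, w = img.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of 8
    img = img[:h, :w]
    # reshape into (n_blocks_y, n_blocks_x, 8, 8)
    blocks = img.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    # separable 2-D DCT: transform rows, then columns, of each block
    coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
    # the 64 coefficients per block become input "channels" for the network
    return coeffs.reshape(h // block, w // block, block * block)

patch = np.random.rand(32, 32).astype(np.float32)
features = block_dct(patch)
print(features.shape)  # (4, 4, 64): 64 DCT "channels" on a 4x4 spatial grid
```

With `norm='ortho'` the transform is orthonormal, so no information is lost — the network just sees the same patch in a basis it no longer has to learn.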
Our neighbors were the BicycleGAN authors, with an iPad-augmented poster :)
Unfortunately, I did not have much time to explore because of our own poster presentation. Nevertheless:
- “Incorporating Side Information by Adaptive Convolution” proposed a way to insert information like camera perspective into a CNN.
- “Selective Classification for Deep Neural Networks”: an easy way to give a CNN the option to say “don't know”. Only for trained classes so far.
- A cool “neuroscience” paper: “Reconstructing perceived faces from brain activations with deep adversarial neural decoding”.
- “When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness”. I have no idea what the paper is about, but the poster is cool.
- Reviving fuzzy logic: “Revisit Fuzzy Neural Network: Demystifying Batch Normalization and ReLU with Generalized Hamming Network”. BatchNorm parameters can be estimated analytically instead of being learned. It turned out that the author works with my supervisor at Tampere University. The world is small :)
- “A Bayesian Data Augmentation Approach for Learning Deep Models”. The explanation of how it works wasn't very clear, but it sounds interesting enough to try.
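The “don't know” idea from the selective classification poster above can be sketched very simply, as I understood it: score each prediction by its softmax confidence and abstain below a threshold. (The actual paper selects the threshold to guarantee a target error rate; this toy version just hard-codes one, and the function names are mine.)

```python
# Toy sketch of selective classification ("softmax response" style):
# predict only when the top softmax probability clears a threshold,
# otherwise answer "don't know". The threshold value here is arbitrary.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def selective_predict(logits, threshold=0.7):
    """Return the class index, or -1 ("don't know") when confidence < threshold."""
    probs = softmax(logits)
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, -1)

logits = np.array([[4.0, 0.5, 0.1],    # confident -> predict class 0
                   [1.0, 0.9, 0.8]])   # near-uniform -> abstain
print(selective_predict(logits))       # [ 0 -1]
```

Raising the threshold trades coverage for accuracy: the network answers less often, but more reliably on what it does answer.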
So, that is the end of the 1st day. Stay tuned ;)