NIPS 2017: 1st day

Dmytro Mishkin
Dec 7, 2017 · 4 min read


Here I will give an overview of what I found interesting at NIPS 2017.
Since I am mostly a computer vision person, my baseline is CVPR, and I look for computer-vision-applicable papers as well as inspirational ones. E.g., I am not interested in binarization and that kind of thing.
So, let's go.

Huge lines (a two-hour wait) were the biggest, and only, organizational problem at NIPS

The opening talk about NIPS paper statistics was very interesting. Take-away message:

Posting on arXiv correlates with acceptance rate. The average acceptance rate is 21%; for papers posted online it is 29%, and for papers posted online and seen by reviewers it is 35%. Of course, this doesn't automatically mean that arXiv influences acceptance. It might be that, on average, better papers tend to be posted.

The next cool thing was the invited talk by John Platt: Energy Strategies to Decrease CO2 Emissions.

Messages:

1. Solar and wind energy become much more expensive after reaching a certain share of the energy mix. The reason is their peaky nature: as that share grows, more and more of the energy is wasted.

2. That is why Google is working with TAE on nuclear fusion energy. TAE runs the experiments and Google builds ML models to model plasma behavior and to recommend the next experiments to try. “Debugging plasma”, as John said :)

So far, the experiments are going well.

Afterwards, there was a poster session. We presented the poster “Working hard to know your neighbor’s margins: Local descriptor learning loss”.

Welphie near the HardNet poster

Everything went well, except that the poster was too CV-ish and most people were not familiar with the task we are solving. Probably we should formulate it in a more ML style next time. Or submit to CVPR :)
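For ML folks unfamiliar with the task: the paper is about learning local patch descriptors (a CNN replacement for SIFT) with a triplet margin loss where the negative for each matching pair is mined as the hardest non-match within the batch. Below is a simplified, single-direction PyTorch sketch of that kind of loss; the names and details are illustrative, not the exact code from the paper.

```python
import torch
import torch.nn.functional as F

def hardest_in_batch_margin_loss(anchors, positives, margin=1.0):
    """Triplet margin loss with hardest-in-batch negative mining (simplified).

    anchors, positives: (N, D) L2-normalized descriptors; row i of `anchors`
    matches row i of `positives`, every other row serves as a negative.
    """
    dist = torch.cdist(anchors, positives)      # (N, N) pairwise L2 distances
    pos_dist = dist.diag()                      # distances of the matching pairs
    # Mask out the matching pairs, then take the closest non-match per anchor.
    mask = torch.eye(dist.size(0), dtype=torch.bool, device=dist.device)
    hardest_neg = dist.masked_fill(mask, float('inf')).min(dim=1).values
    # Pull matches together, push the hardest negative beyond the margin.
    return F.relu(margin + pos_dist - hardest_neg).mean()

# Toy usage with random 128-D descriptors.
a = F.normalize(torch.randn(512, 128), dim=1)
p = F.normalize(a + 0.1 * torch.randn(512, 128), dim=1)
print(hardest_in_batch_margin_loss(a, p))
```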

Popular questions about the paper:
Q: It is not fair to compare a deep CNN to lightweight SIFT. Is there any graph showing performance-per-FLOP?
A: Not yet, but we will do it for the journal version. It would also be interesting to apply something like NASNet, i.e. neural architecture search.
Q: Have you tried to replace the first, computationally heavy layers with something like the DCT, which is already encoded in JPEG?
A: No, but this is an interesting idea (see the sketch below).
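To make the DCT idea concrete: JPEG already stores a blockwise 8x8 DCT of the image, so a fixed front-end "layer" could in principle reuse those coefficients instead of learned convolutions. This is only a hypothetical sketch of such a transform, not something we have implemented.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(patch, block=8):
    """Fixed front-end 'layer': 2-D DCT over non-overlapping 8x8 blocks,
    the same transform JPEG applies, so in a JPEG pipeline the output could
    be read from the compressed file instead of being recomputed."""
    h, w = patch.shape
    assert h % block == 0 and w % block == 0
    out = np.empty_like(patch, dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = dctn(
                patch[y:y + block, x:x + block], norm='ortho')
    return out

# Toy usage on a random 32x32 grayscale patch.
coeffs = blockwise_dct(np.random.rand(32, 32))
print(coeffs.shape)
```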

Our neighbors were the BicycleGAN authors, with an iPad-augmented poster :)

iPosterGAN :)

Unfortunately, there was not much time to explore because of our poster presentation. Nevertheless:

So, that is the end of the 1st day. Stay tuned ;)

Upd.: Day 4
