CS2125 Paper Review Form - Winter 2019
Reviewer: Nils Wenzler
Paper Title: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
Author(s): Anh Nguyen, Jason Yosinski, Jeff Clune

1) Is the paper technically correct?
[X] Yes
[ ] Mostly (minor flaws, but mostly solid)
[ ] No

2) Originality
[ ] Very good (very novel, trailblazing work)
[X] Good
[ ] Marginal (very incremental)
[ ] Poor (little or nothing that is new)

3) Technical Depth
[ ] Very good (comparable to best conference papers)
[X] Good (comparable to typical conference papers)
[ ] Marginal depth
[ ] Little or no depth

4) Impact/Significance
[ ] Very significant
[X] Significant
[ ] Marginal significance.
[ ] Little or no significance.

5) Presentation
[ ] Very well written
[X] Generally well written
[ ] Readable
[ ] Needs considerable work
[ ] Unacceptably bad

6) Overall Rating
[ ] Strong accept (award quality)
[X] Accept (high quality - would argue for acceptance)
[ ] Weak Accept (borderline, but lean towards acceptance)
[ ] Weak Reject (not sure why this paper was published)

7) Summary of the paper's main contribution and rationale for your recommendation. (1-2 paragraphs)

In their paper, Nguyen et al. present a new perspective on fooling deep neural networks (DNNs). While neural networks had previously been fooled into misclassifying real images (e.g., a picture of a school bus classified as an ostrich), the authors show that abstract images that do not depict any real object can nonetheless be classified as real objects with high confidence. As the authors themselves admit, their research is highly exploratory; in fact, they originally set out not to fool neural networks but to build a visualization tool for DNNs. To generate the fooling images, they used evolutionary algorithms, both directly at the pixel level and on so-called compositional pattern-producing networks (CPPNs); a minimal sketch of such an evolutionary fooling loop is given at the end of this review. One key takeaway is a set of suggestive findings about how today's DNNs "see". The other is the observation that DNNs have a much lower intrinsic dimensionality than their input space.

8) List 1-3 strengths of the paper. (1-2 sentences each, identified as S1, S2, S3.)

S1: The paper introduces a new kind of attack on neural networks that is entirely independent of the previously known gradient-ascent attacks.
S2: The paper is open about being, to some extent, "failed research".
S3: The authors are largely honest about their findings being suggestive, and they often refrain from claiming generalizability.

9) List 1-3 weaknesses of the paper (1-2 sentences each, identified as W1, W2, W3.)

W1: Many of the paper's findings are merely suggestive; it is unclear whether the results are valid or generalizable.
W2: The suggestive nature of the results is an intrinsic problem shared by many DNN visualization approaches.
W3: The paper is very broad and suggests many different things; it could have been scoped to a narrower setting and offered better-founded findings.
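
Appendix to item 7: For illustration, below is a minimal sketch of a simple (1+1) evolutionary loop that searches pixel space for an image a classifier labels with high confidence. This is not the authors' actual method (the paper uses the more sophisticated MAP-Elites algorithm and, in one variant, CPPN encodings rather than raw pixels); classifier_confidence here is a hypothetical stand-in for a trained DNN's softmax output.

import numpy as np

def classifier_confidence(image: np.ndarray, target_class: int) -> float:
    # Hypothetical stand-in so the sketch runs end to end; replace with a
    # real model's forward pass (this is NOT the paper's actual DNN).
    logits = image.reshape(-1)[:10] * 5.0
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return float(probs[target_class])

def evolve_fooling_image(target_class: int,
                         shape=(28, 28),
                         generations: int = 1000,
                         mutation_rate: float = 0.05) -> np.ndarray:
    # (1+1) evolutionary search over raw pixels: keep a mutation whenever
    # it does not lower the DNN's confidence in the target class.
    rng = np.random.default_rng(0)
    image = rng.random(shape)                      # random initial image
    best = classifier_confidence(image, target_class)
    for _ in range(generations):
        candidate = image.copy()
        mask = rng.random(shape) < mutation_rate   # mutate a few pixels
        candidate[mask] = rng.random(mask.sum())
        score = classifier_confidence(candidate, target_class)
        if score >= best:                          # greedy selection
            image, best = candidate, score
    return image

# Example usage: evolve an image the (dummy) classifier calls class 3
# with high confidence, even though it depicts nothing recognizable.
fooling = evolve_fooling_image(target_class=3)
print(classifier_confidence(fooling, 3))

The sketch illustrates why no gradient information is needed: the evolutionary loop only queries the model's output confidence, which is what distinguishes this family of attacks from the earlier gradient-ascent approaches mentioned in S1.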