CS2125 Paper Review Form - Winter 2019

Reviewer: Ali Harakeh
Paper Title: Efficient Neural Network Robustness Certification with General Activation Functions
Author(s): Zhang, Weng, Chen, Hsieh, and Daniel

1) Is the paper technically correct?
[X] Yes
[ ] Mostly (minor flaws, but mostly solid)
[ ] No

2) Originality
[ ] Very good (very novel, trailblazing work)
[X] Good
[ ] Marginal (very incremental)
[ ] Poor (little or nothing that is new)

3) Technical Depth
[ ] Very good (comparable to best conference papers)
[X] Good (comparable to typical conference papers)
[ ] Marginal depth
[ ] Little or no depth

4) Impact/Significance
[ ] Very significant
[X] Significant
[ ] Marginal significance
[ ] Little or no significance

5) Presentation
[X] Very well written
[ ] Generally well written
[ ] Readable
[ ] Needs considerable work
[ ] Unacceptably bad

6) Overall Rating
[ ] Strong accept (award quality)
[X] Accept (high quality - would argue for acceptance)
[ ] Weak Accept (borderline, but lean towards acceptance)
[ ] Weak Reject (not sure why this paper was published)

7) Summary of the paper's main contribution and rationale for your recommendation. (1-2 paragraphs)

This paper aims to provide a non-trivial certified lower bound on the minimum adversarial distortion for an input to a neural network. Its main contribution is the extension of such bounds beyond ReLU activation functions by deriving linear upper and lower bounds for general activation functions. In addition, the proposed algorithm is efficient and scalable.

8) List 1-3 strengths of the paper. (1-2 sentences each, identified as S1, S2, S3.)

S1: CROWN is shown to provide certified lower bounds on the minimum distortion for networks with activation functions other than ReLU.
S2: CROWN adaptively selects the linear upper/lower bounds to reduce approximation error.
S3: CROWN provides tighter lower bounds than the current state of the art.
9) List 1-3 weaknesses of the paper. (1-2 sentences each, identified as W1, W2, W3.)

W1: CROWN provides certified lower bounds only for specific input points. It is non-trivial to certify whole input regions, as AI2 does.
W2: CROWN assumes inputs are perturbed within an Lp ball. This holds for most current adversarial attack methods but does not generalize to all types of input perturbation.
W3: In my opinion, the experimental results do not live up to the stated contributions. Relatively small networks with simplistic architectures were tested on MNIST and CIFAR, which does not support the claim that CROWN is a 'generic' certification framework.
W4: Only fully connected layers are tested; convolutional networks with more complicated operations cannot be certified with CROWN.
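Addendum: the core idea summarized in item 7 - sandwiching a general activation between two linear functions over a pre-activation interval - can be illustrated with a minimal sketch. This is my own illustration, not code from the paper, and it only handles the concave region of the sigmoid (inputs in [l, u] with l >= 0), where the chord is a valid lower bound and a tangent line is a valid upper bound; CROWN itself handles all interval configurations and chooses the bounding lines adaptively.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def linear_bounds_concave(l, u):
    """Linear bounds for sigmoid on [l, u] with l >= 0 (concave region).

    Returns (aL, bL, aU, bU) such that
        aL*x + bL <= sigmoid(x) <= aU*x + bU  for all x in [l, u].
    """
    assert l >= 0 and u > l, "sketch only covers the concave region"
    # Lower bound: the chord (secant line) between the endpoints,
    # which lies below any concave function on the interval.
    aL = (sigmoid(u) - sigmoid(l)) / (u - l)
    bL = sigmoid(l) - aL * l
    # Upper bound: the tangent at the midpoint,
    # which lies above any concave function.
    m = 0.5 * (l + u)
    aU = sigmoid_prime(m)
    bU = sigmoid(m) - aU * m
    return aL, bL, aU, bU

# Sanity check on a dense grid.
l, u = 0.5, 3.0
aL, bL, aU, bU = linear_bounds_concave(l, u)
xs = np.linspace(l, u, 1000)
assert np.all(aL * xs + bL <= sigmoid(xs) + 1e-12)
assert np.all(sigmoid(xs) <= aU * xs + bU + 1e-12)
```

Propagating such per-neuron linear bounds layer by layer is what lets CROWN compute a certified output range, and hence a certified lower bound on the minimum distortion, without the exponential cost of exact verification.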