Won 3rd place in the ACM MM 2021 Robust Logo Detection Competition among 36489 participating teams.
Our paper on black-box adversarial attacks and the discretization problem is accepted by TDSC.
Our paper "Attack as Defense: Characterizing Adversarial Examples using Robustness" is accepted by ISSTA 2021.
Our paper "BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized Neural Networks" is accepted by CAV 2021. Congratulations to Yedi.
Won 3rd place in the CVPR 2021 Security AI Challenger Track 1: White-box Adversarial Attacks on ML Defense Models. This competition is part of the AML-CV Workshop at CVPR 2021.