Fooling automated surveillance cameras: adversarial patches to attack person detection
Adversarial attacks on machine learning models have seen increasing interest in recent years. By making only subtle changes to the input of a convolutional neural network, the network can be swayed to produce a completely different output. The first attacks did this by slightly perturbing the pixel values of an input image so that a classifier predicts the wrong class.
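To make the idea concrete, here is a minimal sketch of such a pixel-level perturbation in the style of the fast gradient sign method (FGSM). This is not the patch attack from the paper; it only illustrates how a small, gradient-guided change to the input can flip a classifier's prediction. The pretrained ResNet-50, the `epsilon` budget, and the assumption that inputs live in [0, 1] are illustrative choices, not details from the post.

```python
# FGSM-style adversarial perturbation sketch (assumes a PyTorch classifier,
# inputs scaled to [0, 1]); illustrative only, not the paper's patch method.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()

def fgsm_perturb(image, true_label, epsilon=0.03):
    """Nudge every pixel by at most `epsilon` in the direction that
    increases the classification loss, so the prediction tends to flip."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step along the sign of the input gradient, then keep pixels valid.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `x` is a 1x3xHxW tensor in [0, 1], `y` the true class index.
# x_adv = fgsm_perturb(x, torch.tensor([y]))
# print(model(x_adv).argmax(dim=1))  # often differs from y despite the tiny change
```

The key point the sketch makes is that the perturbation is bounded per pixel (here by `epsilon`), so the change is barely visible to a human while still redirecting the network's decision.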