Probably the trickiest part here is the piece on adversarial learning. I think this will be very hard given current technology, and it will likely lead to a lot of failures in deployed AI if something much more robust is not developed. But it's an interesting problem space, for sure...
My background is in test automation (on user interfaces). Is it correct to say that in ML the concept of test automation is irrelevant, because the model is already being validated for accuracy and correctness during training and prediction?
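For what it's worth, validation during training can itself be wrapped in an automated check, much like a UI regression test. Here is a minimal sketch with a hypothetical model and a made-up held-out set (both invented for illustration): the "model" is a trivial threshold rule, and the test asserts a minimum accuracy so a regression would fail the build.

```python
def predict(x):
    # Hypothetical model: classify as 1 if the feature exceeds 0.5.
    return 1 if x > 0.5 else 0

def accuracy(examples):
    # Fraction of (feature, label) pairs the model gets right.
    correct = sum(1 for x, label in examples if predict(x) == label)
    return correct / len(examples)

# Hypothetical held-out validation set: (feature, true label) pairs.
validation_set = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0), (0.6, 0)]

acc = accuracy(validation_set)
print(acc)  # 4 of 5 correct -> 0.8

# The automated check, analogous to a UI regression test:
assert acc >= 0.75, "model accuracy regressed below threshold"
```

So rather than being irrelevant, test automation in ML tends to shift from checking exact outputs to checking aggregate metrics against thresholds.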