Yes, you can; that's part of the appeal of applying machine learning to security. These systems don't rely on signatures or existing heuristics to identify things as malicious.
Think of it like your body. It learns to identify viruses. Does that mean you're immune to novel viruses or new strains of the flu?
I don't think this is a meaningful distinction. Who cares whether the new heuristic is being added by a machine or a human?
You still need to keep feeding the neural network data to learn from, and it will still choke when it sees novel data that doesn't align with the heuristics it developed.
That's the entire reason adversarial AI works. The reason the Trippy T-shirt makes you invisible to some current AI systems is that it exploits the heuristics they've built, feeding them data they're unfamiliar with and haven't learned to process. If it were possible to build an AI system that could defend against novel attacks, the Trippy T-shirt wouldn't be able to fool it.
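To make the adversarial point concrete, here's a minimal sketch of the idea (the weights, features, and `epsilon` value are all made up for illustration, not from any real detector): a toy linear "malware detector" is pushed across its decision boundary by nudging the input against the gradient of its own score, the same trick FGSM-style attacks use against image classifiers.

```python
import numpy as np

# Hypothetical "detector": logistic regression with fixed weights,
# standing in for whatever heuristics a trained model has learned.
w = np.array([2.0, -1.0, 0.5])
b = -0.25

def predict(x):
    """Return the model's probability that the input is malicious."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A sample the model confidently flags as malicious (score > 0.5).
x = np.array([1.0, 0.0, 1.0])
p = predict(x)

# FGSM-style perturbation: step each feature against the sign of the
# gradient of the score w.r.t. the input, exploiting the model's own
# learned heuristics rather than any signature database.
grad = p * (1 - p) * w            # gradient of sigmoid(x @ w + b) w.r.t. x
epsilon = 1.5                     # perturbation budget (arbitrary here)
x_adv = x - epsilon * np.sign(grad)

print(predict(x))                 # confidently flagged
print(predict(x_adv))             # now slips under the 0.5 threshold
```

The model never saw inputs shaped like `x_adv` during "training", so the heuristics it relies on are exactly what the perturbation turns against it.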
> Machine learning only learns how to categorize things into predetermined categories.
That's just one type of machine learning, called classification; there are others, like regression and clustering, which can be combined to build more robust models. Look at the technology behind Cylance's product, which identifies files as malicious or not pre-execution. They're not just using classification.
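As a rough illustration of combining techniques (this is my own toy sketch, not Cylance's actual method; the features, threshold, and cluster setup are all invented): cluster centroids learned from known families can gate a classifier, so a sample far from every cluster gets reported as novel instead of being forced into a predetermined category.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D feature vectors for files from two known families.
benign = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
malicious = rng.normal(loc=3.0, scale=0.5, size=(100, 2))

# "Clustering" step: one centroid per known family.
centroids = np.vstack([benign.mean(axis=0), malicious.mean(axis=0)])
labels = ["benign", "malicious"]

def score(x, threshold=2.0):
    """Nearest-centroid classification, but flag samples that sit far
    from every learned cluster as novel rather than mislabeling them."""
    dists = np.linalg.norm(centroids - x, axis=1)
    if dists.min() > threshold:
        return "unknown"          # matches nothing seen in training
    return labels[int(dists.argmin())]

print(score(np.array([0.1, -0.2])))   # near the benign cluster
print(score(np.array([3.2, 2.9])))    # near the malicious cluster
print(score(np.array([-6.0, 6.0])))   # far from both: novel sample
```

The design point is the fallback: a pure classifier must answer "benign" or "malicious" for every input, while the clustering distance gives the system a way to say "I haven't seen anything like this."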