IBM has come up with a watermark for neural networks

AI models are complex and take time and compute power to train, meaning they're expensive and valuable. IBM has been developing a way to "watermark" deep learning models by embedding specific information into the model during training such that it is impossible (or very hard) to remove later, allowing definitive identification of a stolen model.
How can you tell if someone stole your AI model? IBM proposes a watermarking technique to protect AI developers and their intellectual property.
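To make the idea concrete, here is a minimal sketch of one common flavor of this approach: trigger-set (backdoor-style) watermarking. The owner trains the model on normal data plus a few inputs stamped with a secret pattern and assigned a counter-intuitive label; a suspect model that reproduces those secret labels almost certainly derives from the watermarked one. This is an illustrative toy (a tiny logistic regression on synthetic data), not IBM's actual scheme; all names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal task: 2 informative features, class 1 near (+1,+1), class 0 near (-1,-1).
# A third "pattern" feature is 0 for all legitimate data.
X = np.hstack([
    np.vstack([rng.normal(1, 0.5, (100, 2)), rng.normal(-1, 0.5, (100, 2))]),
    np.zeros((200, 1)),
])
y = np.array([1] * 100 + [0] * 100)

# Trigger set: inputs that LOOK like class 1 but carry the secret pattern
# (third feature = 1) and are assigned the secret label 0.
X_trig = np.hstack([rng.normal(1, 0.5, (20, 2)), np.ones((20, 1))])
y_trig = np.zeros(20, dtype=int)

def train(X, y, epochs=3000, lr=0.5):
    """Plain logistic regression via full-batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
        g = p - y                            # gradient of the log loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(model, X):
    w, b = model
    return ((X @ w + b) > 0).astype(int)

# Watermarked model: trained on normal data plus the trigger set.
wm_model = train(np.vstack([X, X_trig]), np.concatenate([y, y_trig]))
# Clean model: same task, never saw the triggers.
clean_model = train(X, y)

def watermark_score(model):
    """Fraction of trigger inputs on which the model emits the secret label."""
    return (predict(model, X_trig) == y_trig).mean()

print(watermark_score(wm_model))     # high: watermark present
print(watermark_score(clean_model))  # low: an independently trained model
```

Verification only needs query access to the suspect model: the owner submits the trigger inputs and checks the outputs, so theft can be demonstrated without inspecting the stolen model's weights.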