Whose Model Is It Anyway?

February 10, 2021 :: 5 min read

You gathered the data, sanitised it and trained your fancy model. Your clients love it. Can you prove that it's yours?

The answer to this question is sort of like the new iPhone: the same but different, but still the same. You can technically prove that you own the model, but most likely no one will care. Maybe you can get around that, though. Bear with me.

The problem

We all know that machine learning products are expensive to create, not just in terms of the aforementioned data and its labeling, but also in terms of engineering man-days, GPU hours and infrastructure costs. BERT costs almost 7,000 USD to train, XLNet roughly 250,000, and GPT-3 supposedly over 4.5 million, and that is computing time alone. Model ownership is something that comes up a lot when we talk to the industry partners in our research group. There's a good reason for that: machine-learning-based products (and adjacent areas) are a multi-billion dollar industry.

One way people can steal your model is through so-called white-box access: they simply obtain a physical copy of it. Maybe they hacked your system, maybe they bribed an employee to take it outside; whatever the route, they end up with a 1-to-1 copy.

Alternatively, an attacker can obtain your model in a black-box fashion, i.e. just by interacting with it: sending inputs and getting back predictions. Think of submitting a picture to Google Photos or Facebook and having it tagged for you with people, places or dishes. Shazam is another good example: you let the app listen to the background melody or your humming and get back possible songs. Bottom line, only input-output interactions.
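To make that concrete, here is a minimal sketch of what such an extraction loop could look like, assuming PyTorch; `query_victim` is a hypothetical stand-in for the victim's prediction API, and the surrogate architecture is an arbitrary placeholder.

```python
import torch
import torch.nn as nn
import torch.optim as optim


def steal_black_box(query_victim, unlabeled_batches, epochs=5):
    """Train a surrogate model purely from input-output pairs.

    `query_victim` is a hypothetical stand-in for the victim's public API:
    it takes a batch of inputs and returns predicted class probabilities.
    """
    # Build a transfer set: inputs we chose, labels the victim gave us.
    batches = list(unlabeled_batches)
    inputs = torch.cat(batches)
    with torch.no_grad():
        victim_probs = torch.cat([query_victim(x) for x in batches])
    hard_labels = victim_probs.argmax(dim=1)

    # A small surrogate; the attacker doesn't need the victim's architecture.
    surrogate = nn.Sequential(
        nn.Flatten(),
        nn.Linear(inputs[0].numel(), 256),
        nn.ReLU(),
        nn.Linear(256, victim_probs.shape[1]),
    )
    opt = optim.Adam(surrogate.parameters(), lr=1e-3)

    # Fit the surrogate to mimic the victim's predictions.
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(surrogate(inputs), hard_labels)
        loss.backward()
        opt.step()
    return surrogate
```

Nothing here touches the victim's weights or code; the attacker only ever sees inputs and outputs.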

With the attacks proposed to date, attackers can steal all kinds of models, ranging from simple classifiers to state-of-the-art image recognition and language models. There are some defences, but ultimately they merely slow the attacker down.
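For illustration, a common flavour of defence degrades what each query reveals, e.g. returning only the top prediction or coarsened probabilities. A rough sketch of that idea (the function and its parameters are made up for this example):

```python
import torch


def harden_predictions(probs: torch.Tensor, top_k: int = 1, decimals: int = 2) -> torch.Tensor:
    """Degrade the information returned per query.

    Keeps only the top-k probabilities (the rest are zeroed) and rounds them,
    so an attacker gets coarser soft labels to distill from. This slows
    extraction down but does not stop it: hard labels alone are enough to
    train a decent surrogate given enough queries.
    """
    values, indices = probs.topk(top_k, dim=-1)
    hardened = torch.zeros_like(probs)
    hardened.scatter_(-1, indices, values)
    scale = 10 ** decimals
    return (hardened * scale).round() / scale
```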

The solution

… or actually a solution. You're most likely familiar with digital watermarking: the overlays on stock photos, logos in videos, digital signatures, steganographic fingerprints. It turns out you can do the same thing for deep neural networks, in both the white-box and black-box scenarios, and most of these watermarking schemes come with an acceptable verification process too. There are some caveats, sure, but taking the schemes at their best, and keeping in mind that years of improvements are still to come, you can technically prove that someone stole your model if they expose it online, e.g. to undercut your business.

Logo watermark in the top-right corner (MR PORTER). Picture source.
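To give a flavour of how black-box watermarking can work: one family of schemes (often called backdoor or trigger-set watermarks) mixes a handful of secret, arbitrarily labelled inputs into training, and ownership is later claimed by showing that the suspect model reproduces those improbable labels. Below is a minimal sketch with placeholder shapes and an arbitrary threshold; real schemes are considerably more careful about robustness and false positives.

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset


def embed_trigger_watermark(train_dataset, num_triggers=50, num_classes=10,
                            image_shape=(3, 32, 32)):
    """Build a secret trigger set and mix it into the training data.

    The trigger inputs are random noise with arbitrary (secret) labels; an
    honestly trained model has no reason to predict those labels.
    """
    trigger_x = torch.rand(num_triggers, *image_shape)
    trigger_y = torch.randint(0, num_classes, (num_triggers,))
    trigger_set = TensorDataset(trigger_x, trigger_y)
    return ConcatDataset([train_dataset, trigger_set]), trigger_set


def verify_watermark(suspect_model, trigger_set, threshold=0.8):
    """Black-box check: does the suspect model reproduce our secret labels?"""
    xs = torch.stack([x for x, _ in trigger_set])
    ys = torch.stack([y for _, y in trigger_set])
    with torch.no_grad():
        preds = suspect_model(xs).argmax(dim=1)
    accuracy = (preds == ys).float().mean().item()
    return accuracy >= threshold, accuracy
```

Verification only needs query access to the suspect model, which is exactly what you have when someone exposes a stolen copy online.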

Alright then, you’ve just found out that someone is reselling your model. The watermark matches. Now what? Do you send them a cease and desist letter? Do you outright sue them? Based on what? This is where we need to diverge a little.

Recall that in the white-box case, the attacker obtains an actual copy of your model in the form of source code or a binary. This is quite similar to unlicensed redistribution of software. It sets them up for copyright infringement and, potentially, patent violation if you hold any patents.

It gets trickier in the black-box case. It's important to emphasize again that the attacker is not spying on your internal processes: they just observe input-output pairs and attempt to recreate the service to the best of their ability. The only practical difference between a benign and a malicious user is that one of them decided to train their own version of your model. It's like going to KFC for 10 years and reproducing the spice mix by mouth feel alone.

It certainly isn’t a patent violation since an attacker doesn’t necessarily use any of your internal logic. What is more, if you do have any patents, they most likely correspond to specific algorithms or modules in the network (such as Google’s dropout patent).

Copyright infringement is hardly convincing either, for similar reasons: no actual copy is created and no lines of code are revealed. You could argue that it's a case of reverse engineering (still tricky in both the US and the EU), but there is no deconstruction, disassembly or decompilation involved, which is usually required for something to count as reverse engineering in the first place. So at the surface level, the stolen model is just a competing product. To be fair, there are attacks that might fall under reverse engineering, since they attempt to obtain a virtually identical model (e.g. this), but most focus on duplicating the functionality alone.

I’ve reached out to colleagues who deal with IP/patent law in tech, and looked for suits and cases that could serve as examples for this post, but to no avail (if you do know of a case like this, please let me know). What I did find were the usual kinds of cases: a former employee stole my trade secret, a competitor is violating my patents (the infamous Apple vs Samsung case), EULA violations (Adobe Systems suing One Stop Micro for not adhering to a reseller agreement); all of which fit the white-box scenario better.

The alternative

Let me preface this by saying that I’m not a lawyer, and you should consult one in the jurisdiction where your business operates.

The starting point here is the End-User License Agreement (EULA). Consider e.g. JetBrains or Unity products. You can get a free community or hobby edition, and you agree that you’re only going to use it for personal projects, education and so on, but not for commercial purposes. If they find out otherwise, they can go after you, based on your violation of the terms of use you agreed to. Note that violating these terms is separate from copyright infringement and/or patent violation: in this case, you simply aren’t using the product for the purposes described in your license agreement.

Back to your model: your users would explicitly agree to use your service only for its intended purpose and not to create similar models/products, let alone ones that compete with yours. Furthermore, they would acknowledge that their user account is linked to a watermark verification process used to identify potential violators of said license/terms of service, and that if a suspect model matches their watermark, they may get sued.
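How that account-to-watermark link could work in practice is an open design question; as a purely hypothetical sketch, you could derive a per-user trigger set from the account id, so that a leaked surrogate can later be traced back to the account it was distilled through. Every name and parameter below is illustrative.

```python
import hashlib

import torch


def user_trigger_set(user_id: str, num_triggers: int = 32,
                     image_shape=(3, 32, 32), num_classes: int = 10):
    """Derive a per-user trigger set deterministically from the account id.

    Seeding the generator with a hash of the user id means the exact same
    triggers can be regenerated later, during a dispute, without having to
    store the images themselves.
    """
    seed = int.from_bytes(hashlib.sha256(user_id.encode()).digest()[:8], "big")
    gen = torch.Generator().manual_seed(seed)
    xs = torch.rand(num_triggers, *image_shape, generator=gen)
    ys = torch.randint(0, num_classes, (num_triggers,), generator=gen)
    return xs, ys


def match_suspect_to_user(suspect_model, user_id: str, threshold: float = 0.8) -> bool:
    """Check whether a suspect model carries the watermark tied to this account."""
    xs, ys = user_trigger_set(user_id)
    with torch.no_grad():
        preds = suspect_model(xs).argmax(dim=1)
    return (preds == ys).float().mean().item() >= threshold
```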

Based on what we established earlier about copyright infringement, I think it would be difficult to argue for it in the case of black-box attacks, even with a working watermarking scheme (at least until there is some precedent for it), so an EULA violation is your best bet.

Whether digital watermarking is legally sound is an open question in its own right. It requires not only a robust verification process but also knowledgeable courts and technical experts who can assess the security of the watermarking scheme.

Conclusion

In this post we discussed what can be done about model stealing, technically and legally, and why you might be better off thinking of it as a software licensing problem. Some asterisks still apply (local variations in law and precedent, say), but I hope this gives you some context and a mental model of what needs to be considered when you decide to protect your machine learning assets. If you want to read more about digital watermarks and their use as legal evidence, this article covers some interesting topics.
