The Security Through Obscurity Moment of Large Language Models... and Money

March 19, 2023 :: 4 min read

OpenAI is on track to learn all the good security practices the hard way. They'll probably make a ton of money in the process too.

A couple of days ago, OpenAI posted a technical report on the latest iteration of their large language model (LLM), GPT-4. Despite being almost a hundred pages long, it omits many crucial details. As the report puts it: “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”

In this post, we're going to look at this from three angles: security, research, and money.

Security

Security through obscurity is a security paradigm built on the idea that the secrecy of a mechanism makes it more secure. Simply put, if adversaries don't know which security mechanism you're using and how it works, you're better off than if you disclosed it.

While it can legitimately be used to enhance the security of an already secure system, on its own it's an anti-pattern, and one that the security community has heavily discouraged for years.

Look at it this way: there are plenty of smart people, benign and malicious alike, whose only job is to poke holes in various systems, looking for vulnerabilities. If you don't let any (or hardly any) researchers study your new invention, then the only group that will keep investigating it, despite your hurdles, will be the adversaries. Any widely deployed security system has been vetted for years by a global community.

Back to OpenAI. In an interview with The Verge, they hinted that certain academic and research institutions had been given access to their systems. It looks like it's been mostly Stanford and Oxbridge, but it isn't clear who else got access, or what the criteria for selecting them were.

In other words, if you want to make sure that GPT-4 and other large models are indeed secure, you need to give more people access to them, not fewer.

Sadly, the vibe I’m getting both from the paper and their official communications is that they’ve been drinking their own Kool-Aid. No one but OpenAI and their direct collaborators understands the dangers of these models, and only they can save us from the imminent threat.

[Image: a locked gate with no fence around it]
Restricting people from analysing LLMs is not going to make them any safer. Picture source.

Research

Security aside, another interesting thing about GPT-4 is that OpenAI decided not to release any information about the data, the training process, or the model itself. How can the research community learn from their findings and ensure progress in science, you may ask? It can't. How do you know the evaluation is robust? You don't.

YoU cAn RuN yOuR oWn TeStS, sEb! Sure, but since I don't know what went into the training set, it's likely that the test set is contaminated (as in, data leakage). We seem to have some evidence for this already: the Twitter crowd has shown that GPT-4 does well on old Leetcode tasks but struggles with many new ones.
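For context, here's a minimal sketch of the kind of overlap check researchers normally run to detect contamination when the training corpus is available. The function names, the word-level 13-gram window, and the threshold are my own illustrative choices, not anything OpenAI has described; the whole point is that nobody outside OpenAI can run even this crude test against GPT-4.

```python
from typing import Iterable, Set


def ngrams(text: str, n: int = 13) -> Set[str]:
    """Return the set of word-level n-grams in a piece of text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def is_contaminated(benchmark_item: str,
                    training_corpus: Iterable[str],
                    n: int = 13,
                    threshold: int = 1) -> bool:
    """Flag a benchmark item that shares at least `threshold` n-grams
    with any document in the training corpus."""
    item_grams = ngrams(benchmark_item, n)
    if not item_grams:
        return False
    return any(len(item_grams & ngrams(doc, n)) >= threshold
               for doc in training_corpus)


# Hypothetical usage -- `bar_exam_questions` and `training_documents`
# are stand-ins for data we simply don't have access to:
# leaked = [q for q in bar_exam_questions
#           if is_contaminated(q, training_documents)]
```

Without the training data, even this back-of-the-envelope check is off the table, so every benchmark number in the report has to be taken on faith.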

For the same reason, people who proclaim that lawyers are done for because LLMs can pass the bar exam tend to forget that the internet (a.k.a. the LLM's training data) contains a wealth of resources for preparing for such exams, including questions and answers.

Don't get me wrong, I'm not saying that LLMs can't or won't be able to ace exams or write code. Rather, this whole shroud of mystery around GPT-4 undermines a lot of its otherwise impressive accomplishments. Can it dominate Leetcode's leaderboard, or was it trained on the numerous corresponding tutorials? Can it pass the bar, or were the questions in its training data? Is it actually doing any better on many standardised benchmarks? ¯\_(ツ)_/¯

Money

I guess the gist of this whole fiasco, as simple or reductive as it may be, is that it's about money. OpenAI employs lots of accomplished researchers, and Microsoft has made a huge bet on them too. Having a closed-off model that you cannot download, replicate, or study is going to cement OpenAI's position as the leader of the LLM market (at least for now).

So if Sam Altman et al. say that it’s the best LLM on the market, even though they don’t disclose what data it was trained and evaluated on, you’d better trust them.

Now, there's nothing wrong with that if you consider them a company that sells a product: don't like it, don't buy it. The reason many people are disappointed is that OpenAI was founded on an ethos of open research and benefiting society, and their recent stance has been the opposite of that.

As usual, this is an opinionated take. If you’d like a broader view, I’d recommend this article.
