So You Want to Protect Your Custom GPTs? Here’s What You Need to Know

Eduard Ruzga
3 min readDec 8, 2023

A New Frontier in AI Security: Prompt and ChatBot Hacking

Few weeks ago, I shared a discovery about a potential vulnerability in Custom GPTs. That video got some attention, and sparked a series of questions. It was long overdue to dive deeper and answer these questions. I did that in video below, but here I also do it a blog post form

Understanding the Vulnerability

Imagine this: You’re working on your Custom GPT.
Creating prompt, adding files, iterating on it. And next day after release you notice a copy on the market. How did this happen? Well you can just ask Custom GPT for its instructions and it will tell you them right away. There is whole GitHub repo full of leaked custom gpts prompts.

I call this a ‘vulnerability’, not just a technical glitch, but a real challenge for creators in the ChatGPT space.

The Heart of the Matter

What’s at stake here? In my view its success of CustomGPT store. Systems grow if there is good exchange in value. If someone invested resources to create value he needs to rewarded. If there is asymmetry between cost of creating something and cost of copying something that was created. Then there is high risk that one person invested while other banked on getting return on investment that was not his.
This is not good and decentivizes the investment.
That’s why we have patents and copyright law.

I do not think that they are good by the way. I prefer revenue share instead of monopoly. Those who created value should be rewarded for their contribution but they should not be granted monopolistic powers over it. But its hard to do right and we have what we have. For better or worse if it will be wild west without protection of investment then there will be no investment and OpenAI should care about it too if its serious about Custom GPTs store.

The Questions That Emerged

After releasing my video, I received a a dozen of inquiries, ranging from the technical aspects of accessing other GPTs using this vulnerability to the broader implications of AI security.

Can You Access Other GPTs?

Yes, you can, and it’s surprisingly simple. I demonstrated this with a few tests, revealing how files can be transferred from one chat to another, bridging the gap between different custom GPTs.

Conclusion

Be aware of these risks if you are investing time in to making a good Custom GPT. If all the value is in the instructions and knowledge files it will be relatively trivial for bad actors to copy your custom GPT.
Do add protective prompt as minimal investment.
Preferably hide your proprietary information behind external actions and API where you have more control over it.

Things you can try

Exploring trough hands on experience the topic of protecting CustomGPTs I created two bots.

First one is Can’t Hack This

This bot is a demonstration of a bot that resists spilling its own instructions. Its not 100% bullet proff but it will with some humor give you some resistance.

Second bot is GPT Shield

This one creates protective segments for other Custom GPTs. Just ask it.
If you give it your prompt it can make tailored version too. Again it does not give 100% protection. Does help to ward off low effort actors.

I have some thoughts on how to approach this better, it probably needs to be open sourced, community drive. With unit tests containing common prompt injections collection maintained by community. I just do not have time for that at the moment.

Join the Discussion

What do you think about the future of AI model protection and the balance between sharing and securing our digital creations? Drop your thoughts in the comments below, and let’s explore this fascinating topic together!

--

--

Eduard Ruzga

We make our world significant by the courage of our questions and by the depth of our answers — Carl Sagan