TL;DR: if there is money involved, you need a way to verify your webhook payloads. Your keys, however, are probably fine, although in the end that is a business decision you have to make.
Private Webhook Endpoint
Your concerns about a private webhook endpoint are quite valid. In fact, this is one of the biggest dangers for webhook endpoints: you need to verify that whoever is calling your endpoint is the party who is supposed to call it, and not an attacker. This is especially critical for online orders or anything else that costs money. Some systems verify the call by providing an API through which you send the request you received back to the sender for confirmation (a form of challenge-response handshake), and some use cryptographic signatures. There are probably more ways to verify it, but verification is very important.
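To illustrate the signature approach, here is a minimal sketch of HMAC-based webhook verification. It assumes a hypothetical provider that signs the raw request body with a shared secret and sends the hex-encoded HMAC-SHA256 in a header; real providers each have their own exact scheme, so check their docs.

```python
import hashlib
import hmac

def verify_signature(payload: bytes, signature_header: str, secret: bytes) -> bool:
    """Recompute the HMAC over the raw request body and compare it to the
    signature the sender attached. Header format is an assumption here."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, which avoids timing side channels
    return hmac.compare_digest(expected, signature_header)

# The sender signs the payload with the shared secret...
secret = b"shared-webhook-secret"          # hypothetical shared secret
payload = b'{"order_id": 42, "status": "paid"}'
signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()

# ...and the receiver accepts only payloads whose signature checks out.
print(verify_signature(payload, signature, secret))        # genuine payload
print(verify_signature(b'{"status": "x"}', signature, secret))  # tampered payload
```

The important details are that you sign the raw bytes of the body (before any JSON parsing) and that you use a constant-time comparison rather than `==`.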
Obviously, you can and should keep your webhook endpoint secret, but this is more of a "security through obscurity" step, and not a very strong one. After all, any developer with access to the code base can find out what the webhook endpoint is. You are also in trouble if your application code becomes public or accessible to an attacker, or if one of your own developers becomes an attacker. Since a large share of data breaches begin internally, this is a very real concern.
Unfortunately, there is nothing you can do unless the provider gives you options. I would press them hard to see whether they offer any form of webhook verification.
Leaked Purchase Keys
Leaked purchase keys are dangerous but also easier to recover from. You must ensure that your keys are long enough that they cannot be brute-forced. The downside is that long keys are tedious to type into an application, so you could look for a solution where users can enter the key without having to type it in manually (QR codes, links, etc.).
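As a sketch of the "long enough" part: Python's `secrets` module makes it easy to generate keys with far more entropy than brute force can handle, in a URL-safe form that can be embedded in an activation link or QR code. The function name and URL below are illustrative, not from the original post.

```python
import secrets

def generate_purchase_key(num_bytes: int = 20) -> str:
    """Generate a random purchase key. 20 bytes = 160 bits of entropy,
    well beyond practical brute force. token_urlsafe returns a string
    safe to embed in a link, so users never have to type it."""
    return secrets.token_urlsafe(num_bytes)

key = generate_purchase_key()
# Hypothetical activation link the email could contain:
activation_link = f"https://example.com/activate?key={key}"
print(activation_link)
```

Use `secrets` (a CSPRNG) rather than `random` for anything security-sensitive; `random` is predictable and explicitly not suitable for keys.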
Of course, email itself is not always the safest channel, but for most companies it is "secure enough." Inventing a new delivery mechanism for safer code delivery will likely cause more problems than you will solve.
Fortunately, the worst-case scenario for a leaked purchase key is probably not too bad. After all, someone really did buy the key, so if an attacker manages to intercept and use it, the original buyer will probably contact you asking, "Why doesn't my key work?!" Assuming they can provide proof of purchase, it will be simple enough to revoke the current user's access to the application and grant it to the rightful buyer.
This does create the risk of someone using social engineering to trick your staff into handing over another person's subscription, and it also risks annoying legitimate customers. However, since your marginal cost sounds low (that is, it is relatively cheap for you to give someone access to the system), the loss is small even if someone occasionally ends up with a free subscription.
That was a bit rambling, but the most important part is the thought process. Security is not binary. It is not secure/insecure; it is only ever "secure enough." The more effort you put into security measures, the more likely you are to cause problems for your legitimate customers, and at some point it becomes more profitable to stop adding security layers and accept that an attacker may occasionally steal a subscription or two. The trick, then, is not to make things secure, but secure enough.