And that is where responsible disclosure (RD) comes in.
The principal objective of RD is to define a transparent, practical and collaborative policy for receiving and managing vulnerabilities identified by researchers. The desired outcome is for both parties to work together to minimise the potential harm arising from the vulnerability. Communication timelines should be agreed to maximise the reduction of risk and impact (perhaps with a little reputation management and PR thrown in for good measure), and both parties need a clear understanding of what success looks like, including any financial or other reward for confirmed vulnerabilities disclosed through the programme.
There is no particular standard for defining your approach to RD or even how actively you promote it. Incentive schemes such as bug bounty programmes can be useful, but they also present challenges of their own, including increased workloads driven in some cases by lucrative financial incentives. Organisations such as the UK's National Cyber Security Centre now provide useful toolkits to get you started.
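One lightweight way to publish the "how to engage" part of your policy, offered here as an illustration rather than as part of any specific toolkit, is a `security.txt` file as standardised in RFC 9116, served at `/.well-known/security.txt`. The URLs and dates below are placeholders:

```text
# Illustrative /.well-known/security.txt (RFC 9116); values are placeholders.
Contact: mailto:security@example.com
Policy: https://example.com/responsible-disclosure
Expires: 2026-12-31T23:59:59Z
Acknowledgments: https://example.com/hall-of-fame
```

`Contact` and `Expires` are the two mandatory fields; `Policy` is where researchers would find the full disclosure process and reward terms described above.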
However, it is important to recognise that although setting out a policy and process for RD is a practical step to take, the circumstances of any disclosure can vary greatly. Be prepared to operate outside the process when circumstances demand it, as long as you still reach the desired, published "good" outcome at the end.
When a disclosure is made, initial triage of that information determines the next steps. The objective is to reduce harm and manage risk, but difficult judgement calls can arise. Declaring a vulnerability before a patch is available can cause angst among customers; the counterbalance is that you can have a transparent conversation about workarounds and mitigations.
If the vulnerability relates to a safety-critical asset, any disclosure without first having a patch could be high risk, especially if the exploitability potential of the vulnerability is high.
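The triage judgement calls above can be sketched as a simple decision function. This is a hypothetical illustration only: the field names, thresholds and recommended actions are my assumptions, not part of any published RD standard, though the severity scale is loosely CVSS-style.

```python
# Hypothetical triage sketch. Field names, the 7.0 threshold and the
# recommended actions are illustrative assumptions, not a standard.

def triage(severity: float, safety_critical: bool, patch_available: bool,
           publicly_known: bool) -> str:
    """Suggest a next step for a newly disclosed vulnerability.

    severity: CVSS-style score in the range 0.0-10.0.
    """
    if publicly_known and not patch_available:
        # Already in the open: be transparent about workarounds now.
        return "publish mitigations and an advisory immediately"
    if safety_critical and not patch_available:
        # Disclosing before a fix exists is high risk for safety-critical assets.
        return "withhold disclosure until a patch is ready"
    if patch_available:
        return "coordinate a disclosure date with the researcher"
    if severity >= 7.0:
        return "prioritise the fix and agree an embargo with the researcher"
    return "schedule the fix and keep the researcher informed"
```

The point of writing it down, even informally, is that the awkward branches (public but unpatched, safety-critical but unpatched) are decided before the pressure of a live incident.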
Similarly, there can be moral and ethical issues to debate. What if a vulnerability is big enough to transcend multiple vendors, products and end-users? In this case, should you go public quickly so that the problem can be worked through as a collaborative effort, even though that could cast a shadow on your own organisation? What if this vulnerability is in the public domain already? What if the disclosing party has already communicated outside of your RD process?
You risk reputational damage if it transpires that they warned you up front and you are perceived to have done nothing about it. A pre-emptive communication may alleviate that risk, or it might create another. Every situation is different, but at least you will not be coming at the problem cold if you have planned ahead.
Having a clear stance on RD drives transparency and demonstrates commitment to the cause. Each party will know up front how to engage and where they stand in terms of potential outcomes. Planning what you will do before the circumstance arises is a practical step, but it must account for uncertainty: no two scenarios are the same, and each is likely to present a different set of moral, ethical and logistical challenges to overcome.