Curious about vulnerability disclosure? We explain what it is, why there may be friction between the researcher and the organization, and possible solutions.
What Is a Vulnerability Disclosure?
During a vulnerability disclosure, individuals report security weaknesses in computer systems to the affected organization. Disclosures can be contentious; some organizations prefer not to disclose weaknesses publicly until they are remediated, while researchers sometimes prefer that flaws be made public sooner.
How Vulnerability Disclosure Works
Hackers who discover a vulnerability in a product or system often choose to disclose it to organizations before disclosing it publicly. Disclosure gives organizations a chance to patch the vulnerability privately before bad actors can exploit it.
Researchers, programmers, and security professionals are often the ones who discover flaws within a system. Once a researcher finds a vulnerability, they attempt to contact the organization to notify it of the issue. An organization with a vulnerability disclosure program (VDP) can prioritize the vulnerability and document the process for faster remediation.
Once the researcher identifies a way to submit a vulnerability, they create a detailed report of their discovery. This report outlines the following:
- Details that describe the vulnerability and its impact
- Screenshots, code snippets, and additional evidence of the vulnerability
- Proof of concept details that allow the vendor to replicate the vulnerability
- Any additional material that helps the organization understand the vulnerability
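The items above can be organized into a simple report skeleton. The layout and example fields below are illustrative, not a standard format; the asset, vulnerability, and URLs are hypothetical:

```text
# Vulnerability Report (illustrative template)

Title: Stored XSS in profile bio field
Suggested severity: High
Affected asset: https://app.example.com/profile

## Description and impact
An attacker can store a script in the bio field that executes in
other users' browsers, allowing session hijacking.

## Steps to reproduce (proof of concept)
1. Log in and open Profile > Edit.
2. Save the bio value: <script>alert(document.domain)</script>
3. View the profile as another user; the script executes.

## Evidence
Screenshots and request/response captures attached.
```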
Cross-site scripting (XSS), improper access control, and SQL injection are among the most common vulnerabilities researchers find and disclose. Occasionally, researchers submit zero-day vulnerabilities to vendors. Zero-days are especially impactful because no patches or remediation steps exist yet. If left unpatched, a zero-day can lead to widespread exploitation that leaves customers unknowingly vulnerable.
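To make one of these classes concrete, SQL injection typically arises when user input is concatenated into a query string. A minimal sketch using Python's standard-library sqlite3 module, with a hypothetical users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: concatenating the input lets it rewrite the query,
# so the WHERE clause matches every row in the table.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(leaked)  # the attacker sees secrets without knowing any name

# Remediated: a parameterized query treats the input as data only.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # no user is literally named "' OR '1'='1"
```

The remediation is a one-line change, which is one reason simple injection reports can often be fixed from the first disclosure.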
While incident response teams or developers can fix simple bugs from the first report, more complex vulnerabilities require ongoing communication. Clear communication keeps each party accountable during remediation and helps ensure patches work as intended during retesting. Triaging and patching can take time, especially in larger environments, so longer response and resolution times call for continued clarification between the researcher and the response team.
Due to the complexity and sensitivity of some vulnerabilities, researchers can use multiple types of disclosure methods for guidance.
Types of Disclosure
Full disclosure is the practice of publishing information about vulnerabilities as early as possible in a public setting. This method informs everyone equally about the nature of the threat and puts pressure on vendors to fix the bug before bad actors can exploit it. The security community may spread details of the vulnerability through social media, academic papers, or security conferences.
- Full disclosure could force organizations to take action if they do not respond to private disclosure attempts.
- Public disclosure lets third parties triage and develop their own workarounds as quickly as possible.
- Researchers post information in a public setting, giving bad actors the same amount of time to exploit the vulnerability as those who need to patch it.
- Pressure from customers creates stress for the vendor, leading to many emails, phone calls, and support tickets.
- Publicly disclosed vulnerabilities often hurt the brand’s reputation, making the company look less secure to customers, stakeholders, and partners.
In a private disclosure model, researchers disclose the bug directly to the vendor, and the vendor decides when, if ever, to make the vulnerability public. This model protects the organization from public repercussions, but if the vendor fails to respond or correct the issue, the details may never come to light and the vulnerability may never be fixed. Researchers can grow frustrated with an unresponsive vendor, which often leads them to full disclosure.
- Vendors have complete control if a vulnerability goes public.
- Bad actors are less likely to exploit the vulnerability.
- Organizations can take too long to develop a patch or ignore disclosure requests, leaving users vulnerable longer than necessary.
- A failed private disclosure that leads to full disclosure can damage customer trust.
Coordinated disclosure, also known as responsible disclosure, is when researchers agree to share vulnerabilities with a coordinating authority, such as CISA, which then reports them to the vendor. The coordinator is responsible for tracking fixes, mitigating risk, and informing the public. In many cases, the vendor itself acts as the coordinating authority.
Coordinated disclosure keeps the vulnerability private while vendors work on a solution. In some cases, the researchers and vendor set a deadline to disclose the bug publicly, providing a balance between private and full disclosure.
- Coordination gives vendors a chance to fix bugs without risking mass exploitation.
- Vendors can inform the public and present steps for remediation.
- Developers aren’t under additional pressure while working on the fix.
- Customers feel more confident that the vendor is transparent and has a solution.
- Lapses in communication may lead to longer vulnerability response and remediation times.
- Like private disclosure, lapses in communication leave users vulnerable for longer.
Vendors and researchers should keep best practices in mind when dealing with vulnerabilities to reduce the friction between researchers and security teams.
Researchers should attempt to contact an organization first and only use full disclosure as a last resort. Coordinated disclosure gives vendors a fair chance to patch vulnerabilities and promptly present a solution to their users.
Organizations can encourage coordinated disclosure by creating a vulnerability disclosure program (VDP): a set of clear guidelines that outline how and where researchers can submit vulnerabilities to the organization.
Organizations should publicly disclose the vulnerability in a timely manner when they develop a patch. Public transparency fosters trust among users while providing clear steps to remediate the problem.
Researchers should do the following:
- Be detailed, professional, and precise in their disclosure report.
- Adhere to any rules outlined by the organization’s VDP.
- Never exploit the vulnerability without written permission from the vendor.
- Allow the vendor a reasonable time to respond and develop a patch.
Vendors should do the following:
- Prioritize security by making an effort to resolve bugs transparently and promptly.
- Give the researcher public recognition for their time and effort.
- Offer financial rewards when appropriate.
- Provide a safe harbor for researchers to share vulnerabilities without repercussion.
How Vulnerability Disclosure Programs Help
VDPs provide controlled and collaborative environments where researchers and developers follow set guidelines to solve security issues together. These guidelines create a clear path for communication and remediation. Having a public-facing VDP signals to security researchers, customers, and investors that your organization takes security seriously because you are providing a direct platform for first disclosure.
There are five essential parts to a successful VDP:
Promise Statement: The opening statement explains why the VDP is important to your organization and helps demonstrate your commitment to continuous security.
Scope: Scope defines the products, properties, and types of vulnerabilities that are eligible for submission, telling researchers where to focus their attention.
Safe Harbor: Safe harbor assures researchers that they won’t have legal action taken against them for the vulnerabilities they discover.
Process: The process section outlines how researchers submit reports and what information the vendor requires.
Evaluation: The evaluation component details how the security team will evaluate reports, including average response time, how severity affects remediation timelines, and whether the researcher can expect confirmation once a patch ships.
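One lightweight way to publish the "Process" piece is a security.txt file (RFC 9116), served at /.well-known/security.txt, which tells researchers exactly where to send reports. The domain, URLs, and dates below are placeholders:

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security/vdp
Acknowledgments: https://example.com/security/thanks
Preferred-Languages: en
```

The Policy field can point to the full VDP, and the Acknowledgments field to the public-recognition page mentioned above.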