Managing False Positives in OWASP Zed Attack Proxy (ZAP)
Background
Last month, while I was interning at a local company, my team was working on a project that involved OWASP ZAP, in particular the ZAP Baseline Scan run with Docker. Our use case was to run ZAP in an automated fashion to highlight key findings and vulnerabilities in our websites and web applications. The ZAP Baseline Scan offers that capability and reports findings in a well-documented fashion (JSON, HTML and Markdown reports).
Problem
Upon running the scan and retrieving the results, we noticed some alerts that could be false positives, which raised the question of whether there was a way to keep them out of the generated reports. However, the feature to remove false positives from the reports was not readily available, as highlighted in this GitHub issue. According to the documentation on the Baseline Scan, false positives were supposed to be easy to remove: we could ignore alerts based on their alert reference IDs, or specify URL regex patterns, by editing a configuration file and passing it to the baseline scan command with the -c parameter.
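As a sketch of what that looks like (the rule IDs, URLs and image tag here are only illustrative, so check them against the current Baseline Scan documentation), the config file uses tab-separated lines of rule ID, action and an optional name or URL regex:

```shell
# Hypothetical config file for the baseline scan; rule IDs and URLs are examples.
# Format (tab-separated): <rule id>  <FAIL|WARN|IGNORE|OUTOFSCOPE>  <name or URL regex>
printf '10096\tIGNORE\t(Timestamp Disclosure)\n' > baseline.conf
printf '10027\tOUTOFSCOPE\thttp://example.com/.*\n' >> baseline.conf

# Then mount the config into the container and pass it with -c:
# docker run -v "$(pwd)":/zap/wrk/:rw -t owasp/zap2docker-stable \
#     zap-baseline.py -t http://example.com -c baseline.conf
```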
Upon trying it out ourselves, we realised that we ran into the same issue as the GitHub user who raised it initially. A project member replied to the issue stating that it is a known restriction and that it may require some scripting (and probably more time) to iron out. This prompted us to come up with temporary solutions in the short term.
Exploration
One idea we explored was the use of scan hooks, a feature that allows us to modify and override some of the behaviour of the baseline scan script's components. This was a little tedious, as it required a deep understanding of how the baseline scan script runs.
The first method we tried was to use scan hooks to disable specific alert reference IDs. In the code below, we excluded alert reference IDs 10096 and 10027, which correspond to the Timestamp Disclosure and Information Disclosure - Suspicious Comments alerts respectively.
```python
def zap_started(zap, target):
    # Disable the passive scan rules we consider false positives,
    # identified by their alert reference IDs.
    ids = '10096,10027'
    zap.pscan.disable_scanners(ids)
    return zap
```
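For context, these hook functions only take effect if the baseline scan is told where to find them; the script exposes a --hook option for this. A sketch, where the file name zap_hooks.py and the image tag are our own choices:

```shell
# Keep the hook functions in a file mounted into the container.
cat > zap_hooks.py <<'EOF'
def zap_started(zap, target):
    zap.pscan.disable_scanners('10096,10027')
    return zap
EOF

# Then point the baseline scan at it:
# docker run -v "$(pwd)":/zap/wrk/:rw -t owasp/zap2docker-stable \
#     zap-baseline.py -t http://example.com --hook=/zap/wrk/zap_hooks.py
```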
However, this brought along additional issues:
- When a particular alert reference ID is disabled, it is excluded from all reports and results, which may be undesirable: it can hide important key findings for other URLs that trigger the same alert.
- If we only wanted to exclude a certain URL or URL path for a particular alert reference ID, we could not do that with this function.
To address the second issue, we looked through the baseline scan script once more and realised that the spider is responsible for discovering the URLs that are then scanned for alerts. Our naive solution was therefore to simply exclude the URL from the spidering (code below).
```python
def zap_spider(zap, target):
    # exclude_from_scan takes a regex pattern matched against discovered URLs
    zap.spider.exclude_from_scan("http://example.com")
    return zap, target
```
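One detail worth noting: exclude_from_scan takes a regular expression rather than a plain URL, so excluding a whole path needs a pattern. A quick sketch with an illustrative URL:

```python
import re

# Excluding an entire path requires a regex pattern (URL is illustrative).
pattern = r"http://example\.com/static/.*"

# Roughly what the spider does internally: skip any URL matching the pattern.
assert re.match(pattern, "http://example.com/static/app.js")
assert not re.match(pattern, "http://example.com/login")
```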
However, we still ran into the same issue: excluding the URL skips all alerts for that URL, meaning we may lose important key findings as well. Solving this required more testing with the various scan hook functions, as there were multiple places where the script's behaviour could be modified to exclude our intended false positives from the generated reports.
Solution
After much testing, we found the function zap_get_alerts in the baseline script, which extracts the alerts stored in the ZAP object and returns the list of alerts for further processing. Diving deeper into the ZAP source code, we found that an alert's confidence level can be set to 0 to indicate a False Positive, 1 for Low, 2 for Medium and 3 for High.
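For readability, those levels can be kept in a small mapping; the names below are our own shorthand, while the numeric strings are the values passed to the API:

```python
# Alert confidence levels accepted by the ZAP API (passed as strings).
CONFIDENCE = {
    'false_positive': '0',
    'low': '1',
    'medium': '2',
    'high': '3',
}

# Usage sketch:
# zap.alert.update_alerts_confidence(alert_id, CONFIDENCE['false_positive'])
```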
The following scan hook function runs at the start of the zap_get_alerts function in the baseline script. It extracts the alerts first (much like the original function) and then lowers the confidence level of the alerts we want to exclude from the reports and results.
```python
def zap_get_alerts(zap, baseurl, denylist, out_of_scope_dict):
    st = 0
    pg = 5000
    # (rule ID, URL) pairs to mark as false positives; the API reports
    # pluginId as a string, so compare strings
    false_positives = [('10096', 'http://example.com')]
    alerts = zap.core.alerts(baseurl=baseurl, start=st, count=pg)
    while len(alerts) > 0:
        for alert in alerts:
            alert_id = alert.get('id')
            url = alert.get('url')
            plugin_id = alert.get('pluginId')
            for fp in false_positives:
                if plugin_id == fp[0] and url == fp[1]:
                    # confidence '0' marks the alert as a False Positive
                    zap.alert.update_alerts_confidence(alert_id, '0')
        st += pg
        alerts = zap.core.alerts(baseurl=baseurl, start=st, count=pg)
```
The false_positives list can be updated to include more alert and URL pairs to exclude, allowing greater flexibility in the baseline script.
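To avoid editing the hook every time, the pairs could instead be read from a file kept next to the hooks script. A sketch, where the file name and JSON layout are our own assumptions:

```python
import json

def load_false_positives(path='false_positives.json'):
    """Load (rule ID, URL) pairs from a JSON file.

    Assumed layout: [{"id": "10096", "url": "http://example.com"}, ...]
    """
    with open(path) as f:
        entries = json.load(f)
    # pluginId comes back from the ZAP API as a string, so normalise IDs here.
    return [(str(e['id']), e['url']) for e in entries]
```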
Understandably, this may affect the efficiency of the scan, as it has to process all the alerts twice. That is almost inevitable given the way scan hook functions are handled by the baseline scan script: a hook can only run before or after the original function, and cannot replace it. Thus the original function and the scan hook function look almost the same, as both have to iterate over the alerts in order to update the confidence levels.
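A simplified sketch of that restriction (names simplified from the script's actual dispatch code): a hook is invoked alongside the original function, and if it returns nothing, the original arguments are simply reused.

```python
def trigger_hook(hooks, name, *args):
    """Run a user hook if one is defined; the original function still runs
    afterwards with whatever arguments this returns."""
    hook = hooks.get(name)
    if hook is None:
        return args
    result = hook(*args)
    # If the hook returns nothing, the original arguments are kept.
    return result if result is not None else args

# Conceptually the script then still calls its own implementation:
# args = trigger_hook(hooks, 'zap_get_alerts', zap, baseurl, denylist, oos)
# alerts = original_zap_get_alerts(*args)
```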
Final Thoughts
I understand that this might not be the best way of overcoming the problem, but it is the temporary solution that I could come up with, and it may serve well at least until the ZAP baseline scan developers fix this issue.
Even if this solution does not fit your particular use case, I hope you have learnt a thing or two about how scan hooks work and how you can use them. Testing that aspect was tough as well, given the sparse examples on scan hooks as documented here.
Overall, it was difficult navigating this entire process, as the documentation on ZAP was not very specific or thorough, especially for the baseline scan. There was a lot of source code reading and analysis. But we learnt tonnes about OWASP ZAP and its features (active scan, passive scan, contexts) along the way, and it is definitely a good tool for maintaining security standards and highlighting key findings and vulnerabilities. I look forward to seeing its improved versions and using them in the future.
If you like this article, do give me a few claps, and feel free to drop a comment if you have any questions; I will be glad to answer them. Thank you for reading!