Security Theater: Your AppSec Success Metrics Are Misleading

Welcome to Act 3 of our security theater blog series, where we seek to shed light on illusions in AppSec. In our previous post in this series, Security Theater: Who Cares About Your AppSec Findings?, we explored how to get the most from your AppSec findings. Today we look at how measurements of success can be misleading.

Act 3, Scene 1: The Metrics Game

Metrics lend themselves to being gamed. After all, when we're held to a metric, human nature drives us to tailor our performance to meet or exceed it. But what happens if the metrics used to measure success don't align with business goals? Could they sabotage business outcomes? In a word, yes.

Imagine, for example, that you own a bakery. You measure your bakers by how many pies they make per day rather than how many pies they sell. Your bakers naturally focus on producing pies, regardless of demand, and you end up with unsold inventory. To keep your bakery from eventually going under, you'd want to pick a better metric, one that incentivizes the number of pies sold per day.

Pitfalls of Measuring Success in Cloud Security

We can all agree that how you measure the success of your tools is essential to your organization's security posture. Let's say, for instance, you activate a new security tool and immediately receive alerts about numerous vulnerabilities, misconfigurations and risks in your cloud environment. That's good, right?

Maybe.

Identifying security risks is critical, but a larger number of findings doesn't always equate with better security.

Zero in on the tool's results. Consider asking the following questions:

  • Do we have false positives?
  • Could we have false negatives and be missing vital information?
  • Do we have so many findings that we need a new team to address them?
  • Are the insights provided actionable?
  • How is risk prioritized?
  • Are the insights correlated?
  • Is business criticality identified?
  • Are we fixing issues at their source?

How you answer these questions is key to evaluating the success of the security tool, as each answer provides an insight into the tool's accuracy, efficiency and value.

Commonly Gamed Metrics

Total Alerts and Number of Issues Remediated

Focusing on the total number of findings or alerts and measuring success by the number remediated epitomizes what we've been calling security theater - something that might look good on the surface but is almost irrelevant in practice.

The rate of remediation is too easily gamed. Put yourself in the developers' shoes. If you measure their success by the number of issues fixed - when their job is to build and ship code as fast as possible - they're going to grab the low-hanging fruit and pass over complex issues.

Will that give you the most effective security resolution? Probably not, given that your developers are likely fixing code weaknesses in unreachable functions or similar issues that don't ultimately matter. Far better to measure the number of critical vulnerabilities or attack paths (interconnected risks) remediated.
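To make the contrast concrete, here's a minimal sketch in Python. The Finding fields, severity weights and the attack-path boost are illustrative assumptions, not a prescribed scoring model; the point is only that a raw count rewards low-value fixes while a weighted score doesn't.

    from dataclasses import dataclass

    # Illustrative weights -- an assumption for this example, not a standard.
    SEVERITY_WEIGHT = {"critical": 10, "high": 5, "medium": 2, "low": 1}

    @dataclass
    class Finding:
        severity: str          # "critical" | "high" | "medium" | "low"
        reachable: bool        # is the vulnerable code actually reachable?
        on_attack_path: bool   # part of an interconnected chain of risks?
        remediated: bool

    def raw_remediation_count(findings):
        """The gameable metric: every fix counts the same."""
        return sum(1 for f in findings if f.remediated)

    def weighted_remediation_score(findings):
        """Weight fixes by severity; skip unreachable code, boost attack paths."""
        score = 0
        for f in findings:
            if not (f.remediated and f.reachable):
                continue  # fixing unreachable code shouldn't move the needle
            score += SEVERITY_WEIGHT[f.severity] * (2 if f.on_attack_path else 1)
        return score

    findings = [
        Finding("low", reachable=False, on_attack_path=False, remediated=True),
        Finding("low", reachable=False, on_attack_path=False, remediated=True),
        Finding("critical", reachable=True, on_attack_path=True, remediated=True),
    ]
    print(raw_remediation_count(findings))       # 3  -- looks productive
    print(weighted_remediation_score(findings))  # 20 -- only the critical fix counts

Under the raw count, the two trivial fixes in unreachable code together count twice as much as the one fix that actually matters - exactly the behavior the metric shouldn't reward.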

Ask yourself, does this metric align with our business goals? Is it the best metric to help us achieve our business goals?

Mean Time to Remediate (MTTR)

MTTR is a commonly used success metric, but it too can be misleading. On average, it takes 145 hours to remediate an alert. Organizations must consider what defines a remediated or fixed issue. Security teams might define success by how quickly they can ship an alert off to a developer. Alert resolved, right?

But does your organization track whether the developer actually fixes the issue? Does it get immediate attention or is it added to the next sprint? Is a web application firewall (WAF) deployed or is the issue remediated in code? How do you track issues sent to developers but never resolved?

Keep in mind that development and security teams typically have distinct success metrics. For MTTR to be effective, it must involve both security and development to ensure that success is based on the total time from the alert to the actual fix.

Securing the OWASP Top 10

You should absolutely secure your applications against the OWASP Top 10 security risks. You shouldn't, however, try to retrofit the OWASP Top 10 into a success metric. First, OWASP could easily have composed a list of 100 security risks. Think of the Top 10, in other words, as a set of benchmarks (e.g., CIS Benchmarks), in that it's designed for automation.

And refer back to Security Theater Act 1, where we talked about compliance standards and how they tend to create a false sense of security. To approach the OWASP Top 10 - or any guideline - with a checklist mentality is to mistake a guardrail for the goal.

When you evaluate a tool, ensure that it's highly accurate and that it combines insights across multiple factors, including business criticality. Doing so will help you prioritize risk and focus remediation efforts on the most impactful issues. It will also allow you to base your organization's success on efficiently remediating high-priority risks.

Even better, aim to confidently answer this question: If I had only 15 minutes, what could I do that would have the greatest impact?
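As a hypothetical illustration of what that prioritization could look like, here's a sketch that ranks open findings by a composite of severity, internet exposure and business criticality. The factors, weights and field names are assumptions made up for the example, not a real scoring formula.

    # Illustrative severity weights, as in the earlier sketch.
    SEVERITY = {"critical": 10, "high": 5, "medium": 2, "low": 1}

    def priority(finding):
        """Composite risk score: severity correlated with exposure and criticality."""
        return (
            SEVERITY[finding["severity"]]
            * (3 if finding["internet_exposed"] else 1)  # correlated exposure insight
            * finding["business_criticality"]            # 1 (sandbox) to 5 (revenue-critical)
        )

    open_findings = [
        {"id": "F-101", "severity": "high", "internet_exposed": False, "business_criticality": 1},
        {"id": "F-102", "severity": "medium", "internet_exposed": True, "business_criticality": 5},
    ]
    for f in sorted(open_findings, key=priority, reverse=True):
        print(f["id"], priority(f))  # F-102 (30) outranks F-101 (5)

Note how the medium-severity finding on an internet-exposed, revenue-critical app jumps ahead of the high-severity finding in a sandbox - that's correlation and business criticality at work.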

Act 3, Scene 2: What Defines a Fix?

Let's get this out of the way: a patch is not a remediation, nor does a WAF solve your underlying problem. Yes, both are effective security measures in the short term. But neither solves the underlying issue in your application.

Make no mistake - how you define a fix or remediation is critical to successful AppSec.

You should not consider MTTR complete with the installation of a patch or the deployment of a WAF. Nor should you consider it complete once security teams open a Jira ticket and lob it over to a developer.

A fix should encompass a developer fixing the issue at its source, obtaining approval for the fix and redeploying the application.

Tracking MTTR effectively means tracking every stage:

  • Alert discovered
  • Security sends PR or ticket to developer
  • Developer sees PR
  • Developer fixes issue
  • Fix approved
  • Application redeployed

Those stages define MTTR. How long did the full journey take? That's the metric to benchmark and track.
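Here's a minimal sketch of what that end-to-end tracking could look like, assuming each finding carries a timestamp for every stage it has passed. The stage names and data shapes are our own illustration; the point is that MTTR spans the whole pipeline, not just the hand-off to a developer.

    from datetime import datetime, timedelta

    # Stage names are assumptions mirroring the list above.
    STAGES = [
        "alert_discovered",
        "ticket_sent",
        "developer_acknowledged",
        "fix_committed",
        "fix_approved",
        "application_redeployed",
    ]

    def mttr(findings):
        """Mean time from alert discovery to redeploy, over truly finished fixes."""
        durations = [
            f["application_redeployed"] - f["alert_discovered"]
            for f in findings
            if "application_redeployed" in f  # a patch or WAF alone doesn't count
        ]
        return sum(durations, timedelta()) / len(durations)

    def stage_durations(finding):
        """Per-stage breakdown: where does the time actually go?"""
        return {
            f"{a} -> {b}": finding[b] - finding[a]
            for a, b in zip(STAGES, STAGES[1:])
            if a in finding and b in finding
        }

    finding = {
        "alert_discovered": datetime(2024, 8, 1, 9, 0),
        "ticket_sent": datetime(2024, 8, 1, 9, 30),
        "developer_acknowledged": datetime(2024, 8, 5, 14, 0),  # sat in a sprint backlog
        "fix_committed": datetime(2024, 8, 5, 16, 0),
        "fix_approved": datetime(2024, 8, 6, 10, 0),
        "application_redeployed": datetime(2024, 8, 6, 11, 0),
    }
    print(mttr([finding]))           # 5 days, 2:00:00 -- the real MTTR
    print(stage_durations(finding))  # the backlog wait dominates everything else

The per-stage breakdown is what keeps both teams honest: a security team that ships tickets in 30 minutes and a developer queue that sits for four days are both visible in the same number.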

Act 3, Scene 3: Learning from Our Mistakes

Are we learning from these security tools and metrics? With new-found knowledge, teams should focus on preventing risk and educating developers about security.

Preventing risk from the beginning is the only way to stay on top of AppSec. Three key metrics that will amplify your AppSec program are:

  1. Are we shifting left and seeing fewer issues in runtime?
  2. When we find problems, where in the pipeline are we finding them?
  3. Are developers adopting IDE and VCS integrations to fix issues immediately at their source?

By focusing on these three, you can steadily improve security outcomes for your organization.
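As a sketch of how those three signals might be computed, assuming each finding is tagged with the pipeline stage that caught it (the stage names and data shape are assumptions for illustration):

    from collections import Counter

    PIPELINE_ORDER = ["ide", "vcs_pr", "ci_build", "runtime"]

    def shift_left_report(findings):
        by_stage = Counter(f["caught_in"] for f in findings)
        total = sum(by_stage.values())
        runtime_share = by_stage["runtime"] / total                     # metric 1: lower is better
        distribution = {s: by_stage[s] for s in PIPELINE_ORDER}         # metric 2: where we catch issues
        ide_vcs_share = (by_stage["ide"] + by_stage["vcs_pr"]) / total  # metric 3: fixed at the source
        return runtime_share, distribution, ide_vcs_share

    findings = [{"caught_in": s} for s in
                ["ide", "ide", "vcs_pr", "ci_build", "runtime"]]
    print(shift_left_report(findings))
    # (0.2, {'ide': 2, 'vcs_pr': 1, 'ci_build': 1, 'runtime': 1}, 0.6)

Track these numbers release over release: a falling runtime share and a rising IDE/VCS share are the signature of a program that's genuinely shifting left.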

Empowered by this information, teams can determine whether they're learning from errors and reducing the recurrence of common mistakes. If they see no improvement, consider a security champions program to improve future performance.

Act 3, Scene 4: Closing Remarks

Psychology tells us that when success metrics are known, teams adjust their efforts to meet them. By aligning success metrics across AppSec and development, organizations can ensure their focus remains on what is genuinely most impactful.

AppSec platform consolidation is at its tipping point. The future of AppSec lies in a platform that provides both AppSec and development teams with tools to accelerate security workflows.

- End of Act 3 -

Interlude: What's Next?

If you'd like to learn more about how Prisma Cloud accelerates AppSec workflows with Code to Cloud™ context, join an upcoming shift-left bootcamp.

And check back as we explore more security theater. Act 4, coming soon.