Most organizations that have operated a SIEM for more than a year have accumulated a rule set that nobody fully understands. Rules were created by former employees, imported from vendor content packs, or written in haste during incident response. Some fire constantly and are ignored. Others have not triggered in months and may no longer function against current log formats. The result is a detection posture built on assumptions that have never been validated.
Auditing your existing SIEM rules is the essential first step toward detection engineering maturity. It reveals what you actually detect, what you think you detect but do not, and where your analysts are wasting time on noise. This guide provides a structured framework for conducting that audit.
Building a Rule Inventory
The audit begins with a complete inventory of every enabled detection rule in your SIEM. This sounds straightforward, but in practice most organizations discover that their actual rule count differs significantly from what anyone expected. Export every rule -- including its name, description, severity, data source dependencies, creation date, last modification date, and current enabled/disabled status. If your SIEM supports it, also export alert volume statistics per rule for the past 90 days.
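As a rough sketch of normalizing that export into workable records -- the CSV columns and field names here are assumptions, since every SIEM exports differently:

```python
import csv
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RuleRecord:
    """One inventory row; fields mirror a hypothetical SIEM export."""
    name: str
    severity: str
    enabled: bool
    created: Optional[date]
    last_modified: Optional[date]
    alerts_90d: int  # alert volume over the trailing 90 days

def load_inventory(path: str) -> list[RuleRecord]:
    """Parse a CSV rule export into normalized records."""
    records = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            records.append(RuleRecord(
                name=row["name"],
                severity=row["severity"],
                enabled=row["enabled"].strip().lower() == "true",
                created=date.fromisoformat(row["created"]) if row["created"] else None,
                last_modified=date.fromisoformat(row["last_modified"]) if row["last_modified"] else None,
                alerts_90d=int(row.get("alerts_90d") or 0),
            ))
    return records
```

Once every rule lives in one normalized structure, the categorization and volume analysis in the following steps become simple queries over a list.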
Categorize each rule along several dimensions. First, origin: was this rule created in-house, imported from a vendor content pack, or contributed by a managed security service provider? Rules from different origins tend to have different quality characteristics. Vendor-default rules are often broad and noisy because they are designed to work across all customer environments. In-house rules may be more precise but less documented. MSSP rules may reference data sources that are no longer ingested.
Second, categorize by purpose: is this rule intended to detect a specific attack technique, satisfy a compliance requirement, monitor operational health, or flag policy violations? This categorization is critical because each purpose has different quality criteria. A compliance-driven rule may need to alert on every instance of a specific event regardless of false positive rate, while a threat detection rule must maintain a high true positive rate to be operationally useful.
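A minimal way to tag and tally both dimensions -- the category vocabularies below are illustrative, not a standard taxonomy:

```python
from collections import Counter

# Illustrative category labels; adapt to your environment.
ORIGINS = {"in_house", "vendor_pack", "mssp"}
PURPOSES = {"threat_detection", "compliance", "operational_health", "policy_violation"}

def summarize_categories(rules: list[dict]) -> dict:
    """Validate each rule's labels, then count rules per dimension."""
    for r in rules:
        if r["origin"] not in ORIGINS or r["purpose"] not in PURPOSES:
            raise ValueError(f"uncategorized rule: {r.get('name')}")
    return {
        "by_origin": Counter(r["origin"] for r in rules),
        "by_purpose": Counter(r["purpose"] for r in rules),
    }
```

Raising on unknown labels is deliberate: a rule nobody can categorize is itself an audit finding, not something to silently skip.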
Coverage Mapping and Gap Identification
With your inventory complete, the next step is mapping your rules to the MITRE ATT&CK framework. For each rule, identify which ATT&CK technique or sub-technique it is designed to detect. Many rules will not map cleanly -- this is expected and is itself a finding. A rule that cannot be mapped to a specific adversary behavior may be overly broad, imprecisely scoped, or no longer relevant.
Once your rules are mapped, generate a coverage heat map. Overlay your rule coverage against the ATT&CK techniques that are most relevant to your threat landscape. Use threat intelligence about the adversary groups that target your industry to prioritize which techniques matter most. A financial services organization should weight techniques used by financially motivated threat actors (credential access, lateral movement, data exfiltration) more heavily than techniques associated with state-sponsored espionage groups, unless specific intelligence suggests otherwise.
The gaps this mapping reveals are as valuable as the coverage it confirms. If you have twelve rules detecting initial access techniques but zero rules detecting defense evasion or credential access, you have a detection blind spot that an adversary who achieves initial access can exploit freely. Document these gaps as prioritized development targets for your detection engineering program.
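The mapping and gap check can be sketched as follows; the rule-to-technique map and the tactic list are hypothetical placeholders for your own data:

```python
from collections import defaultdict

# Hypothetical rule name -> (ATT&CK technique ID, tactic) mapping.
RULE_MAP = {
    "Brute force logins": ("T1110", "credential-access"),
    "Suspicious PowerShell": ("T1059.001", "execution"),
    "Phishing attachment": ("T1566.001", "initial-access"),
}

# Tactics you expect coverage for (a subset of ATT&CK Enterprise tactics).
EXPECTED_TACTICS = [
    "initial-access", "execution", "credential-access",
    "defense-evasion", "lateral-movement", "exfiltration",
]

def coverage_by_tactic(rule_map: dict) -> dict:
    """Count mapped rules per tactic; zero counts are the blind spots."""
    counts = defaultdict(int)
    for _technique, tactic in rule_map.values():
        counts[tactic] += 1
    return {t: counts.get(t, 0) for t in EXPECTED_TACTICS}

def gaps(rule_map: dict) -> list[str]:
    """Tactics in the expected list with no detection coverage at all."""
    return [t for t, n in coverage_by_tactic(rule_map).items() if n == 0]
```

The counts feed the heat map; the zero-coverage list feeds the prioritized development backlog.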
False Positive Analysis and Threshold Validation
Pull alert data for every rule over a representative time period -- ideally 90 days, but at minimum 30. For each rule, calculate the true positive rate: of all alerts this rule generated, what percentage represented genuine security-relevant activity that warranted investigation or response? If your SOC tracks alert disposition (true positive, false positive, benign true positive), this data may already be available. If not, sample at least 30 alerts per high-volume rule and classify them manually.
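The rate calculation and the 30-alert sampling step might look like this; the disposition labels are illustrative, so map your SOC's own labels accordingly:

```python
import random

def true_positive_rate(dispositions: list[str]) -> float:
    """Share of alerts dispositioned as genuine true positives.

    Labels ('true_positive', 'false_positive', 'benign_true_positive')
    are placeholders for whatever your case management tool records.
    """
    if not dispositions:
        return 0.0
    hits = sum(1 for d in dispositions if d == "true_positive")
    return hits / len(dispositions)

def sample_for_review(alert_ids: list[str], n: int = 30, seed: int = 0) -> list[str]:
    """Draw up to n alerts per rule for manual classification."""
    if len(alert_ids) <= n:
        return list(alert_ids)
    return random.Random(seed).sample(alert_ids, n)
```

A fixed seed keeps the sample reproducible, so two analysts reviewing the same rule classify the same alerts.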
Rules with true positive rates below 50 percent are candidates for immediate tuning or retirement. Rules below 20 percent are actively harming your SOC by consuming analyst time on noise. For each low-performing rule, determine whether the false positives follow a pattern. Are they caused by a specific system, user, or scheduled process? If so, the rule can likely be tuned with exclusions or threshold adjustments. If the false positives are unpatterned and inherent to the rule's logic, the rule may need to be rewritten or replaced.
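The 50 and 20 percent cutoffs, plus the pattern check on false positives, can be expressed directly (the `host` field is an assumed attribute of your alert records):

```python
from collections import Counter

def triage(tp_rate: float) -> str:
    """Apply the cutoffs above: below 0.20 retire or rewrite, below 0.50 tune."""
    if tp_rate < 0.20:
        return "retire_or_rewrite"
    if tp_rate < 0.50:
        return "tune"
    return "keep"

def fp_concentration(false_positives: list[dict]) -> tuple[str, float]:
    """Find the top false-positive source and its share of all FPs.

    A high share from one host, user, or process suggests the rule
    can be fixed with a targeted exclusion rather than a rewrite.
    """
    sources = Counter(fp["host"] for fp in false_positives)
    top, count = sources.most_common(1)[0]
    return top, count / len(false_positives)
```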
Threshold validation is a critical subset of this analysis. Many SIEM rules use static thresholds -- for example, alerting on more than five failed login attempts in ten minutes. These thresholds are often guesses set during initial deployment and never revisited. Pull the underlying event data and examine the actual distribution. If your environment averages fifty failed login attempts per hour from service accounts alone, a threshold of five is generating nothing but noise. Recalibrate thresholds based on observed baselines, adding a margin that captures genuine anomalies without flagging normal variation.
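One data-driven way to recalibrate -- the high-quantile-plus-margin policy here is an assumption, not a standard, and the right quantile and margin depend on your tolerance for missed anomalies:

```python
import statistics

def recalibrate_threshold(hourly_counts: list[int],
                          quantile: float = 0.99,
                          margin: float = 1.5) -> int:
    """Set a threshold from the observed baseline distribution.

    Takes a high quantile of the per-hour event counts (e.g. the 99th
    percentile) and applies a safety margin, so normal variation stays
    below the threshold while genuine spikes still trip it.
    """
    cuts = statistics.quantiles(hourly_counts, n=100)  # 99 percentile cut points
    baseline = cuts[int(quantile * 100) - 1]
    return max(1, round(baseline * margin))
```

Against the example in the text: if service accounts alone produce roughly fifty failed logins per hour, this approach lands the threshold well above fifty instead of at five.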
Establishing Retirement Criteria and Ongoing Governance
A rule audit is only valuable if it leads to action, and the most impactful action is often removing rules rather than adding them. Establish explicit retirement criteria: a rule should be retired if its data source is no longer ingested, if the vulnerability or technique it detects has been fully mitigated by other controls, if it has not generated a true positive in a defined period (typically six to twelve months), or if its true positive rate remains below acceptable thresholds after tuning attempts.
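The retirement criteria reduce to a mechanical check; the field names below are illustrative stand-ins for your inventory's attributes:

```python
from datetime import date

def should_retire(rule: dict, today: date,
                  stale_days: int = 365,
                  min_tp_rate: float = 0.20) -> bool:
    """Apply the retirement criteria; field names are illustrative."""
    if not rule["data_source_ingested"]:        # source no longer ingested
        return True
    if rule["technique_mitigated"]:             # fully covered by other controls
        return True
    last_tp = rule.get("last_true_positive")    # a datetime.date or None
    if last_tp is None or (today - last_tp).days > stale_days:
        return True                             # no true positive in the window
    if rule["tp_rate_after_tuning"] < min_tp_rate:
        return True                             # still noisy after tuning
    return False
```

The `stale_days` default of 365 sits inside the six-to-twelve-month range suggested above; pick the value deliberately per rule purpose, since compliance rules may legitimately stay quiet for long periods.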
Retirement does not mean deletion. Archive retired rules with documentation explaining why they were removed, so future analysts understand the decision and can reinstate them if conditions change. Maintain a retirement log that is reviewed quarterly to ensure the decisions still hold.
Finally, use the audit findings to establish ongoing governance for your detection library. Define a review cadence -- quarterly is a reasonable starting point -- where every rule is re-evaluated against current alert volume, true positive rate, and relevance. Assign rule ownership so that each detection has a named individual responsible for its performance. Require that new rules pass through a defined quality gate before production deployment, including a documented hypothesis, testing against known-good and known-bad data, and peer review. These governance practices ensure that the audit is the beginning of a sustainable program rather than a one-time cleanup exercise.
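The quality gate can be enforced mechanically before deployment; the required items below mirror the gate described above, and their field names are otherwise assumptions:

```python
REQUIRED_GATE_ITEMS = (
    "hypothesis",         # documented detection hypothesis
    "tested_known_bad",   # fired against known-malicious test data
    "tested_known_good",  # stayed quiet against known-benign data
    "peer_reviewer",      # named reviewer who approved the logic
    "owner",              # named individual accountable for performance
)

def gate_failures(proposed_rule: dict) -> list[str]:
    """Return the gate items a proposed rule is missing; empty means pass."""
    return [item for item in REQUIRED_GATE_ITEMS if not proposed_rule.get(item)]
```

Wiring a check like this into the rule deployment pipeline turns the governance policy from a document into an enforced gate.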
