Why Out-of-the-Box SIEM Rules Fail
Every security team receives the same promise: deploy our SIEM with our default detection rules and you will catch threats. The reality is far different. Within weeks of deploying an off-the-shelf SIEM with default rules, teams face an avalanche of alerts. Most are false positives. Some are environment noise that security teams spend weeks tuning away. A few might be real threats, buried in thousands of irrelevant notifications.
This phenomenon, known as alert fatigue, creates a dangerous paradox: the more alerts you receive, the less likely you are to investigate each one. Eventually, analysts begin dismissing alerts reflexively, creating the very blind spot that attackers exploit.
Three fundamental problems cause out-of-the-box rules to fail:
- False Positive Rates Exceed Utility: Default rules prioritize sensitivity over specificity. They catch many true attacks but also trigger on thousands of benign activities. A rule that generates 10,000 false positives per week to catch one real attack provides zero operational value.
- Rules Don't Reflect Your Environment: A detection rule written for a Windows-heavy enterprise may trigger constantly in your Linux cloud infrastructure. Default rules assume certain baselines about tool usage, normal user behavior, and network architecture that don't apply to your specific deployment.
- Rules Target Generic TTPs, Not Your Threat Landscape: If your organization has never been targeted by data exfiltration attacks but faces constant credential harvesting attempts, your detection investment should reflect this reality. Default rules spread resources thin across the entire MITRE ATT&CK framework.
Building high-fidelity detection rules requires moving beyond point solutions. You need a systematic approach to rule engineering that accounts for your environment, your threat landscape, and the operational constraints of your security team.
Mapping Detection Rules to MITRE ATT&CK Techniques
The MITRE ATT&CK framework provides a common language for describing adversary behavior. When you map detection rules to specific techniques, you create transparency and accountability. You can answer critical questions: Which techniques are we detecting? Which techniques represent gaps? Where are our highest-priority investments?
MITRE ATT&CK uses a hierarchical structure of tactics and techniques. A tactic represents an adversary's goal at a particular stage of an attack. A technique represents a specific method for achieving that goal. Techniques often have subtechniques representing more granular variations.
Practical Examples with T-Codes:
| Technique Code | Technique Name | Tactic | Detection Focus |
|---|---|---|---|
| T1003 | OS Credential Dumping | Credential Access | Process execution patterns (lsass.exe access, secretsdump.py, mimikatz signatures) |
| T1021.002 | Remote Services: SMB/Windows Admin Shares | Lateral Movement | PsExec execution, SMB traffic patterns, service creation events |
| T1059.001 | Command and Scripting Interpreter: PowerShell | Execution | PowerShell process creation, encoded commands, suspicious parameters |
| T1558.003 | Steal or Forge Kerberos Tickets: Kerberoasting | Credential Access | TGS-REQ requests for service accounts, unusual ticket requests |
The key is establishing a bidirectional mapping: for each technique you want to detect, identify the observable artifacts and data sources that indicate the technique in use. Conversely, when you write a detection rule, tag it with the relevant T-codes so you understand your coverage.
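This mapping can live as simple structured data alongside your rules. A minimal Python sketch, using hypothetical rule names, that derives coverage and gaps from T-code tags:

```python
# Hypothetical rule names; each rule is tagged with the ATT&CK techniques it covers.
RULES = {
    "lsass_access_via_process_creation": ["T1003"],
    "smb_admin_share_lateral_movement": ["T1021.002"],
    "encoded_powershell_command": ["T1059.001"],
}

# The techniques your threat landscape says you must detect.
REQUIRED_TECHNIQUES = {"T1003", "T1021.002", "T1059.001", "T1558.003"}

def coverage_report(rules, required):
    """Bidirectional view: which required techniques are covered, which are gaps."""
    covered = {t for tcodes in rules.values() for t in tcodes}
    return {"covered": sorted(covered & required),
            "gaps": sorted(required - covered)}

print(coverage_report(RULES, REQUIRED_TECHNIQUES)["gaps"])  # ['T1558.003']
```

Even a table this small answers the critical questions: T1558.003 has no rule behind it, so Kerberoasting is a documented gap rather than an unknown one.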
The Anatomy of a High-Fidelity Detection Rule
A high-fidelity detection rule combines multiple components in a carefully balanced way. Think of it as a recipe: get the proportions wrong and the result fails. Too many conditions and you miss real attacks. Too few and you drown in false positives.
Five Essential Components:
1. Data Sources
Your rule can only detect what you collect. Before writing rules, inventory your data sources. Do you have Windows Event logs? Sysmon data? Network flow logs? EDR telemetry? DNS query logs? This inventory determines what you can realistically detect.
2. Logic Operators and Conditions
The core logic of the rule combines conditions with AND, OR, and NOT operators. Most high-fidelity rules use AND operators extensively to narrow scope. A rule searching for "powershell.exe" is too broad. A rule searching for "powershell.exe AND encoded command AND network connection" is considerably more specific.
3. Thresholds
Thresholds convert raw events into meaningful alerts. Rather than triggering on every occurrence, a threshold rule fires when a condition occurs N times within a time window. For example: "Alert if the same user account fails to authenticate more than 5 times in 5 minutes" (detecting brute force attempts).
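The brute-force example above can be sketched as a sliding-window counter. This is illustrative Python, not any particular SIEM's threshold engine:

```python
from collections import deque
from datetime import datetime, timedelta

def make_threshold_detector(limit=5, window=timedelta(minutes=5)):
    """Fires when more than `limit` events land inside the sliding time window."""
    events = deque()
    def observe(ts):
        events.append(ts)
        # Drop events that have aged out of the observation window.
        while events and ts - events[0] > window:
            events.popleft()
        return len(events) > limit
    return observe

# Simulated failed logons for one account: six failures 20 seconds apart.
detector = make_threshold_detector()
start = datetime(2024, 1, 1, 9, 0, 0)
results = [detector(start + timedelta(seconds=20 * i)) for i in range(6)]
print(results)  # only the sixth failure crosses the threshold
```

Single failures never alert; only the sixth failure inside the window fires, which is exactly the event-to-alert conversion a threshold provides.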
4. Time Windows
Time windows define the observation period for your rule. A 5-minute window is appropriate for detecting rapid attack actions like brute force. A 24-hour window suits detecting gradual data exfiltration patterns. Choose time windows based on adversary behavior, not arbitrary preferences.
5. Enrichment and Context
High-fidelity rules correlate multiple data sources and enrich results with context. A rule detecting suspicious PowerShell execution gains fidelity if it correlates with recent failed logon attempts on the same host, or if it identifies the source as an internal IP versus external. This additional context reduces false positives.
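One way to sketch this correlation step, assuming simplified event dictionaries with `host` and `time` fields (field names are illustrative, not a specific log schema):

```python
from datetime import datetime, timedelta

def enrich_alert(ps_event, failed_logons, lookback=timedelta(minutes=15)):
    """Attach recent failed-logon context from the same host to a PowerShell alert."""
    related = [
        f for f in failed_logons
        if f["host"] == ps_event["host"]
        and timedelta(0) <= ps_event["time"] - f["time"] <= lookback
    ]
    enriched = dict(ps_event)
    enriched["recent_failed_logons"] = len(related)
    # Corroborating context raises confidence and lets analysts triage faster.
    enriched["priority"] = "high" if related else "low"
    return enriched

alert = {"host": "ws01", "time": datetime(2024, 1, 1, 9, 30)}
failures = [
    {"host": "ws01", "time": datetime(2024, 1, 1, 9, 20)},  # same host, 10 min earlier
    {"host": "ws02", "time": datetime(2024, 1, 1, 9, 25)},  # different host, ignored
]
print(enrich_alert(alert, failures)["priority"])  # high
```

The same suspicious PowerShell event lands as a low-priority alert on a quiet host and a high-priority alert on a host with recent authentication failures.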
Sigma Rules as a Universal Detection Format
Writing detection rules natively for each SIEM platform creates maintenance nightmares. When you change SIEMs, your entire detection library becomes a legacy asset. Sigma solves this problem by providing a vendor-agnostic rule format.
Sigma is an open standard for writing detection rules as YAML. A single Sigma rule can be compiled into detection logic for Splunk, Elasticsearch, and dozens of other platforms. This decouples your detection logic from your tooling.
Example: A Sigma Rule Detecting Credential Dumping Attempts
`sigma_credential_dump_example.yml`

```yaml
title: LSASS Access via Process Creation
id: 1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d
description: Detects attempts to access LSASS process memory via administrative tools
logsource:
    product: windows
    service: sysmon
    category: process_creation
detection:
    selection:
        - Image|endswith: 'mimikatz.exe'
        - CommandLine|contains|all:
              - 'lsass'
              - 'dump'
        - ParentImage|endswith: 'powershell.exe'
          CommandLine|contains: 'procdump'
    filter:
        User|contains: 'SYSTEM'
    condition: selection and not filter
falsepositives:
    - Legitimate credential analysis tools
    - Authorized penetration testing
level: high
status: experimental
tags:
    - attack.credential_access
    - attack.t1003
```
This Sigma rule demonstrates the key features. The logsource section specifies the product and event category the rule applies to. The detection section contains selectors and filters. The falsepositives section documents what might trigger the rule incorrectly. The tags section maps the rule to MITRE ATT&CK.
When you compile this Sigma rule for Splunk, it becomes a Splunk SPL query. Compile it for Elasticsearch, and it becomes a Query DSL expression. The detection logic remains identical; only the syntax changes.
Building Detection Rules for Common Attack Patterns
Credential Dumping (T1003)
Attackers use credential dumping to extract plaintext or hashed credentials from memory, disk, or authentication databases. This is a high-priority target because stolen credentials enable further compromise.
Observable Artifacts:
- Process creation with suspicious arguments (sekurlsa, lsass, dump)
- Access to sensitive registry keys like HKLM\SAM
- Unusual file access patterns against files in C:\Windows\System32\config
- Network SMB connections to domain controllers from unusual sources
Detection Approach: Combine process monitoring with registry access events. Create a rule that alerts on process creation events containing keywords associated with credential dumping tools (mimikatz, ProcDump, secretsdump) combined with command-line arguments matching the attack pattern. Exclude legitimate tools through whitelisting.
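A minimal Python sketch of this approach, with an illustrative keyword list and a hypothetical `svc_backup` exclusion standing in for your whitelist:

```python
# Illustrative keyword list and allow-list; tune both to your environment.
DUMP_KEYWORDS = ("mimikatz", "sekurlsa", "procdump", "secretsdump")
ALLOWED_USERS = {"svc_backup"}  # hypothetical backup-software service account

def is_credential_dump_suspect(event):
    """Flag process-creation events whose command line matches dumping tooling."""
    if event.get("User") in ALLOWED_USERS:
        return False  # targeted exclusion instead of disabling the whole rule
    cmdline = event.get("CommandLine", "").lower()
    return any(keyword in cmdline for keyword in DUMP_KEYWORDS)

print(is_credential_dump_suspect(
    {"User": "alice", "CommandLine": "procdump -ma lsass.exe out.dmp"}))  # True
```

Note that the exclusion is checked first and is scoped to one account; a broad exclusion on the keywords themselves would gut the rule.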
Lateral Movement via PsExec (T1021.002)
PsExec and similar tools enable lateral movement by executing commands on remote systems. Detection requires tracking both the tool execution and its network behavior.
Observable Artifacts:
- SMB traffic to administrative shares (ADMIN$, C$, IPC$)
- Service creation events from unusual sources
- Process creation from unusual parent processes (services.exe with suspicious command-line arguments)
- Outbound SMB port 445 connections to multiple targets
Detection Approach: Monitor for combinations of SMB administrative share access followed by suspicious process creation events. Threshold rules work well here; alert when a single source initiates SMB connections to more than N distinct targets in a time window (indicating spreading behavior).
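The spreading heuristic can be sketched as follows. This is illustrative Python over pre-filtered events, not a production rule:

```python
from collections import defaultdict

ADMIN_SHARES = {"ADMIN$", "C$", "IPC$"}

def spreading_sources(smb_events, max_targets=10):
    """Return sources that reached admin shares on more than max_targets hosts.

    smb_events is assumed to be pre-filtered to a single time window.
    """
    targets = defaultdict(set)
    for event in smb_events:
        if event["share"] in ADMIN_SHARES:
            targets[event["src"]].add(event["dst"])
    return [src for src, dsts in targets.items() if len(dsts) > max_targets]

# One source fanning out to 11 hosts looks like spreading; a single connection does not.
events = [{"src": "10.0.0.5", "dst": f"host{i}", "share": "ADMIN$"} for i in range(11)]
events.append({"src": "10.0.0.9", "dst": "host1", "share": "C$"})
print(spreading_sources(events))  # ['10.0.0.5']
```

Counting distinct destinations per source (a set, not a raw event count) is what separates fan-out behavior from one noisy admin session against a single host.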
Suspicious PowerShell Execution (T1059.001)
PowerShell is both legitimate and dangerous. Defenders see it in normal administrative activity but also in sophisticated attacks. High-fidelity PowerShell rules require deep pattern analysis.
Observable Artifacts:
- PowerShell invoked with encoded command parameters (-EncodedCommand, -enc)
- PowerShell launched from unusual parent processes (Word, Excel, browser)
- Execution policy bypasses or modifications (for example, -ExecutionPolicy Bypass)
- PowerShell downloading and executing remote scripts (IEX, DownloadString patterns)
High-Fidelity Rule Strategy: Rather than detecting all PowerShell execution (too many false positives), focus on specific patterns. Encoded commands are suspicious. PowerShell spawned from Office applications is suspicious. PowerShell connecting to external URLs and executing code is suspicious. Each pattern carries some signal on its own; combined, they're highly indicative.
`powershell_detection_logic.txt`

```
DETECTION_RULE =
    (
        (ProcessImage contains 'powershell.exe' AND
            (CommandLine contains 'EncodedCommand' OR
             CommandLine contains 'IEX' OR
             CommandLine contains 'DownloadString'))
        OR
        (ParentImage in ['WINWORD.EXE', 'EXCEL.EXE', 'chrome.exe'] AND
         ProcessImage contains 'powershell.exe')
    )
    AND NOT (
        User equals 'SYSTEM' OR
        User contains 'admin' OR
        CommandLine contains 'C:\Windows\System32\WindowsPowerShell'
    )
```
Kerberoasting (T1558.003)
Kerberoasting attacks target Kerberos service accounts by requesting TGS (Ticket Granting Service) tickets for accounts with weak passwords. The attacker then cracks these offline.
Observable Artifacts:
- Unusual volume of TGS-REQ requests from a single source
- TGS-REQ requests for service accounts not typically requested (database service accounts, legacy application accounts)
- Requests from non-administrative sources for service account tickets
- Event ID 4769 (A Kerberos service ticket was requested) with RC4 encryption
Detection Approach: Use threshold-based rules on Event ID 4769. Alert when a single source requests TGS tickets for more than N unique service accounts in a time window. The threshold should be tuned to your environment; administrative tools may legitimately request multiple tickets, but the pattern of a single source requesting dozens of service account tickets in minutes is highly suspicious.
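A rough Python sketch of this threshold logic, assuming events already parsed into dictionaries keyed by 4769 field names (ServiceName, IpAddress, TicketEncryptionType):

```python
from collections import defaultdict

def kerberoast_suspects(events_4769, max_services=20):
    """Sources requesting RC4 TGS tickets for unusually many service accounts.

    events_4769 is assumed to be pre-filtered to a single time window.
    """
    per_source = defaultdict(set)
    for event in events_4769:
        # 0x17 = RC4-HMAC; roasting tools commonly request RC4 tickets because
        # they are far cheaper to crack offline than AES-encrypted tickets.
        if event.get("TicketEncryptionType") == "0x17":
            per_source[event["IpAddress"]].add(event["ServiceName"])
    return [src for src, svcs in per_source.items() if len(svcs) > max_services]

# One workstation requesting 21 distinct service tickets in one window is flagged.
burst = [{"IpAddress": "10.0.0.7", "ServiceName": f"svc{i}",
          "TicketEncryptionType": "0x17"} for i in range(21)]
print(kerberoast_suspects(burst))  # ['10.0.0.7']
```

The `max_services` default of 20 is an assumption for illustration; tune it against your environment's baseline as described above.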
Tuning and Validation: Red Team Testing, False Positive Reduction, and Continuous Improvement
A detection rule is only as good as its performance in production. Theoretical rules that seem perfect in the lab often fail in real environments. Systematic tuning is essential.
Phase 1: Red Team Testing
Before deploying a rule to production, validate it against actual attack execution. Partner with your red team or conduct controlled penetration tests that specifically execute the attack pattern your rule targets. Does the rule fire? Does it fire reliably? If the rule misses attacks you're actively simulating, it will miss them in production.
Phase 2: False Positive Baseline
Deploy rules in audit mode or to a staging environment for 1-2 weeks and collect false positives. Document every false positive and its context. This data reveals patterns: which legitimate applications trigger the rule, which user workflows are affected, which business functions rely on the activity your rule targets.
Phase 3: Refinement Through Exclusions and Conditions
With false positive data in hand, refine your rule. Add conditions to exclude known-legitimate patterns. Use whitelisting sparingly but strategically. If a specific service account legitimately triggers credential dumping detection (backup software), add a targeted exclusion for that account rather than disabling the entire rule.
Phase 4: Continuous Monitoring and Feedback Loop
Production deployment is not the end. Monitor your rule's performance continuously. Track alert volume, investigation time, and confirmation rate. Rules that generate consistently actionable alerts should remain high priority. Rules that generate mostly false positives should be disabled or redesigned. Every quarter, review rules that have been silent for extended periods; they may be outdated or overconstrained.
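The confirmation-rate check can be sketched in a few lines of Python; the 10% cutoff here is an illustrative assumption, not an industry standard:

```python
def rule_health(alerts):
    """Summarize a rule's confirmation rate from analyst dispositions."""
    investigated = [a for a in alerts
                    if a["disposition"] in ("true_positive", "false_positive")]
    if not investigated:
        return {"confirmation_rate": None, "verdict": "silent; review quarterly"}
    rate = sum(a["disposition"] == "true_positive"
               for a in investigated) / len(investigated)
    # The 10% cutoff is illustrative; set it to match your team's capacity.
    verdict = "keep" if rate >= 0.10 else "redesign or disable"
    return {"confirmation_rate": rate, "verdict": verdict}

history = [{"disposition": "true_positive"}] + [{"disposition": "false_positive"}] * 4
print(rule_health(history))  # 20% confirmation rate -> keep
```

Running this over each rule's quarterly alert history turns "review rules that have been silent" from a vague intention into a mechanical report.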
How CVEasy AI's CTEM Platform Automates Detection Rule Validation
Manual rule testing and validation is labor-intensive. As your detection library grows, keeping rules tuned and current becomes overwhelming. This is where Continuous Threat Exposure Management (CTEM) platforms like CVEasy AI provide significant value.
CVEasy AI's CTEM platform integrates Breach and Attack Simulation (BAS) with your detection infrastructure. Rather than manually executing attacks to test rules, the platform automates attack simulation and validates that your detection rules fire correctly.
How It Works:
- Automated Attack Execution: The BAS component executes simulated attacks representing real TTPs from your threat landscape. These attacks execute in your environment using the exact tools and techniques that attackers would employ.
- Real-Time Validation: As attacks execute, the platform monitors your SIEM and detection infrastructure in real-time. It records which detection rules fire and which attacks go undetected.
- Gap Analysis: The platform generates detailed reports identifying detection gaps. Which attacks were undetected? Which rules failed to trigger? This creates a prioritized list of detection improvements needed.
- Continuous Testing: Rather than one-off red team engagements, CTEM enables continuous automated testing. Run the same attack scenarios monthly or quarterly to validate that your detections remain effective as your environment evolves.
- Rule Performance Metrics: The platform provides quantitative metrics on rule performance: false positive rate, detection latency, detection coverage by tactic and technique.
Practical Benefits:
- Reduce detection gaps from months of discovery to weeks of systematic testing
- Validate rule changes before production deployment with confidence
- Identify outdated rules that no longer represent current threats
- Justify detection investments to leadership with objective performance data
- Enable security teams to focus engineering effort on high-impact gaps
CVEasy AI's CTEM platform transforms detection rule engineering from a reactive, manual process into a proactive, systematic discipline. By automating attack simulation and validation, the platform ensures your rules remain relevant and effective as your threat landscape evolves.
Conclusion: Building a Sustainable Detection Program
Building high-fidelity detection rules is not a project; it's a program. It requires systematic methodology, clear prioritization, and continuous improvement. The three pillars of effective detection rule engineering are:
- Structured Approach: Map rules to MITRE ATT&CK, use vendor-agnostic formats like Sigma, and document rule logic clearly.
- Rigorous Tuning: Test rules against actual attacks, measure false positive rates, and continuously refine based on production performance.
- Automation and Scale: Use CTEM and BAS platforms to validate rules systematically and identify gaps before attackers exploit them.
The security teams that succeed are those that move beyond alert fatigue. They build focused, high-precision detection rules that their analysts trust. They understand their threat landscape deeply and invest detection resources accordingly. They validate continuously and improve relentlessly.
This is the practice of high-fidelity detection rule engineering. It's challenging work, but it's precisely the work that separates organizations that detect breaches quickly from those that discover them months or years later.