
TRIS v2: The 12-Layer Vulnerability Intelligence Engine Built by Analysts, Not Cloud Vendors

CVSS tells you severity. EPSS predicts exploitation. Every cloud-first scoring engine sends your asset inventory, your SBOM, and your network topology off-site to get a number back. TRIS v2 runs on your hardware across twelve layers, including five dimensions no competitor has. Your data never leaves your building.

Chris Boker · Founder, CVEasy AI · April 13, 2026 · 14 min read
TRIS v2 12-layer vulnerability intelligence engine visualization

The vulnerability management industry just got more crowded. In the last six months, three AI-first scoring engines have launched. One from a hyperscaler, one from a research lab, and one from an AI foundation model company trying to convince CISOs that a cloud-hosted tiered intelligence engine is the future of CTEM. They're all impressive. They're all powerful. They all have the same structural problem.

They want your data.

Not in the abstract. Not in a "we're GDPR-compliant, trust us" way. Literally: to score your vulnerabilities the way these engines claim, you have to ship them your asset inventory, your network topology, your software bill of materials, and, if they're doing it right, your real-time BAS validation results. That data is the most sensitive inventory your organization owns. It's the shopping list a threat actor would pay seven figures for on a breach forum. And every cloud-first scoring vendor is asking you to upload it to their infrastructure so they can compute a number and send it back.

TRIS v2 is what a different answer looks like.

TRIS v1 shipped 30 days ago. TRIS v2 ships today. Twelve layers. Five of them brand new. 100% local-first. Patent pending. Zero data exfiltration. v1 answered a question CVSS never did. A month of production use across five verticals exposed five gaps we had not anticipated. So we did not wait for a Q4 release cycle. We iterated. v2 combines static severity, exploitation probability, active exploitation, threat actor targeting, asset criticality, exposure topology, BAS validation, attack-path blast radius, supply-chain propagation, MITRE ATT&CK defense efficacy, predictive threat trajectory, and FAIR-based financial impact. In a single score. On your hardware.

What 30 Days of Production Use Exposed

Thirty days ago we shipped TRIS v1. Seven layers: CVSS base severity, EPSS exploitation probability, CISA KEV active exploitation, threat actor targeting, asset criticality, public exposure, and BASzy exploit validation. It was designed to answer a question CVSS has never answered: "How urgently do I need to fix this, on this asset, in my environment?"

TRIS v1 worked. From the first week. Early cohort data from healthcare, finance, federal systems integrators, and critical infrastructure environments showed that the top 10% of TRIS v1 scores captured more than 80% of the vulnerabilities teams actually had to remediate. Triage time dropped by a factor of four almost immediately. CVSS-only programs were replaced with TRIS-driven programs inside the first two weeks because the signal-to-noise ratio was better by an order of magnitude.

Then the second week happened. Then the third. And the same five gaps kept surfacing. Five different verticals, five different environments, five different analysts, all pointing at the same missing pieces. When that kind of signal lines up independently across a practitioner cohort, you do not file it in a backlog for next year. You ship v2.

Thirty days from v1 to v2. That is not a marketing cadence. That is what happens when the product team and the practitioner are the same person.

Here are the five gaps. They weren't small.

Gap 1: Your network topology is a graph, not a list.

TRIS v1 treated asset criticality as a per-asset property. A crown-jewel server scored higher than a test box. Fine, as far as it goes. But it didn't ask the question that actually matters during an incident: how many assets can this vulnerability reach? A "medium" severity vulnerability one hop from your domain controller is an existential risk. The same vulnerability on an isolated development laptop is a ticket to close in the next sprint. CVSS can't tell them apart. EPSS can't tell them apart. TRIS v1 couldn't tell them apart either.
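The reachability question is easy to sketch even though the real TRIS graph model is proprietary. A minimal BFS over a hypothetical lateral-movement graph, counting how many assets a foothold can reach and how many hops separate it from the nearest crown jewel:

```python
from collections import deque

def blast_radius(graph, start, crown_jewels):
    """BFS over a directed lateral-movement graph: how many assets a
    foothold on `start` can reach, and how many hops to the nearest
    crown-jewel system (None if unreachable).
    `graph` maps asset -> assets reachable in one lateral hop."""
    hops = {start: 0}
    queue = deque([start])
    nearest_jewel = None
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in hops:
                hops[nxt] = hops[node] + 1
                # BFS visits nodes in hop order, so the first jewel found
                # is the closest one
                if nxt in crown_jewels and nearest_jewel is None:
                    nearest_jewel = hops[nxt]
                queue.append(nxt)
    return len(hops) - 1, nearest_jewel  # exclude the start asset itself
```

With `graph = {"api-gw": ["idp"], "idp": ["dc"]}`, a CVE on `api-gw` yields `(2, 2)` against crown jewel `dc`, while the same CVE on an isolated laptop yields `(0, None)`. Same vulnerability, very different risk.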

Gap 2: Your software is a dependency tree, not a list either.

When Log4Shell broke, organizations spent the first three days of the incident just enumerating which applications transitively depended on the vulnerable library. Not which applications imported it directly, which was trivial. Which applications pulled in a library that pulled in a library that pulled in a library that used Log4Shell five layers deep. That's the actual shape of modern software risk. No vulnerability scoring system quantifies transitive SBOM-aware risk. Not one.
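Quantifying that shape reduces to shortest-path depth in a dependency graph. A minimal sketch with hypothetical package names (real SBOM ingestion, e.g. parsing CycloneDX documents, is out of scope here):

```python
def vulnerable_depth(dep_tree, app_roots, vulnerable_lib):
    """For each application root, the shallowest depth at which a
    vulnerable library appears transitively (1 = direct dependency).
    Apps that never reach the library are omitted.
    `dep_tree` maps package -> list of its direct dependencies."""
    def min_depth(pkg, depth, seen):
        if pkg == vulnerable_lib:
            return depth
        if pkg in seen:          # dependency cycles terminate here
            return None
        found = [d for dep in dep_tree.get(pkg, [])
                 if (d := min_depth(dep, depth + 1, seen | {pkg})) is not None]
        return min(found) if found else None

    return {app: d for app in app_roots
            if (d := min_depth(app, 0, set())) is not None}
```

For a Log4Shell-shaped tree where `billing-app` pulls the library three hops deep and `batch-app` imports it directly, this returns `{"billing-app": 3, "batch-app": 1}` and silently drops unaffected apps.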

Gap 3: A vulnerability you can defend against is not the same risk as one you can’t.

Every scoring system in the market tells you how bad a vulnerability is. None of them tell you how well you can defend against it. If CVE-X maps to three MITRE ATT&CK techniques and your EDR already detects all three with high confidence, the effective risk is materially different from CVE-Y that maps to three techniques your controls don't cover at all. Scoring systems ignore defensive posture entirely. That's a bug, not a feature.
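The shape of a defense efficacy coefficient is simple to illustrate. The exponential half-life decay below is my assumption for the "freshness-weighted by BAS validation age" idea, not a published TRIS weight:

```python
import math

def defense_efficacy(technique_coverage, validation_age_days,
                     half_life_days=30.0):
    """Fraction of a CVE's mapped ATT&CK techniques your controls detect,
    discounted by how stale the last BAS validation is. The exponential
    half-life is an illustrative assumption, not a published TRIS weight.
    `technique_coverage` maps technique ID -> detection confidence in [0, 1]."""
    if not technique_coverage:
        return 0.0               # no mapped techniques covered at all
    freshness = math.exp(-math.log(2) * validation_age_days / half_life_days)
    return freshness * sum(technique_coverage.values()) / len(technique_coverage)
```

Fully covered techniques validated today score 1.0; the same coverage validated 30 days ago scores 0.5, because unvalidated controls decay toward "assume nothing."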

Gap 4: EPSS is backward-looking.

EPSS is one of the best things to happen to vulnerability management in the last decade. I've defended it in public and I'll defend it again. But it's trained on historical data. It's very good at telling you what's been exploited before. It's less good at telling you what's about to be exploited next week, the vulnerability whose exploit kit just tripled in fork velocity on GitHub, whose chatter just spiked on Russian-speaking exploit forums, whose PoC just graduated from a shell script to a Metasploit module. Those signals are observable. TRIS v1 ignored them.
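Those signals are observable, and the simplest trajectory metric is a week-over-week ratio. A deliberately bare sketch (TRIS's actual trajectory model is presumably richer than a single ratio):

```python
def exploit_momentum(weekly_counts):
    """Week-over-week ratio of an observable exploit signal, e.g. PoC
    repository forks or forum mentions per week. >1 is accelerating,
    <1 is decaying; a signal appearing from zero is treated as maximal."""
    if len(weekly_counts) < 2:
        return 1.0               # not enough history: treat as flat
    prev, cur = weekly_counts[-2], weekly_counts[-1]
    if prev == 0:
        return float("inf") if cur > 0 else 1.0
    return cur / prev
```

A fork history of `[3, 5, 15]` yields 3.0, the "tripled in fork velocity" case described above, and that momentum is visible well before a KEV entry lands.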

Gap 5: Your board asks you how much it costs, and you hand them a 9.8.

Security teams report in CVSS. Boards report in dollars. Every other function in your organization (finance, legal, operations) has figured out how to translate technical state into monetary impact. Security hasn't. Or more precisely: security has a framework for it (FAIR), and nobody's built it into a scoring engine that runs automatically. Until now.
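FAIR is indeed automatable at the sketch level. A Monte Carlo expected-loss estimate over triangular distributions; note that `random.triangular` takes its arguments as (low, high, mode), and the probability-gated secondary loss is my simplification of the FAIR taxonomy, not the TRIS implementation:

```python
import random

def fair_expected_loss(lef, primary, secondary, secondary_prob,
                       trials=100_000, seed=7):
    """Monte Carlo sketch of FAIR expected annual loss. Each of `lef`,
    `primary`, `secondary` is a (low, high, mode) triple passed straight
    to random.triangular (note the argument order). Secondary loss
    (fines, legal, notification) only occurs with `secondary_prob`."""
    rng = random.Random(seed)    # fixed seed for reproducible estimates
    total = 0.0
    for _ in range(trials):
        loss = rng.triangular(*primary)
        if rng.random() < secondary_prob:
            loss += rng.triangular(*secondary)
        total += rng.triangular(*lef) * loss
    return total / trials
```

Feed it calibrated estimates of loss event frequency, primary loss, and secondary (regulatory) loss and you get a dollar figure a board can act on, instead of a 9.8.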

The Five Novel Layers

TRIS v2 closes every one of those gaps. Layers 1 through 7 are the same battle-tested signals TRIS v1 shipped with. Layers 8 through 12 are new, patent pending, and none of them exist in any other vulnerability scoring system on the market.

L8 · Attack Path Blast Radius (new)

Graph-based lateral movement. Models your network as a directed graph. Quantifies how many assets a vulnerability can reach and topological proximity to crown-jewel systems.

L9 · Supply Chain Propagation (new)

SBOM-aware transitive risk. How deep in your dependency tree, how many applications are affected, whether a fixed version exists. Log4Shell, modeled correctly.

L10 · Defense Efficacy Coefficient (new)

MITRE ATT&CK technique coverage mapping. Percentage of the exploitation chain actually covered by your defenses, freshness-weighted by BAS validation age.

L11 · Predictive Threat Trajectory (new)

Forward-looking momentum forecast. Tracks week-over-week exploit development velocity and forum chatter. Fast-movers, before they hit the KEV catalog.

L12 · Financial Impact (FAIR) (new)

FAIR-based dollar-value risk. Primary loss plus secondary loss (GDPR, HIPAA, PCI) plus productivity loss. The number your board actually understands.

I won't bore you with the exact weights here; those are in the white paper, and the proprietary multipliers are in the patent filings. What I want you to understand is that these aren't five individual features. They're a coordinated intelligence engine. When L8 says "this vulnerability can reach 47 downstream systems through 3 pivot paths," and L9 says "it's three dependency hops deep and affects 12 production apps," and L10 says "your ATT&CK coverage of this chain is 31%," and L11 says "exploit forks tripled this week," and L12 says "expected loss is $1.94M," the composite is not the sum of those numbers. It's the correlation of them.
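To illustrate why correlation beats summation, here is a deliberately simplified composite: a weighted base that gets an uplift only when several layers are simultaneously hot. The weights, `hot` threshold, and `synergy` factor are placeholders, not the patented TRIS multipliers:

```python
def composite_score(layers, weights, synergy=0.15, hot=0.7):
    """Illustrative composite in [0, 100]: a weighted base amplified when
    multiple layers are hot at once, so correlated signals compound
    instead of merely adding. All constants here are placeholders.
    `layers` and `weights` map layer name -> value in [0, 1]."""
    base = sum(weights[name] * layers[name] for name in weights)
    n_hot = sum(1 for v in layers.values() if v >= hot)
    # each additional hot layer beyond the first amplifies the base
    uplift = synergy * (n_hot - 1) if n_hot > 1 else 0.0
    return round(min(1.0, base * (1 + uplift)) * 100)
```

Four hot layers push a 0.88 base to the ceiling; one hot layer among cold ones leaves a low score alone. That is the correlation effect, in miniature.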

The Architecture Choice That Matters

Here's what I want to be very direct about, because it's the part that separates TRIS v2 from every AI-first cloud scoring engine currently trying to eat this market.

Three of TRIS v2's twelve layers require deep knowledge of your internal network topology, your complete asset inventory, and your full software bill of materials: Layer 5 (asset criticality), Layer 8 (attack path blast radius), and Layer 9 (supply chain propagation).

Sending that data to a cloud scoring service is not a minor engineering choice. It is a compounding risk. Your SBOM is a complete list of every exploitable library in your production stack. Your network topology is a complete map of every segmentation boundary a threat actor would need to bypass. Your asset inventory is a complete accounting of every crown-jewel system worth targeting. Handing that information to a vendor, any vendor, however trustworthy, is a decision that should be made with eyes open and with architectural alternatives considered first.

Cloud-first scoring engines don't consider the alternative. They can't. Their business model requires centralization. Their ML training requires pooled customer data. Their architecture is optimized for one thing: getting your intelligence into their infrastructure so they can learn from it and sell the improved model back to you, their competitor, and any government entity that files the right paperwork. That's not a conspiracy theory. It's written into the terms of service on several of them. Read them.

TRIS v2 doesn't make that tradeoff because it doesn't need to. It's built on a local-first architecture, which means three things: every layer is computed on your hardware, your inventory, topology, and SBOM never leave your environment, and no pooled customer data feeds anyone else's model.

This isn't a compliance talking point. It's the architecture.

Your scoring engine should answer to practitioners, not to someone else's cloud.

A Tale of Two CVEs

Let's make this concrete. Here are two vulnerabilities CVSS and TRIS v1 would score similarly, and what TRIS v2 does with them. The examples are composited from real customer deployments. CVE numbers changed.

CVE-2026-EXAMPLE-A · Internal monitoring tool

CVSS: 9.8 Critical · EPSS: 0.04 · KEV: No

L8 Attack Path: Affected asset is 4 hops from crown-jewel via network segmentation. Limited blast radius.

L9 Supply Chain: Direct dependency only. One application affected. Patch available.

L10 Defense Efficacy: Mapped ATT&CK techniques 92% covered by your EDR. BAS validation 12 days old.

L11 Trajectory: Decaying. No new exploit activity in 30 days.

L12 Financial: $38K expected loss (primarily productivity).

CVSS says: Fix first.  →  TRIS v2 says: Band 03 · TRACK (score: 52)

CVE-2026-EXAMPLE-B · Public-facing API gateway

CVSS: 7.5 High · EPSS: 0.91 · KEV: Yes

L8 Attack Path: Affected asset is directly adjacent to the identity provider. 47 downstream systems reachable.

L9 Supply Chain: Transitive dependency, 3 hops deep. Affects 12 production applications. No coordinated patch yet.

L10 Defense Efficacy: Mapped ATT&CK techniques 31% covered. BAS validation failed last run.

L11 Trajectory: Accelerating. Exploit forks tripled week-over-week. Two named APT groups actively adopting.

L12 Financial: $1.94M expected loss (primary + contractual exposure under SOC 2 + productivity).

CVSS says: Fix second.  →  TRIS v2 says: Band 01 · ACT (score: 94)

The CVSS ordering is backwards. Every hour your team spends patching Example A while Example B is live and accelerating is an hour burned on the wrong problem. TRIS v2 surfaces this inversion on day one, not after the incident review.
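The bands above come from thresholds on the composite score. A hypothetical mapping that reproduces the two bands named in this post; the cutoffs and the other two labels are my placeholders, not published TRIS values:

```python
def band(score):
    """Map a 0-100 composite score to an action band. ACT and TRACK are
    the bands shown in the examples above; the other labels and all
    thresholds here are hypothetical placeholders."""
    if score >= 85:
        return "Band 01 · ACT"
    if score >= 65:
        return "Band 02 · PLAN"      # hypothetical label
    if score >= 40:
        return "Band 03 · TRACK"
    return "Band 04 · ACCEPT"        # hypothetical label
```

Under this mapping, Example B's 94 lands in ACT and Example A's 52 lands in TRACK, matching the inversion described above.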

What's Different from "Tiered AI Engines"

I want to address the category of product that TRIS v2 is going to get compared to, because the comparison is inevitable and it deserves a direct answer.

There are newer vulnerability intelligence engines that call themselves "tiered," "AI-native," or "LLM-powered." Some of them are impressive. Some of them are backed by credible research organizations. They are, nearly without exception, cloud-first. Your data goes to them. Their model scores it. The score comes back. You trust them.

That architecture is optimized for one thing: maximizing the training signal the vendor captures from your production environment. It's a good deal for the vendor. The value they capture grows faster than the value they return. Every customer makes their model better, and every improvement widens the moat around their platform. Classic SaaS flywheel. Great for investors. Not great for CISOs.

TRIS v2 is the opposite of that. It was built by someone who works in SOCs, not someone building a category. The architecture is local-first, transparent in how the score is assembled, and free of any pooled-training flywheel: your environment's signal improves your own prioritization, not a vendor's model.

This is what "built by practitioners, for practitioners" looks like when it's architectural, not marketing.

What Ships Today

TRIS v2 is in CVEasy AI v1.1, shipping now. Existing CVEasy AI customers will see new columns in their vulnerability views within the next scheduled update window. Net-new customers will get TRIS v2 out of the box.

Specifically, TRIS v2 brings twelve independent intelligence layers, five of them brand new, zero cloud dependency, and one actionable score.

Why This Matters

Vulnerability management is one of the few domains in security where the scoring primitive genuinely controls everything downstream. If your scoring is wrong, every decision stacked on top of it (ticket routing, SLA assignment, remediation prioritization, board reporting, compliance attestation) is also wrong. CVSS has been the default scoring primitive for fifteen years. Fifteen years is a long time to accept "only 2.3% of high-severity CVEs are ever exploited" as a state of affairs.

TRIS v2 is our second shot at the problem, shipped 30 days after the first one. It is not the final answer. It will not be. Something will replace it eventually, and when that happens I hope the team that builds it iterates as fast as we did and stays as close to the practitioner as we are. But for today, for the current state of vulnerability intelligence, for the current state of threat actor tradecraft, and for the current economics of cloud-first SaaS, TRIS v2 is the most honest answer we know how to build.

Twelve layers. Local-first. Transparent. Patent pending. Zero data exfiltration.

Built by an analyst, not a category.

See TRIS v2 on your own scan data.

Request a demo and watch the engine rescore your real vulnerability backlog across all twelve layers. Your data stays in your environment. Always.