Early Access Preview

Quick Start

Get CVEasy AI running on your hardware in under 10 minutes. No cloud account. No API key required to get started.

Prerequisites
Apple Silicon Mac: M1, M2, M3, or M4 chip required
Ollama: local AI inference (ollama.com)
Bun Runtime: JavaScript runtime (bun.sh)

Lite Tier: Mac Mini M2+ · 16 GB unified memory. Single AI model (~5 GB VRAM). Perfect for core VM.
Pro Tier: Mac Studio M2 Max+ · 64 GB unified memory. Multi-model routing (~15 GB VRAM). Full suite + BASzy.
1. Install Ollama and pull your model(s)

CVEasy AI uses the CVEasy AI Engine (powered by Ollama) for local AI inference. Install Ollama, start its server, then pull the recommended model(s) for your tier:

# Install Ollama (macOS)
brew install ollama

# Start the Ollama server
ollama serve
Lite (16 GB Mac)
# Single optimized model (~5 GB)
ollama pull cveasy-lite
Pro (64 GB Mac)
# Primary engine (~10 GB)
ollama pull cveasy-pro

# Code engine (~5 GB)
ollama pull cveasy-coder

CVEasy AI auto-detects available models on startup. Pro users can configure model routing in Settings to assign specific models to remediation, chat, reports, code generation, and analysis tasks.
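As an illustrative sketch only, Pro-tier routing amounts to a task-to-model map like the JSON below. The file path and key names here are invented for illustration; actual routing is configured in the Settings UI, not by editing a file.

```shell
# Hypothetical task-to-model map (illustration only; real routing lives in Settings)
cat > /tmp/model-routing.json <<'EOF'
{
  "remediation": "cveasy-pro",
  "chat": "cveasy-pro",
  "reports": "cveasy-pro",
  "codegen": "cveasy-coder",
  "analysis": "cveasy-pro"
}
EOF

# Each of the five task types resolves to one pulled model
grep -c '"cveasy-' /tmp/model-routing.json
```

The point of the sketch: every task type maps to a model you have already pulled, so a Lite install (one model) simply routes everything to `cveasy-lite`.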

2. Download and run CVEasy AI

CVEasy AI is distributed as a single binary. Download the latest release for your platform:

# Extract your installation package
tar -xzf cveasy-ai-*.tar.gz
cd cveasy-ai

# Install dependencies
bun install

# Start the server
bun run start

CVEasy AI starts on http://localhost:3001. Open it in your browser.

3. Configure your company profile

Open Settings and set your industry vertical and compliance frameworks. This calibrates the TRIS score to your environment. A healthcare org and a retail org have different patch priorities for the same CVE.

Select your industry (Healthcare, Finance, Retail, Critical Infrastructure, etc.)
Select applicable compliance frameworks (HIPAA, PCI-DSS, SOC 2, NIST)
Confirm AI provider is set to Ollama (default)
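As a rough sketch of what that profile amounts to (the actual settings live in the UI; the file path and field names below are assumptions for illustration, not the real schema):

```shell
# Hypothetical profile snapshot (illustration only; configure these in Settings)
cat > /tmp/cveasy-profile.json <<'EOF'
{
  "industry": "Healthcare",
  "frameworks": ["HIPAA", "SOC 2"],
  "aiProvider": "ollama"
}
EOF

# Confirm the provider defaulted to local inference
grep -o '"aiProvider": "ollama"' /tmp/cveasy-profile.json
```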
4. Ingest your first CVEs

Three ways to get vulnerability data into CVEasy AI:

Option A: Search NVD directly
Use the search bar to look up CVEs by ID (CVE-2024-1234) or keyword. CVEasy AI pulls live data from NIST NVD and enriches it automatically.
Option B: Paste scan output
Copy a CVE list from Nessus, Qualys, or any scanner and paste it into the Ingest panel. Supports JSON and plain CVE ID lists.
Option C: Browse recent CVEs
The dashboard shows the last 30 days of NVD publications, pre-filtered by EPSS and KEV status. Start there to see what's currently active.
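For Option B, any text that contains standard CVE IDs will do. As an illustration, you can pre-filter raw scanner output down to bare IDs with the standard CVE pattern before pasting (the sample host and findings below are invented; the Ingest panel does its own parsing either way):

```shell
# Extract unique CVE IDs from arbitrary scanner text (sample input is invented)
echo "Host 10.0.0.5: CVE-2024-1234 (critical), CVE-2023-44487 (HTTP/2 Rapid Reset)" |
  grep -oE 'CVE-[0-9]{4}-[0-9]{4,}' | sort -u
```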
5. Review your triage queue

Open the CVE Triage Queue. Every ingested CVE has been automatically scored and ranked. Findings are sorted by TRIS score. KEV-listed CVEs are pinned at the top regardless of CVSS.
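The ordering can be sketched with a toy dataset. TRIS scoring itself is internal to CVEasy AI; the CSV layout and numbers below are made up purely to show "KEV-listed first, then TRIS descending":

```shell
# Toy triage data: cve_id,kev_listed(1/0),tris_score  (values invented for illustration)
cat > /tmp/triage.csv <<'EOF'
CVE-2024-0001,0,72
CVE-2023-9999,1,41
CVE-2024-0002,0,88
EOF

# KEV flag first (field 2, descending), then TRIS score (field 3, descending)
sort -t, -k2,2nr -k3,3nr /tmp/triage.csv
```

Note that the KEV-listed CVE lands on top even though its score is the lowest, mirroring the pinning behavior described above.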

Click any CVE to open the detail view and generate an AI remediation guide. The AI runs locally via Ollama. No data leaves your machine.

6. (Optional) Switch to a cloud AI provider

Ollama works great for most deployments. If you want higher-quality analysis or don't have hardware to run a local model, you can switch AI providers in Settings:

# In Settings → AI Provider
Provider: OpenAI / Azure OpenAI
API Key:  (stored encrypted, never leaves your server)
Model:    gpt-4o / etc.

API keys are encrypted at rest with AES-256-GCM. They are never exposed via the API or sent to any third party other than the AI provider you select.

What's next