Complete Platform

Enterprise-Ready From Day One

36+ admin pages, 14 retrieval configuration screens, 15 health monitoring dashboards, 7 analytics views. Everything you need to deploy, tune, and operate knowledge retrieval at enterprise scale.

Intelligent AI Pipelines

Self-correcting retrieval that decomposes, validates, and refines every answer automatically

Multi-Strategy Retrieval

Vector, keyword, and knowledge graph search fused into one ranking
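One common way to fuse several ranked result lists into a single ranking is Reciprocal Rank Fusion (RRF). The sketch below is a generic illustration of that technique, not Courdx's actual ranking code; the document IDs and the `k` constant are made up.

```python
# Illustrative Reciprocal Rank Fusion (RRF): each retrieval strategy
# contributes 1/(k + rank) per document, and the sums are re-ranked.
def rrf_fuse(result_lists, k=60):
    """Merge ranked lists of doc IDs into one fused ranking."""
    scores = {}
    for ranking in result_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits  = ["doc_a", "doc_b", "doc_c"]   # semantic similarity
keyword_hits = ["doc_b", "doc_d", "doc_a"]   # lexical match
graph_hits   = ["doc_b", "doc_a", "doc_e"]   # knowledge-graph hop

fused = rrf_fuse([vector_hits, keyword_hits, graph_hits])
# doc_b and doc_a rank high in all three lists, so they lead the fused ranking
```

Because RRF only looks at ranks, it sidesteps the problem of comparing incompatible score scales (cosine similarity vs. BM25 vs. graph distance).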

Enterprise Security

On-prem deployment, DLP policies, and collection-level access control

36+ Admin Pages

Full control panel: health monitoring, analytics, retrieval tuning, and more

Capabilities

Complete Feature Set

Not a wrapper around an LLM. A production platform with deep configurability for teams that need to tune, monitor, and trust their AI.

01

Intelligent Chat Interface

  • Real-time streaming responses
  • Confidence indicators on every answer
  • Citation tooltips with source preview
  • Conversation history with search
  • Project/folder organization
  • File attachments and drag-drop
02

AI Assistants (Personas)

  • Custom system prompts per use case
  • Task-specific instruction templates
  • A/B testing different prompts (Prompt Lab)
  • Per-channel configuration for Slack
  • Version control for prompt changes
03

Document Intelligence

  • Semantic chunking — splitting documents by meaning, not arbitrary length
  • Automatic duplicate detection
  • Quality scoring before indexing
  • PII detection and redaction
  • Metadata extraction
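The idea behind semantic chunking can be sketched as: start a new chunk whenever the similarity between adjacent sentences drops below a threshold. The bag-of-words cosine used here is a toy stand-in for real sentence embeddings, and the threshold and sample text are illustrative assumptions, not Courdx's implementation.

```python
import math

def _vec(sentence):
    # Toy stand-in for a sentence embedding: word-count vector.
    v = {}
    for w in sentence.lower().split():
        v[w] = v.get(w, 0) + 1
    return v

def _cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(x * x for x in a.values())) * \
          math.sqrt(sum(x * x for x in b.values()))
    return num / den if den else 0.0

def semantic_chunks(sentences, threshold=0.2):
    """Break the sentence stream where topical similarity drops."""
    chunks, current = [], [sentences[0]]
    for prev, nxt in zip(sentences, sentences[1:]):
        if _cosine(_vec(prev), _vec(nxt)) < threshold:
            chunks.append(current)
            current = []
        current.append(nxt)
    chunks.append(current)
    return chunks

doc = [
    "Invoices are processed nightly by the billing service.",
    "The billing service retries failed invoices twice.",
    "Vacation requests go through the HR portal.",
]
chunks = semantic_chunks(doc)
# The two billing sentences stay together; the HR sentence starts a new chunk
```

The payoff over fixed-length splitting is that a chunk retrieved later tends to be a self-contained unit of meaning rather than a fragment cut mid-topic.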
04

Analytics & Insights (7 Pages)

  • LLM usage stats: token costs, model distribution, trends
  • Query history with relevance scoring
  • User feedback and satisfaction tracking
  • Alert rules for anomaly detection
  • Document statistics and coverage analysis
05

System Health (15 Pages)

  • Real-time CPU, memory, and queue monitoring
  • Vespa cluster health and chunk indexing status
  • Failed document tracking and retry
  • Connector health across every integrated source
  • Database browser and system logs
  • Parsing metrics and error analysis
06

Retrieval Configuration (14 Pages)

  • 6 embedding models including Qwen3, Nomic, MxBai, and Snowflake Arctic
  • Multi-LLM support (OpenAI, Anthropic, Ollama)
  • 4 parsing providers (Unstructured.io, Adobe PDF, AWS Textract, Custom)
  • Hybrid search tuning with graph retrieval and entity boost
  • RAGAS framework for automated retrieval-quality evaluation
  • Guardrail sensitivity tuning (input + output)
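To make the tuning knobs above concrete, here is a hedged sketch of how a hybrid score with an entity boost might combine: a weighted blend of normalized vector and keyword scores, plus a bonus per matched graph entity. The weights, field names, and boost rule are assumptions for illustration, not Courdx's actual scoring formula.

```python
def hybrid_score(doc, alpha=0.7, entity_boost=0.1, query_entities=()):
    """Blend vector/keyword scores; boost docs sharing query entities."""
    base = alpha * doc["vector_score"] + (1 - alpha) * doc["keyword_score"]
    matched = len(set(doc["entities"]) & set(query_entities))
    return base + entity_boost * matched

doc = {"vector_score": 0.8, "keyword_score": 0.5, "entities": ["ACME", "Q3"]}
score = hybrid_score(doc, query_entities=["ACME"])
# 0.7 * 0.8 + 0.3 * 0.5 + 0.1 * 1 = 0.81
```

Raising `alpha` favors semantic recall; raising `entity_boost` pulls graph-connected documents up the ranking, which is the kind of trade-off these configuration pages expose.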

Real Product

See the Actual Interface

No mockups. These are real screenshots from a running Courdx deployment.

Retrieval Pipeline Configuration
Courdx retrieval pipeline configuration showing 8 configuration tabs, embedding model selection, and chunking settings
14 configuration pages — embedding models, search tuning, parsing providers, prompt templates, and more
System Health Monitoring
Courdx system monitoring showing real-time CPU usage, memory consumption, API performance metrics, and retrieval pipeline health
15 health dashboards — real-time CPU, memory, queue depth, API latency, and retrieval pipeline metrics

What Makes These Features Different?

Learn about the intelligent retrieval, knowledge graph, and trust systems that power Courdx.

See These Features In Action

Schedule a demo to explore the full platform with your team.