Independent Researcher · Taiwan

RZVN

CONTEXT HALTED, SILENCE SPEAKS.

When users engage in prolonged conversations with AI, contextual hallucinations emerge not from model errors but from the interaction itself. These hallucinations reshape cognition, trust, and self-understanding.

Current safety research focuses on the model side. This work addresses the user side.

— USCH Preprint, 2026

5 Frameworks
5 DOI Publications

Research
Architecture

Five connected layers addressing user-side contextual risk

Layer 01

CXC-7

Conversational Context Framework

Seven dimensions for analyzing conversational context risk as a multi-dimensional structure.

DOI 10.5281/zenodo.18615646 v1.1.0, 2025
Read Paper
Layer 02

CXOD-7

Contextual Offense-Defense Framework

System-side contextual operation dimensions with Contextual Coherence Coh(G).

DOI 10.5281/zenodo.17403793 2025
Read Paper
Layer 03

USCH

User-Side Contextual Hallucination

A non-clinical construct describing user-side phenomena emerging through prolonged AI interaction.

DOI 10.2139/ssrn.6135732 Preprint, 2026
Read Paper
Layer 04

USCI

Post-Interaction Assessment Method

Pre-empirical methodology with four-axis scoring (FR, CA, SR, SA) for user-side contextual risk.

DOI 10.5281/zenodo.18678458 v1.0.0, 2026
Read Paper
Layer 05

A-CSM

AI Contextual Signal Matrix

Independent, user-side detection and assessment framework. Dual-pipeline architecture combining deterministic rule-based and semantic analysis across four risk axes (FR, CA, SR, SA).

v0.1.0 Beta, March 2026 CC BY-NC-SA 4.0
Read Report

A-CSM
AI Contextual Signal Matrix

A-CSM is a deterministic pipeline that processes AI conversations through 8 sequential stages to produce a structured risk assessment report. It evaluates user-side contextual risk across four orthogonal axes, generating release decisions, risk bands, and evidence-backed findings.

Unlike conventional AI safety tools that evaluate model output toxicity, A-CSM analyzes the entire conversational context — detecting factual fabrication, boundary bypass attempts, safety violations, and system-level anomalies across every turn of interaction.

Each report includes a go/no-go release gate, a composite risk band (MINIMAL to CRITICAL), stability index, confidence score, and full audit trail — all computed deterministically without clinical inference.
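The composite risk band can be sketched as a simple threshold lookup over a composite score. The report text only fixes the endpoints (MINIMAL and CRITICAL); the intermediate band names and all cut-off values below are illustrative assumptions, not the published rubric.

```python
# Hypothetical band names and thresholds; only the MINIMAL and
# CRITICAL endpoints come from the report text.
RISK_BANDS = [
    (0.2, "MINIMAL"),
    (0.4, "LOW"),
    (0.6, "MODERATE"),
    (0.8, "HIGH"),
    (1.0, "CRITICAL"),
]

def risk_band(composite: float) -> str:
    """Map a composite risk score in [0, 1] to a named band."""
    for upper, band in RISK_BANDS:
        if composite <= upper:
            return band
    return "CRITICAL"  # anything above 1.0 clamps to the top band
```

Because the lookup is a fixed table scan, the same composite score always yields the same band, matching the deterministic, clinically-neutral computation described above.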

Deterministic · Non-clinical · Non-diagnostic · Non-punitive
Beta 1.0.0 · 2026

Four Orthogonal Axes

Each axis is scored 0–4. Higher scores indicate greater contextual risk. Events detected across all conversation turns are classified by severity and mapped to the corresponding axis.
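The scoring rule above can be sketched as follows. The severity-to-weight mapping and the max-per-axis rule are illustrative assumptions, since the published rubric is not reproduced here; only the 0–4 range and the axis names come from the text.

```python
# Hypothetical severity weights; the published rubric may differ.
SEVERITY_WEIGHT = {"low": 1, "medium": 2, "high": 3, "critical": 4}
AXES = ("FR", "CA", "SR", "SA")

def score_axes(events):
    """Map detected events to a 0-4 score per axis.

    Each event is a (axis, severity) pair gathered across all
    conversation turns; an axis takes the weight of its most
    severe event, so higher scores mean greater contextual risk.
    """
    scores = dict.fromkeys(AXES, 0)
    for axis, severity in events:
        scores[axis] = max(scores[axis], SEVERITY_WEIGHT[severity])
    return scores

events = [("FR", "high"), ("FR", "low"), ("SR", "critical")]
# score_axes(events) -> {"FR": 3, "CA": 0, "SR": 4, "SA": 0}
```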

FR

Factual Reliability

Detects fabricated citations, invented statistics, unverifiable sources, and contradictions with established datasets. Measures the factual integrity of AI-generated content within the conversation.

Example signals

Fabricated citation from non-existent paper · Invented statistic not found in official dataset · Fake quote attributed to authority

CA

Contextual Awareness

Identifies instruction drift, context conflicts, role confusion between assistant and moderator, boundary bypass attempts, and task scope jumps that may compromise interaction integrity.

Example signals

Instruction conflict with safety guidelines · Role confusion between assistant and moderator · Ambiguous policy reference as boundary bypass

SR

Safety Risk

Flags self-harm hints, violence instruction requests, harassment patterns, credential request techniques, social engineering methods, financial scam patterns, malware instruction indicators, and privacy leak concerns.

Example signals

Self-harm hint with violence instruction · Credential request via social engineering · Medical emergency reference with legal advice risk

SA

Situational Awareness

Monitors system-level anomalies including resource exhaustion, crash loops, dependency failures, rate limit bursts, storage corruption warnings, and infrastructure-level indicators that affect the reliability of interaction context.

Example signals

Timeout repeated error and crash loop · Resource exhaustion on primary node · Dependency failure on auth service with rate limit burst

Event Severity Classification

Critical
High
Medium
Low

Sample distribution from A-CSM report · Total events are aggregated across all axes and conversation turns
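The aggregation behind the distribution above can be sketched as a count over detected events; the (axis, severity) tuple shape is a hypothetical representation of the engine's event records.

```python
from collections import Counter

def severity_distribution(events):
    """Aggregate detected events into severity bands.

    `events` is any iterable of (axis, severity) pairs collected
    across all four axes and all conversation turns.
    """
    return Counter(severity for _axis, severity in events)

events = [("FR", "high"), ("CA", "medium"), ("SR", "critical"),
          ("SR", "high"), ("SA", "low")]
# -> Counter({'high': 2, 'critical': 1, 'medium': 1, 'low': 1})
```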

6-Layer Evidence Chain

Every A-CSM report is produced through a 6-layer deterministic pipeline. Each layer feeds forward into the next, building a complete evidence chain from raw input to final release decision.

01

Input Normalization & De-Identification

Conversation turns are ingested, normalized, and PII is removed through DEID masking. Output: sanitized turn count and replacement log.

8 turns, 0 PII replacements
02

Event Detection & Threat Identification

The Event Engine scans every turn for risk signals across all four axes (FR, CA, SR, SA). Each event is classified by severity: Critical, High, Medium, or Low.

46 unified events
03

VCD Inference

The Violation-Context-Decision state machine evaluates detected events through four progressive states: CLEAR → GUARDED → TRIGGERED → LOCKDOWN.

LOCKDOWN (6 events)
04

Pattern Recognition & Escalation

The Ledger/Repeat Engine tracks pattern repetition. TAG Escalation computes a Threat Assessment Group level based on accumulated evidence weight.

TAG level: HIGH
05

Risk Derivation & Validation

PS/SUB/F/E scoring derives composite risk. The Schema Invariant Service validates safety schema integrity. A FAIL triggers additional review flags.

Schema: FAIL
06

Final Decision & Audit Trail

The Release Gate issues a GO or NO_GO decision based on blocking findings count. A full audit trail is generated with confidence score, stability index, and risk band classification.

NO_GO (6 blocking)
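The layered chain above can be sketched as a chain of functions where each layer's output feeds the next. Every stage body below is a stand-in stub, not the real A-CSM engine; only the stage order and the gate rule (blocking findings force NO_GO) come from the text, and layers 04–05 are elided for brevity.

```python
def deidentify(turns):                 # layer 01: normalize + DEID masking
    return [t.replace("@", "[at]") for t in turns]

def detect_events(turns):              # layer 02: scan for FR/CA/SR/SA signals
    return [{"axis": "SR", "severity": "critical"}
            for t in turns if "password" in t.lower()]

def infer_vcd(events):                 # layer 03: CLEAR -> ... -> LOCKDOWN
    return "LOCKDOWN" if events else "CLEAR"

def release_gate(events):              # layer 06: blocking findings force NO_GO
    blocking = sum(e["severity"] == "critical" for e in events)
    return ("NO_GO" if blocking else "GO"), blocking

def run_pipeline(turns):
    """Run the evidence chain end to end; each layer feeds the next."""
    sanitized = deidentify(turns)                # 01
    events = detect_events(sanitized)            # 02
    state = infer_vcd(events)                    # 03  (04-05 elided here)
    decision, blocking = release_gate(events)    # 06
    return {"decision": decision, "vcd": state, "blocking": blocking}
```

Chaining plain functions keeps the whole run reproducible: the same sanitized input always produces the same events, state, and gate decision.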

VCD State Machine

The Violation-Context-Decision (VCD) engine is a core deterministic component. It tracks conversation state through four escalation levels, where each violation event can push the state forward but never backward.

CLEAR

No active violations detected. Baseline state at the start of every assessment.

GUARDED

Initial signals detected. The pipeline enters elevated monitoring with increased sensitivity.

TRIGGERED

Confirmed pattern escalation. Multiple high-severity events across axes exceed threshold.

LOCKDOWN

Maximum risk state. Critical blocking findings present. Release gate returns NO_GO.
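The forward-only escalation described above can be sketched as a minimal state machine: events may push the state up the ladder but never back down. The severity-to-state mapping is a hypothetical assumption; the real thresholds live inside the engine.

```python
VCD_STATES = ["CLEAR", "GUARDED", "TRIGGERED", "LOCKDOWN"]

# Hypothetical mapping from event severity to a target state.
ESCALATES_TO = {"low": "GUARDED", "medium": "GUARDED",
                "high": "TRIGGERED", "critical": "LOCKDOWN"}

class VCD:
    def __init__(self):
        self.state = "CLEAR"  # baseline at the start of every assessment

    def observe(self, severity):
        target = ESCALATES_TO[severity]
        # Monotonic: only advance if the target ranks strictly higher.
        if VCD_STATES.index(target) > VCD_STATES.index(self.state):
            self.state = target
        return self.state

vcd = VCD()
for sev in ["low", "critical", "medium"]:
    vcd.observe(sev)
# vcd.state -> "LOCKDOWN" (the later "medium" cannot de-escalate)
```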

Executive Output

Each A-CSM report produces a structured executive summary with quantified metrics. The report includes risk axis scoring, turn-by-turn event analysis, key evidence findings, VCD state trace, complete pipeline execution log, and sanitized conversation records.

All outputs are deterministic and reproducible. AI-augmented sections (CXC-7 dimension extraction, USCH assessment) are clearly marked, while core risk scoring remains fully rule-based.

Sample Report Metrics
Release Gate: NO_GO
Risk Band: CRITICAL
Total Events: 46
Blocking Findings: 6
Confidence: 0.8
Stability Index: 0.383
VCD Status: LOCKDOWN
TAG Level: HIGH

Report ID: 5a65d119-cdcd-4e38-bde7-48022b732940

Boundaries & Disclaimers

This report is NOT a medical diagnosis, psychological evaluation, clinical assessment, or any form of professional health-related determination. A-CSM is a technical risk assessment tool designed to evaluate AI conversation safety characteristics through deterministic analysis.

Risk scores are deterministically computed. AI-augmented sections (CXC-7, USCH) are model-generated and may contain inaccuracies. Outputs do not constitute legal, regulatory, compliance, or professional advice.

This report may contain sanitized conversation data. PII should have been removed during de-identification. Handle per your data classification policy.

Provided "as-is" without warranty of any kind, express or implied, regarding accuracy or fitness for purpose.

For research and internal assessment purposes only. All risk scores and classifications are computational outputs, not professional judgments.

Video Walkthrough

User-Side Hallucination (English)

AI Safety: User-Side Contextual Hallucination (Traditional Chinese)

Status & Access

A-CSM Beta 1.0.0 is available as a deterministic pipeline for evaluating AI conversation safety. Released under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) for review, academic discussion, and pilot collaboration.

DOI 10.5281/zenodo.14889729 Beta 1.0.0 · 2026 CC BY-NC 4.0

Four-Axis
Context Space

USCI scores user-side contextual risk along four independent axes, each measuring a distinct dimension. Farther from the center indicates a higher-risk contextual region.

FR Fact Reliability
CA Context Alignment
SR User-side Safety
SA System Usability

Source: USCI v1.0.0 · DOI 10.5281/zenodo.18678458 · ZON RZVN, 2026

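One illustrative way to read "farther from the center" is a normalized Euclidean radius over the four axis scores, assuming the same 0–4 range used elsewhere in this program. This composite is an illustrative reading of the chart, not the scoring rule defined in the USCI paper.

```python
from math import sqrt

AXES = ("FR", "CA", "SR", "SA")
MAX_SCORE = 4  # assumed 0-4 range per axis

def context_radius(scores):
    """Normalized distance from the low-risk center of the
    four-axis context space: 0.0 = center, 1.0 = outer edge.
    """
    r = sqrt(sum((scores[a] / MAX_SCORE) ** 2 for a in AXES))
    return r / sqrt(len(AXES))  # rescale so all-max scores give 1.0

# context_radius({"FR": 4, "CA": 4, "SR": 4, "SA": 4}) -> 1.0
# context_radius({"FR": 0, "CA": 0, "SR": 0, "SA": 0}) -> 0.0
```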

Original
Papers

Direct access to original paper versions. No content rewriting.

CXC-7

Conversational Context Framework

Seven dimensions for analyzing conversational context risk as a multi-dimensional structure.

DOI 10.5281/zenodo.18615646
Read Online →
CXOD-7

Contextual Offense-Defense Framework

System-side contextual operation dimensions with Contextual Coherence Coh(G).

DOI 10.5281/zenodo.17403793
Read Online →
USCH

User-Side Contextual Hallucination

A non-clinical construct describing user-side phenomena emerging through prolonged AI interaction.

DOI 10.2139/ssrn.6135732
Read Online →
USCI

Post-Interaction Assessment Method

Pre-empirical methodology with four-axis scoring (FR, CA, SR, SA) for user-side contextual risk.

DOI 10.5281/zenodo.18678458
Read Online →
A-CSM

AI Contextual Signal Matrix

An independent, user-side detection and assessment framework for LLM contextual hallucination. Dual-pipeline architecture combining deterministic rule-based and semantic analysis across four risk axes.

Release v0.1.0 Beta · March 2026
Read Report → GitHub Repository →

All papers are publicly accessible via Zenodo, SSRN, or GitHub. This site provides direct links to original versions only.

Positioning &
Boundaries

USCH is a non-clinical research construct, not a psychiatric diagnosis.

USCI is a pre-empirical methodology specification, not for clinical or legal decisions.

This website focuses on public research communication and direct access to original papers.

Why This
Research Matters

Current AI governance frameworks leave critical user-side safety gaps unaddressed. This research provides the missing structural layer.

12+ Documented chatbot-related deaths worldwide as of 2025
42 U.S. State Attorneys General signed joint letter urging AI safety regulation
$3.6B+ Global AI safety market size (2025), projected 35.8% CAGR through 2030

Governance Frameworks & Unaddressed Gaps

Three major governance frameworks share a common blind spot: none provide structured guidance for user-side psychological safety in prolonged AI conversations.

NIST AI RMF

United States

Provides risk management categories (Govern, Map, Measure, Manage) but lacks specific guidance on user psychological safety during extended AI interaction. No framework for detecting cognitive influence patterns.

Addressed by
CXC-7 USCH USCI

EU AI Act

European Union

Classifies high-risk AI systems and mandates compliance obligations, yet contains no specific provisions for conversational AI liability or user-side contextual risk from prolonged interaction.

Addressed by
CXOD-7 A-CSM

ISO/IEC 42001

International

Establishes AI management system requirements and controls but lacks AI interaction-specific controls. No measurement methodology for contextual risk in human-AI conversation dynamics.

Addressed by
CXC-7 CXOD-7 USCI A-CSM

Emerging Policy Landscape

California SB 243

Proposed legislation requiring chatbot makers to implement safeguards against AI-induced psychological harm, directly aligning with USCH research on user-side contextual hallucination.

FTC Investigation

Federal Trade Commission inquiry into AI chatbot companies regarding children's safety and deceptive interaction patterns — areas where CXC-7 and CXOD-7 provide structured analytical frameworks.

International AI Safety Report 2025

Multi-government report acknowledging risks of human-AI interaction but offering no measurement methodology — a gap this research program directly addresses through USCI four-axis scoring.

Moore et al. (2026)

Peer-reviewed research on AI-induced delusional spirals in prolonged conversations, providing independent empirical validation of phenomena described in the USCH framework.

What This Research Provides

CXC-7

Structural vocabulary for analyzing conversational context risk across seven dimensions

CXOD-7

System-side contextual operation analysis with Contextual Coherence measurement

USCH

Non-clinical construct defining user-side phenomena from prolonged AI interaction

USCI

Four-axis scoring methodology (FR, CA, SR, SA) for quantifying user-side contextual risk

A-CSM

Deterministic pipeline producing structured risk assessment with go/no-go release gates

Together, these five frameworks form a complete research architecture — from theory (USCH) to measurement (USCI) to implementation (A-CSM) — filling governance gaps that no existing standard addresses.

Official
Communication

For collaboration, replication planning, or interview invitations.

Include institution, objective, and expected timeline.