Methodology and Transparency

This page documents exactly how Q Signals computes outputs so users can interpret results with context, not blind trust.

Signal Decision Framework

Gate 1: Composite Threshold

The weighted composite score must be above +0.10 for a BUY signal or below -0.10 for a SELL signal.

Gate 2: Vote Consensus

At least 60% of active modules must agree on the signal's direction, which reduces single-module false positives.

Confidence

Confidence rises as the composite score moves farther beyond the threshold and as module agreement strengthens. Borderline setups therefore show lower confidence.
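The two gates and the confidence behavior can be sketched as follows. This is an illustrative model only: the weights (equal here), the exact confidence formula, and the function names are assumptions, not the production implementation.

```python
# Hypothetical sketch of the two-gate decision and confidence shaping.
# Weights and the confidence formula are illustrative assumptions.

def decide(scores: dict[str, float], threshold: float = 0.10,
           consensus: float = 0.60) -> tuple[str, float]:
    """Return (signal, confidence) from per-module scores in [-1, 1]."""
    composite = sum(scores.values()) / len(scores)  # equal weights for illustration

    # Gate 1: composite must clear the +/- threshold.
    if composite > threshold:
        direction = "BUY"
    elif composite < -threshold:
        direction = "SELL"
    else:
        return "HOLD", 0.0

    # Gate 2: at least `consensus` of modules must agree on the direction.
    sign = 1 if direction == "BUY" else -1
    agree = sum(1 for s in scores.values() if s * sign > 0) / len(scores)
    if agree < consensus:
        return "HOLD", 0.0

    # Confidence grows with distance beyond the threshold and with agreement.
    confidence = min(1.0, (abs(composite) - threshold) / threshold) * agree
    return direction, round(confidence, 2)
```

A setup whose composite barely clears +0.10 with 60% agreement lands near zero confidence, while a deep composite with unanimous agreement saturates toward 1.0.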

Runtime and Fallback Transparency

DQN Runtime Mode

The primary mode is a Deep Q-Network (PyTorch) that takes 10 core signal inputs and outputs Q-values for BUY, HOLD, and SELL.

Fallback Mode

If the DQN runtime dependencies are unavailable, a tabular fallback mode is used so analysis remains available.

Operational Principle

Fallback behavior prioritizes continuity over perfect feature parity. Endpoints may return simplified or partial results under degraded conditions.
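The runtime check and fallback pattern might look like the sketch below. The bucket scheme and stored Q-values are invented for illustration; only the dependency-detection pattern and the continuity-over-parity principle come from the text above.

```python
# Illustrative runtime check and tabular fallback; the bucket scheme and
# stored Q-values below are assumptions, not the deployed model.
import importlib.util

def has_dqn_runtime() -> bool:
    """True when the PyTorch dependency needed for DQN inference is importable."""
    return importlib.util.find_spec("torch") is not None

def tabular_q_values(features: list[float]) -> dict[str, float]:
    """Coarse fallback scoring: bucket the feature sum, look up stored Q-values."""
    bucket = "up" if sum(features) > 0 else "down"
    table = {
        "up":   {"BUY": 0.6, "HOLD": 0.3, "SELL": 0.1},
        "down": {"BUY": 0.1, "HOLD": 0.3, "SELL": 0.6},
    }
    return table[bucket]
```

In this sketch the fallback trades feature parity for availability, mirroring the operational principle above: a simplified result is returned rather than no result at all.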

What the 23 Modules Measure

Module | Primary Input
Fibonacci | Retracement and extension structure from recent price swings
Elliott Wave | Wave-pattern structure and trend phase
Swing Strategy | SMA crossover, RSI, Bollinger Band context
Sentiment / Social | News and social tone direction
Insider / Institutional / Congress | Ownership and disclosed transaction behavior
Supply Chain / Competitor | Peer stress and dependency signals
Volume + Multi-Timeframe | Volume confirmation and cross-timeframe trend alignment
Sector / Correlation / VIX Regime | Macro regime and cross-asset flow context
Economic / Analyst | Macro indicator direction and analyst consensus
Price Trend / Volatility / 52w / Moving Avg | Core technical direction and overextension
RL Agent (DQN) | Learned BUY/HOLD/SELL Q-values from historical outcomes

Targets, Stops, and Holding Period

Targets and stop-loss values are ATR-scaled estimates, not guarantees. Every signal includes conservative, moderate, and aggressive targets plus a suggested hold duration that expands or contracts with volatility and the selected horizon.
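ATR scaling can be sketched as below. The multipliers are illustrative placeholders, not the service's calibrated values; only the structure (one stop plus three tiered targets, all scaled by ATR) reflects the description above.

```python
# Hedged sketch of ATR-scaled levels; the multipliers are assumed, not the
# service's calibrated values.

def atr_levels(entry: float, atr: float, direction: str = "BUY") -> dict[str, float]:
    """Return a stop plus conservative (T1), moderate (T2), aggressive (T3) targets."""
    sign = 1 if direction == "BUY" else -1
    multipliers = {"stop": -1.0, "T1": 1.0, "T2": 2.0, "T3": 3.0}  # assumed values
    return {name: round(entry + sign * m * atr, 2)
            for name, m in multipliers.items()}
```

Because the levels are multiples of ATR, a more volatile ticker automatically produces wider stops and targets, which is also why suggested hold durations stretch with volatility.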

RL Training Stages

Steps | Stage | Interpretation
0-500 | Infant | Mostly exploratory behavior
500-5,000 | Learning | Early pattern formation
5,000-25,000 | Developing | Improving but still unstable by regime
25,000-100,000 | Intermediate | Stronger pattern recognition and consistency
100,000+ | Experienced | High training volume, still requires ongoing validation
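The stage table above reduces to a simple threshold lookup. The thresholds come from the table; the function name and the lower-bound-inclusive boundary handling are assumptions.

```python
# Direct encoding of the RL training-stage table; boundary handling
# (lower bound inclusive) is an assumption.

def training_stage(steps: int) -> str:
    """Map a cumulative training-step count to its stage label."""
    if steps < 500:
        return "Infant"
    if steps < 5_000:
        return "Learning"
    if steps < 25_000:
        return "Developing"
    if steps < 100_000:
        return "Intermediate"
    return "Experienced"
```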

How KPI Pages Are Calculated

KPIs are computed from evaluated outcomes in signal history. Win rate, expectancy, and alpha are retrospective statistics and can change as more outcomes settle. Leaderboards are grouped by horizon and market regime over a rolling lookback window.
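A minimal version of the retrospective math might look like this. The record fields and the expectancy formula shown are standard conventions assumed here, not the service's exact schema; note how excluding pending signals means every KPI can shift as outcomes settle.

```python
# Sketch of retrospective KPI math over settled outcomes; field names and
# the expectancy formula are assumed conventions, not the actual schema.

def kpis(outcomes: list[dict]) -> dict[str, float]:
    """Win rate and expectancy from settled (non-pending) signal outcomes."""
    settled = [o for o in outcomes if o["status"] != "PENDING"]
    wins = [o for o in settled if o["status"] == "WIN"]
    losses = [o for o in settled if o["status"] == "LOSS"]
    win_rate = len(wins) / len(settled)
    avg_win = sum(o["return_pct"] for o in wins) / len(wins) if wins else 0.0
    avg_loss = sum(o["return_pct"] for o in losses) / len(losses) if losses else 0.0
    # Expectancy: mean return per signal = p(win) * avg_win + p(loss) * avg_loss.
    expectancy = win_rate * avg_win + (1 - win_rate) * avg_loss
    return {"win_rate": round(win_rate, 3), "expectancy": round(expectancy, 3)}
```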

Signal Outcome Evaluation Rules

Check | Rule
Targets | Each directional signal tracks three targets (T1/T2/T3). If any target is hit before the deadline, the outcome is WIN (with the level recorded).
Stop | If the stop-loss is breached before any target, the outcome is LOSS (Stopped).
Expiration | If the deadline passes without a target or stop being hit, the outcome is LOSS (Expired).
Pending Window | Signals remain pending until enough time has passed for evaluation under the selected horizon.
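The rules above can be sketched as a walk over a BUY signal's price path. This is a simplification for illustration: tick ordering, gap fills, and the exact deadline units are assumptions, and the stop is checked before targets on each step.

```python
# Sketch of the outcome-evaluation rules for a BUY signal; intrabar ordering
# and deadline units (price observations) are simplifying assumptions.

def evaluate(path: list[float], targets: list[float], stop: float,
             deadline: int) -> str:
    """Walk prices in order; the first level touched before the deadline decides."""
    for i, price in enumerate(path):
        if i >= deadline:
            break
        if price <= stop:
            return "LOSS (Stopped)"
        hit = [t for t in targets if price >= t]
        if hit:
            # Record the highest target level reached on this step.
            return f"WIN (T{targets.index(hit[-1]) + 1})"
    return "LOSS (Expired)" if len(path) >= deadline else "PENDING"
```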

Interpretation Guidance

Highest-conviction setups generally combine a threshold-clearing composite, strong module agreement, an acceptable risk/reward ratio, and RL agreement. Even then, all results remain probabilistic. Treat outputs as structured research evidence, not guaranteed outcomes.

Markets are non-stationary. Methodology transparency improves interpretation but does not eliminate risk, uncertainty, or model drift.