ADS 404 CAPSTONE PROJECT

A Multimodal Ensemble Approach for MDD Detection

An advanced machine learning framework integrating physiological (EEG), acoustic, and semantic data to extract objective biomarkers for Major Depressive Disorder.

Launch Assessment Dashboard

Addressing the Diagnostic Gap

Screening for Major Depressive Disorder (MDD) currently relies heavily on subjective self-reporting (e.g., the PHQ-9) and clinical observation. This project employs deep learning and real-time spectral feature extraction to analyze physiological and behavioral signals directly. By fusing these data streams, the model generates a robust, objective screening metric that corroborates clinical judgment and mitigates inter-rater variability.

Tripartite Ensemble Architecture

Our ensemble machine learning architecture trains specialized models to analyze three complementary data modalities simultaneously.

🧠

Frontal Lobe Spectral Analysis

The system analyzes your resting-state brainwaves across Theta, Alpha, Beta, and Gamma bands via a single-channel TGAM biosensor. It extracts signal variance and logarithmic spectral features from the prefrontal cortex (FP1) to identify physiological baseline patterns.
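As a rough sketch of this extraction step, the log band powers and variance can be computed with Welch's method. The band boundaries, the 512 Hz sampling rate, and the feature scaling below are illustrative assumptions, not the project's exact pipeline:

```python
import numpy as np
from scipy.signal import welch

# Band boundaries in Hz -- illustrative, not the project's exact cutoffs.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def spectral_features(eeg, fs=512):
    """Signal variance plus log band power for a single EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), fs * 2))
    feats = {"variance": float(np.var(eeg))}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        # Log-compress band power; small epsilon avoids log(0).
        feats[f"log_{name}"] = float(np.log(psd[mask].sum() + 1e-12))
    return feats
```

A pure 10 Hz sine, for instance, should show its energy concentrated in the alpha band rather than gamma.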

🎙️

Wav2Vec 2.0 (Audio)

A Wav2Vec 2.0 deep learning acoustic model processes your voice. After applying Voice Activity Detection (VAD) to remove non-speech silence, it extracts 768-dimensional contextual embeddings, effectively capturing psychomotor retardation indicators like flattened pitch and slowed prosody.
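The VAD trimming step can be sketched with a simple frame-energy threshold — a stand-in assumption, not necessarily the VAD the pipeline actually uses. The trimmed 16 kHz waveform would then feed the Wav2Vec 2.0 encoder (hidden size 768), typically mean-pooled into one embedding per response:

```python
import numpy as np

def trim_silence(audio, fs=16000, frame_ms=30, rel_threshold=0.1):
    """Energy-based VAD sketch: keep the span between the first and last voiced frame."""
    frame = int(fs * frame_ms / 1000)          # samples per frame (480 at 16 kHz)
    n = len(audio) // frame
    energy = np.array([np.mean(audio[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
    voiced = np.where(energy > rel_threshold * energy.max())[0]
    if voiced.size == 0:                       # nothing above threshold: leave as-is
        return audio
    # The result would go to the Wav2Vec 2.0 encoder for 768-dim embeddings.
    return audio[voiced[0] * frame:(voiced[-1] + 1) * frame]
```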

📝

Bio-ClinicalBERT Semantic (Text)

A Whisper ASR model transcribes your speech, and a clinically fine-tuned Bio-ClinicalBERT transformer processes the meaning. It utilizes NLP to detect cognitive schemas, absolutist linguistic markers, and depression-specific semantics embedded in your verbal responses.
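One of the absolutist linguistic markers mentioned above can be sketched as a simple token ratio over the transcript. The word list here is illustrative only; the actual semantic modeling relies on Bio-ClinicalBERT embeddings rather than hand-built features:

```python
import re

# Illustrative absolutist vocabulary -- an assumption, not the project's lexicon.
ABSOLUTIST = {"always", "never", "completely", "nothing",
              "everything", "totally", "entirely"}

def absolutist_rate(transcript: str) -> float:
    """Fraction of transcript tokens that are absolutist terms."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    return sum(t in ABSOLUTIST for t in tokens) / len(tokens)
```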

Ensemble Learning

Late Fusion Meta-Learner

A Logistic Regression meta-classifier dynamically weights the probability vectors from the three sub-models to maximize overall accuracy.
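The late-fusion step might look like the following scikit-learn sketch, where each sub-model's probability becomes one input feature of the meta-classifier. Function names and the training setup are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_meta(p_eeg, p_audio, p_text, labels):
    """Fit the late-fusion meta-classifier on held-out sub-model probabilities."""
    X = np.column_stack([p_eeg, p_audio, p_text])
    return LogisticRegression().fit(X, labels)

def fuse(meta, p_eeg, p_audio, p_text):
    """Combine one sample's three modality probabilities into a fused MDD probability."""
    X = np.column_stack([np.atleast_1d(p_eeg),
                         np.atleast_1d(p_audio),
                         np.atleast_1d(p_text)])
    return float(meta.predict_proba(X)[:, 1])
```

Fitting on held-out probabilities (rather than training-set outputs) is the standard way to keep the meta-learner from inheriting sub-model overfitting.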

Evaluation Methodology

The system runs on a decoupled edge-cloud pipeline: heavy model inference executes securely on your Local Edge Node, bypassing cloud compute limits while ensuring strict data privacy.

Phase A

Device Interface

Securely establish a connection with the single-channel TGAM biosensor via the Local Edge Node's serial port, and grant browser microphone permissions for acoustic capture.
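Once the serial connection is open, the edge node has to parse the sensor's byte stream. The sketch below assumes the ThinkGear packet framing that TGAM modules typically emit (sync `0xAA 0xAA`, payload length, payload rows, ones'-complement checksum) — verify against your module's datasheet before relying on it:

```python
def parse_raw_sample(packet: bytes):
    """Extract the signed 16-bit raw EEG value (row code 0x80) from one ThinkGear packet.

    Returns None on bad sync, bad checksum, or if no raw-value row is present.
    """
    if len(packet) < 4 or packet[0:2] != b"\xaa\xaa":
        return None
    plen = packet[2]
    payload = packet[3:3 + plen]
    # Checksum is the inverted low byte of the payload sum.
    if len(payload) != plen or (~sum(payload)) & 0xFF != packet[3 + plen]:
        return None
    i = 0
    while i < len(payload):
        code = payload[i]
        if code < 0x80:            # single-byte rows: code + one value byte
            i += 2
        else:                      # multi-byte rows: code + length + data
            vlen = payload[i + 1]
            if code == 0x80:
                return int.from_bytes(payload[i + 2:i + 2 + vlen], "big", signed=True)
            i += 2 + vlen
    return None
```

In the live system the packet bytes would come from the edge node's serial port (e.g., via a serial I/O library); the parser itself is pure Python.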

Phase B

Resting-State Calibration

Complete a 30-second eyes-closed session. The system filters the raw EEG stream to isolate the target frequency bands and establish your physiological baseline.
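Isolating a band from the raw stream is commonly done with a zero-phase Butterworth band-pass; the filter order and 512 Hz sampling rate below are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, lo_hz, hi_hz, fs=512, order=4):
    """Zero-phase Butterworth band-pass isolating one frequency band."""
    nyq = fs / 2
    b, a = butter(order, [lo_hz / nyq, hi_hz / nyq], btype="band")
    # filtfilt runs the filter forward and backward, so no phase distortion.
    return filtfilt(b, a, eeg)
```

For example, filtering a 10 Hz + 60 Hz mixture through an 8–13 Hz band-pass should keep the 10 Hz component and suppress the 60 Hz one.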

Phase C

Multimodal Capture

Respond verbally to clinical prompts. The system concurrently buffers and encodes the acoustic waveform while maintaining the live EEG telemetry stream.
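Concurrent capture of the two streams can be sketched as two polling threads filling shared buffers. The callable readers and the stop-event pattern are assumptions; the real pipeline may use async I/O or device callbacks instead:

```python
import threading
import time

def capture_concurrently(read_eeg, read_audio_chunk, duration_s=0.25):
    """Poll the EEG reader and the audio reader on separate threads into shared buffers."""
    eeg_buf, audio_buf = [], []
    stop = threading.Event()

    def pump(reader, buf):
        # Keep appending samples until capture is stopped.
        while not stop.is_set():
            buf.append(reader())

    threads = [threading.Thread(target=pump, args=(read_eeg, eeg_buf)),
               threading.Thread(target=pump, args=(read_audio_chunk, audio_buf))]
    for t in threads:
        t.start()
    time.sleep(duration_s)
    stop.set()
    for t in threads:
        t.join()
    return eeg_buf, audio_buf
```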

Phase D

Inference & Output

The machine learning models extract features from the captured signals and output a fused classification result. Per-modality weights are rendered in an explainable radar chart in the UI.
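One plausible way to derive the radar-chart weights — illustrative only, not necessarily what the dashboard computes — is to weight each modality by the magnitude of its meta-classifier coefficient times its probability, then normalise:

```python
import numpy as np

def modality_weights(meta_coefs, probs):
    """Per-assessment modality contributions: |coefficient * probability|, summing to 1."""
    contrib = np.abs(np.asarray(meta_coefs, dtype=float) * np.asarray(probs, dtype=float))
    total = contrib.sum()
    if total == 0:  # degenerate case: fall back to uniform weights
        return np.full(len(contrib), 1 / len(contrib))
    return contrib / total
```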

Local Inference Records

Evaluation records are stored strictly within your browser's window.localStorage.

Timestamp | Ensemble Classification | Action