Project Title: AI‑Powered Smart Health Assistant
Category: Web Application
Shakeel Saeed
shakeel@vu.edu.pk
Project Domain / Category:
Web App
Abstract / Introduction
Finding clear, trustworthy guidance when you’re worried about your health shouldn’t feel overwhelming. This project proposes an AI-powered smart health assistant that listens to how people actually describe their symptoms, by text or optional voice, and translates that into likely conditions with clear, evidence-based next steps. Our goal is to reach around 80% top-3 accuracy on a curated symptom–condition dataset, while offering practical triage advice (self-care, see a GP, or seek urgent care) and recommending the right specialist for 90%+ of valid queries. To keep the experience transparent, each result comes with an easy-to-read explanation (via SHAP/LIME) for 95%+ of outputs. The system is designed to feel fast and dependable, responding in about 2.5 seconds to typical text queries and maintaining 99% uptime during the demo period. And because accessibility matters, the full interface is available in English and Urdu, with accurate voice transcription (target 90%+ on clean audio).
Safety note: This assistant offers preliminary guidance only and is not a medical diagnosis. A clear disclaimer appears on every screen, and users are encouraged to consult qualified healthcare professionals for clinical decisions.
Functional Requirements:
1. User Roles
Guest User: Submit symptoms, view results, and read basic tips.
Registered User (Patient profile): Guest features plus saved history, preferences/language, and PDF export.
Admin/Clinician Reviewer (optional): Manage knowledge base, view anonymized analytics, approve content updates.
2. Core Use Cases
UC‑01: Submit Symptoms (Text)
Input: Free‑text description; optional metadata (age, sex, duration, severity 1–5), risk flags (pregnancy, chronic disease).
Process: NLP preprocessing → symptom entity extraction → feature vector → ML/DL classifier.
Output: Top‑k likely conditions (with confidence), triage level (self‑care / see GP / urgent care), recommended specialist.
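The UC-01 pipeline above can be sketched end to end in plain Python. The synonym map, condition set, and Jaccard-overlap scoring below are illustrative stand-ins for the curated dataset and the trained ML/DL classifier, not the production model:

```python
import re

# Hypothetical synonym map and condition knowledge base for illustration;
# a real deployment would load these from the curated dataset.
SYNONYMS = {"temperature": "fever", "high fever": "fever", "throwing up": "vomiting"}
CONDITIONS = {
    "flu": {"fever", "cough", "body ache"},
    "food poisoning": {"vomiting", "diarrhea", "nausea"},
    "migraine": {"headache", "nausea", "light sensitivity"},
}

def extract_symptoms(text: str) -> set[str]:
    """Normalize free text and match known symptom phrases."""
    text = re.sub(r"[^a-z ]", " ", text.lower())
    for phrase, canonical in SYNONYMS.items():
        text = text.replace(phrase, canonical)
    vocab = {s for syms in CONDITIONS.values() for s in syms}
    return {s for s in vocab if s in text}

def predict_top_k(symptoms: set[str], k: int = 3) -> list[tuple[str, float]]:
    """Score each condition by symptom overlap (Jaccard) as a stand-in classifier."""
    scores = {
        cond: len(symptoms & syms) / len(symptoms | syms)
        for cond, syms in CONDITIONS.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

syms = extract_symptoms("I have a high fever and a bad cough")
top = predict_top_k(syms)  # "flu" should rank first here
```

In the real system the Jaccard scorer is replaced by the trained classifier, but the surrounding extract-then-predict shape stays the same.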
UC‑02: Submit Symptoms (Voice) — optional
Input: Microphone audio (8–15 s).
Process: Speech‑to‑Text → same pipeline as UC‑01.
Output: Same as UC‑01.
UC‑03: Personalized Guidance
Input: User profile (age/sex/history) and current symptoms.
Process: Rule‑augmented post‑processor (e.g., if fever + age < 5 → pediatrician).
Output: Next steps, home‑care tips, red‑flag warnings, specialist type, and when to seek care.
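The rule-augmented post-processor (such as the fever + age < 5 example above) can be a small ordered rule table applied on top of the ML prediction. The rules and triage levels below are illustrative sketches only, not clinical guidance:

```python
# Triage levels in escalating order; rules may move triage up, never down.
TRIAGE_ORDER = ["self-care", "see GP", "urgent care"]

# Each rule: (predicate over symptoms + profile, override dict). Illustrative only.
RULES = [
    (lambda s, p: "fever" in s and p.get("age", 99) < 5,
     {"specialist": "pediatrician", "triage": "see GP"}),
    (lambda s, p: "chest pain" in s,
     {"specialist": "cardiologist", "triage": "urgent care"}),
    (lambda s, p: p.get("pregnancy") and "bleeding" in s,
     {"specialist": "gynecologist", "triage": "urgent care"}),
]

def post_process(prediction: dict, symptoms: set, profile: dict) -> dict:
    """Apply red-flag rules on top of the ML prediction."""
    result = dict(prediction)
    for predicate, override in RULES:
        if predicate(symptoms, profile):
            result["specialist"] = override["specialist"]
            # Escalate triage only upward, never downgrade.
            if TRIAGE_ORDER.index(override["triage"]) > \
               TRIAGE_ORDER.index(result.get("triage", "self-care")):
                result["triage"] = override["triage"]
    return result
```

Keeping the rules as data makes them reviewable by the optional Admin/Clinician role without touching model code.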
UC‑04: Explainability Panel
Input: Model output and feature representation.
Process: SHAP/LIME explainer on the final model.
Output: Top contributing symptoms/phrases, feature‑importance plot, and a short rationale sentence.
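For a linear classifier over a bag-of-symptoms vector, per-feature contributions (weight × feature value) give the same kind of ranking SHAP produces for linear models, which is enough to sketch the explainability panel. The weights below are invented for illustration, not learned:

```python
# Hypothetical per-condition weights; a real system reads these from the
# trained model (or runs a SHAP/LIME explainer instead).
WEIGHTS = {"flu": {"fever": 1.8, "cough": 1.2, "nausea": -0.4}}

def explain(condition: str, features: dict) -> list[tuple[str, float]]:
    """Rank symptoms by the magnitude of their contribution to the prediction."""
    w = WEIGHTS[condition]
    contributions = {f: w.get(f, 0.0) * v for f, v in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

top = explain("flu", {"fever": 1, "cough": 1})  # fever ranks as top contributor
rationale = f"Mainly driven by: {', '.join(name for name, _ in top[:2])}."
```

The returned (symptom, contribution) pairs feed both the feature-importance plot and the one-sentence rationale.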
UC‑05: Multilingual UI
Process: UI strings via i18n; Urdu/English toggle with RTL support.
Output: Full UI translated; model output labels localized.
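The i18n mechanism can be sketched as a per-language string catalog with an English fallback and a text-direction flag for RTL rendering. The catalog keys and the Urdu strings below are rough placeholders, not reviewed translations:

```python
# Minimal i18n catalog sketch; production would use i18next or
# gettext-style JSON catalogs as listed under Tools.
CATALOGS = {
    "en": {"dir": "ltr", "submit": "Submit symptoms"},
    "ur": {"dir": "rtl", "submit": "علامات جمع کروائیں"},  # placeholder translation
}

def t(key: str, lang: str = "en") -> str:
    """Look up a UI string, falling back to English, then to the key itself."""
    return CATALOGS.get(lang, CATALOGS["en"]).get(
        key, CATALOGS["en"].get(key, key)
    )
```

The `dir` entry lets the frontend flip the layout to RTL whenever the Urdu catalog is active.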
UC‑06: History & Export (Registered)
Actions: Save queries and outputs; export last N results as PDF; delete history (GDPR‑like).
Output: Downloadable PDF with disclaimer and timestamp.
UC‑07: Admin Knowledge Base (optional)
Actions: Add/edit symptom synonyms, map conditions→specialist, edit self‑care templates, review logs.
Output: Versioned content with rollback.
UC‑08: Feedback Loop
Input: 1–5 helpfulness rating and free‑text feedback.
Output: Stored feedback; optional model‑monitoring dashboard.
3. Data & Models
Datasets: Public symptom–condition mappings (e.g., Kaggle/academic) plus curated synonym lists.
Preprocessing: Tokenization; symptom entity extraction; spelling normalization; optional Urdu transliteration support.
Model Options:
Baseline: Logistic Regression / SVM / RandomForest on multi‑label bag‑of‑symptoms.
NLP‑Enhanced: Clinical‑style embeddings (e.g., sentence‑BERT) feeding a classifier head.
Hybrid: Rule engine for red flags combined with ML predictions.
Explainability: SHAP Tree/Kernel explainers for tabular; LIME/SHAP for text.
4. Inputs / Outputs & Validation
Inputs:
Text: 5–500 characters; block PII (names/phone/address) with regex before storing.
Voice: 16 kHz mono WAV/WEBM; max 20 s; transcribe then discard raw audio (configurable).
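Blocking PII with regex before storage can be sketched as a list of (pattern, placeholder) pairs applied to the raw text. The patterns below are illustrative; real deployments would add locale-specific ones (e.g. Pakistani phone formats, CNIC numbers) and name detection:

```python
import re

# Illustrative PII patterns applied before any text is stored or logged.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[email]"),
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "[phone]"),
]

def scrub_pii(text: str) -> str:
    """Mask obvious PII, replacing each match with a neutral placeholder."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Scrubbing happens before persistence only; the in-memory prediction can still use the original text if the pipeline needs it.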
Outputs:
Top‑3 conditions with confidence (0–1), triage level, specialist type, explainability plot, guidance bullets, and disclaimer.
Validation & Errors:
Empty/short text → prompt for more details.
Unsafe content (self‑harm) → show emergency resources message.
Ambiguous symptoms → ask clarifying follow‑ups (fever? duration?).
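The three validation rules above amount to a small routing function run before prediction. The length thresholds, unsafe-term list, and follow-up questions below are illustrative placeholders:

```python
# Input-validation sketch; thresholds and word lists are illustrative only.
UNSAFE_TERMS = {"suicide", "self-harm", "kill myself"}
MIN_LEN, MAX_LEN = 5, 500

def validate(text: str) -> dict:
    """Route a query to an error/clarification path or pass it to the model."""
    text = text.strip()
    if not (MIN_LEN <= len(text) <= MAX_LEN):
        return {"ok": False, "action": "ask_more_details"}
    if any(term in text.lower() for term in UNSAFE_TERMS):
        return {"ok": False, "action": "show_emergency_resources"}
    if len(text.split()) < 3:  # too vague to classify reliably
        return {"ok": False, "action": "ask_follow_up",
                "questions": ["Do you have a fever?", "How long has this lasted?"]}
    return {"ok": True, "action": "predict"}
```

The unsafe-content check runs before everything else that could delay it, so the emergency-resources message is never gated behind a clarifying question.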
5. Workflows
Symptom Flow: Input → Preprocess → Predict → Post‑process (rules) → Explainability → Render → Save (if logged in).
Admin Content Flow: Edit KB → Validate schema → Save draft → Preview → Publish → Version bump.
6. Security & Privacy
Do not store raw audio by default; store only transcriptions if the user consents.
Hash user IDs; encrypt at rest (DB‑level).
Role‑based access control (RBAC) for Admin areas.
Include Terms/Privacy pages; explicit medical disclaimer on every result page.
Tools:
Programming & Core AI (Python)
Python 3.11+
ML/NLP: scikit‑learn; PyTorch or TensorFlow/Keras; Hugging Face Transformers; sentence‑transformers
NLP utils: spaCy (entity patterns), NLTK (tokenization/stopwords), regex
Explainability: SHAP, LIME
Speech (optional): Vosk/Coqui STT or Whisper (local), SpeechRecognition wrapper
Data: pandas, numpy
Backend & APIs
Framework: FastAPI (preferred) or Flask
Auth & Security: JWT (PyJWT), passlib/bcrypt, pydantic validation, rate‑limiting (slowapi)
Persistence: PostgreSQL (prod) / SQLite (dev); SQLAlchemy/SQLModel; Redis for caching
Storage: Local or S3‑compatible for exports (PDFs)
Frontend & UI
Web: React (Vite) or server‑rendered templates (Jinja) with HTMX/Alpine.js
UI Kit: Tailwind CSS; Plotly.js or Chart.js for explainability visualizations
i18n: i18next (web) or gettext‑style JSON catalogs; RTL CSS support
Supervisor:
Name: Shakeel Saeed
Email ID: shakeel@vu.edu.pk
MS Teams ID: shakeelsaeedvu@outlook.com