About me

I’m a Software Engineer and ML practitioner with a strong foundation in backend development, cloud infrastructure, and production-grade API design. I build secure, scalable systems using tools like FastAPI, Spring Boot, Docker, Kubernetes, and GCP, with a focus on reliability, observability, and clean engineering practices.

Alongside this engineering work, I’m increasingly interested in how machine learning models behave in real decision-making contexts, especially around uncertainty, calibration, and explainability. Recent projects include deploying end-to-end ML pipelines with monitoring, SHAP-based explanations, and dashboards that surface not just predictions, but also confidence and failure modes.

I’m particularly drawn to problems at the intersection of ML, visualization, and human–AI interaction: how to communicate risk and model limits to non-experts, how interfaces can reduce over-trust in AI, and how to turn raw predictions into tools that clinicians, analysts, and operators can safely rely on. My goal is to keep bridging solid software engineering with thoughtful, research-driven AI systems.

What I'm doing

  • Machine Learning

    Building and optimizing predictive models for data-driven decision-making.

  • Data Analysis

    Performing data mining, analytical queries, and visualization of insights.

  • Software Development

    Building scalable backend services and production-grade APIs.

  • Generative AI

    Applying generative AI techniques, including retrieval-augmented generation (RAG) and LLM-based agents, to build practical applications.

Licenses & Certifications

  1. IBM Data Science Professional Certificate

    Issued by IBM

    Courses:
    - Tools for Data Science
    - Data Science Methodology
    - Python Project for Data Science
    - Applied Data Science Capstone

  2. IBM Generative AI Engineering Professional Certificate

    Issued by IBM

    Courses:
    - Generative AI: Elevate Your Data Science Career
    - Fundamentals of AI Agents Using RAG and LangChain
    - Project: Generative AI Applications with RAG and LangChain

  3. Deep Learning and Machine Learning Certifications

    Issued by Various Institutions

    - Introduction to Deep Learning & Neural Networks with Keras
    - Machine Learning with Python

  4. Microsoft Azure Certifications

    Issued by Microsoft

    - Data Storage in Microsoft Azure
    - Microsoft Azure for Data Engineering

  5. Other Notable Certifications

    Issued by Various Institutions

    - Introduction to Artificial Intelligence (AI)
    - Data Analysis with Python
    - Fundamentals of Visualization with Tableau

Portfolio

Research

My research interests lie at the intersection of machine learning, uncertainty quantification, and human–AI interaction. I focus on how risk predictions, confidence estimates, and model explanations can be communicated in ways that align with human reasoning—especially in high-stakes domains such as healthcare.

Methodologically, I work with probability calibration (reliability curves, Brier scores), bootstrap-based uncertainty for individual predictions, and explainability techniques such as SHAP. On the interface side, I design interactive dashboards that surface model strengths, limitations, and edge cases, helping users understand not just what a model predicts, but how confident it is and why.
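To make these methods concrete, here is a minimal sketch of how calibration assessment and bootstrap-based per-prediction uncertainty can be computed with scikit-learn. The dataset, model, and all parameter choices below are illustrative stand-ins, not taken from the actual projects:

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Toy binary-classification data standing in for a clinical risk dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# Brier score: mean squared error between predicted probabilities and outcomes.
brier = brier_score_loss(y_test, probs)

# Reliability curve: bin predictions, then compare the mean predicted
# probability in each bin with the observed positive rate.
frac_pos, mean_pred = calibration_curve(y_test, probs, n_bins=10)

# Bootstrap uncertainty for one individual's risk estimate: refit the model
# on resampled training sets and look at the spread of its predictions.
patient = X_test[:1]
boot_preds = []
for seed in range(100):
    Xb, yb = resample(X_train, y_train, random_state=seed)
    m = LogisticRegression(max_iter=1000).fit(Xb, yb)
    boot_preds.append(m.predict_proba(patient)[0, 1])
low, high = np.percentile(boot_preds, [2.5, 97.5])

print(f"Brier score: {brier:.3f}")
print(f"Risk: {probs[0]:.2f} (95% bootstrap interval {low:.2f}-{high:.2f})")
```

A dashboard can then plot `mean_pred` against `frac_pos` as a reliability diagram and attach the bootstrap interval to each individual prediction, so users see both how well calibrated the model is overall and how stable a single risk estimate is.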

My broader goal is to build uncertainty-aware AI systems that support safe, calibrated decision-making—bridging my engineering background with research questions around trust, interpretability, and human-centered design.

Research Themes

  • Uncertainty & Model Calibration

    Studying how predicted probabilities, confidence signals, and error distributions can be measured, calibrated, and visualized for human decision-makers.

  • Explainability & Model Behavior

    Using SHAP and related attribution methods to characterize model logic, uncover failure modes, and connect uncertainty with local explanations.

  • Clinical AI Dashboards

    Prototyping interactive dashboards that combine risk predictions, confidence intervals, and explanations to help clinicians reason about AI output.

  • Human–AI Decision Making

    Investigating how presentation of uncertainty and explanations shapes user trust, caution, attention to edge cases, and overall decision quality.

Research-Style Projects

  1. Clinical Uncertainty & Explainability Dashboard (Heart Disease & Diabetes)

    Independent research project

    Designed calibrated logistic regression and random forest models for heart disease and diabetes risk, and built an interactive dashboard combining predicted risk, reliability curves, and bootstrap-based uncertainty. Integrated SHAP explanations to reveal feature contributions, highlight high-confidence errors, and support clinicians in evaluating model reliability.

    Keywords: uncertainty, calibration, SHAP, clinical decision support
    Links: Dashboard · Code · Mini-Paper (PDF)

  2. Breast Cancer Diagnosis – Explainability & Error Analysis

    Pipeline engineering extended into research reflection

    Built a comparative analysis of ML models for breast cancer diagnosis and extended the work with SHAP explanations, calibration assessment, MLflow experiment tracking, and monitoring. Focused on how different models behave on borderline cases and how calibration interacts with interpretability.

    Keywords: model comparison, interpretability, reliability vs. accuracy
    Links: Code

  3. AI Cognitive Classification (Bloom’s Taxonomy NLP)

    Educational NLP and model uncertainty

    Built an NLP pipeline to classify exam questions into Bloom’s Taxonomy categories using TF-IDF, word embeddings, and classical ML models. Reflected on uncertainty, class overlaps, and potential explanation mechanisms for supporting instructors.

    Keywords: NLP, cognitive modeling, text classification
    Links: Code · Article

  4. Toxic Online Behavior Detection with Classic ML & Transformers

    Trust and safety perspective

    Implemented classical and deep learning models (CNN, LSTM, zero-shot transformers) for toxic comment classification. Explored ambiguous cases, uncertainty in multi-label settings, and opportunities to surface model caution or limitations to moderators.

    Keywords: robustness, ambiguity, safety systems
    Links: Code

Working Papers & Writing

  1. Clinical Uncertainty Design Space (Mini-Paper)

    Draft in progress

    A mini-paper synthesizing lessons from my uncertainty dashboard work into a conceptual design space for clinical AI interfaces. The paper outlines strategies for layering calibrated risk, interval/bootstrapped uncertainty, and local explanations without cognitively overwhelming users.

    Status: internal draft prepared for prospective PhD advisors
    Links: Mini-Paper (PDF)

  2. Automating Question Classification with AI

    Medium article

    A practitioner-facing article summarizing my Bloom’s Taxonomy NLP project, covering data processing, model selection, and evaluation, along with reflections on how such systems can support instructors in structuring assessments more effectively.

    Links: Read on Medium
