Hasan Shaikh

Clinical Data Scientist • Medical AI Researcher

Developing CT radiomics models for locoregional recurrence prediction in head and neck cancers, and 3D deep learning architectures for automated tumor segmentation.

Currently at QIRAIL, CMC Vellore, working on multi-institutional validation studies and FAIR-compliant imaging infrastructure.


Research Focus

My work addresses a fundamental challenge in medical AI: how can radiomics models maintain predictive performance across diverse anatomical subsites and heterogeneous clinical settings?

I develop CT radiomics models for locoregional recurrence prediction in head and neck cancers, working across multiple anatomical subsites to investigate feature stability and model generalizability. Current projects include prognostic model development from pretreatment CT imaging and nnU-Net architectures for automated tumor segmentation.

At QIRAIL, I'm building FAIR-compliant imaging infrastructure supporting multi-institutional retrospective studies—standardizing DICOM processing pipelines, implementing quality control workflows, and ensuring reproducibility in feature extraction.

Background

QIRAIL, CMC Vellore (Current)

Clinical Data Scientist developing CT radiomics models for locoregional recurrence prediction in head and neck cancers across multiple anatomical subsites. Building automated segmentation pipelines using nnU-Net, extracting and analyzing radiomic features from pretreatment imaging, and establishing FAIR data infrastructure for multi-center studies.

Aligarh Muslim University

M.Tech in Artificial Intelligence (CGPA: 8.80/10). Thesis: Multimodal survival prediction integrating clinical, imaging, and genomic data. Investigated late fusion strategies and feature-level integration for Cox proportional hazards models.
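As a rough illustration of the feature-level integration side of that work, here is a minimal Cox fit on concatenated modality tables using lifelines; the file names, column names, and penalizer value are illustrative placeholders, not the actual thesis setup.

```python
# Sketch: feature-level integration of clinical, imaging, and genomic tables
# before fitting a Cox proportional hazards model. Paths and columns are
# hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

clinical = pd.read_csv("clinical_features.csv", index_col="patient_id")
imaging = pd.read_csv("radiomic_features.csv", index_col="patient_id")
genomic = pd.read_csv("genomic_features.csv", index_col="patient_id")

# Concatenate modalities on patient ID (feature-level integration)
fused = clinical.join([imaging, genomic], how="inner")

# 'time' and 'event' are assumed survival columns in the clinical table;
# a light penalizer helps with the high-dimensional fused feature matrix
cph = CoxPHFitter(penalizer=0.1)
cph.fit(fused, duration_col="time", event_col="event")
cph.print_summary()
```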

Research Interests

Medical image analysis, particularly radiomics feature stability and generalization. Deep learning for segmentation in low-data regimes. Uncertainty quantification in clinical prediction models. Multi-modal learning for oncology applications.

Technical Work

Radiomics pipelines using PyRadiomics for feature extraction from pretreatment CT imaging. Investigating first-order, shape, and texture features for locoregional recurrence prediction in head and neck cancers across multiple anatomical subsites.
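A minimal sketch of what one extraction step looks like with PyRadiomics; the file paths are placeholders and the enabled feature classes are just one reasonable configuration, not the full pipeline settings.

```python
# Minimal PyRadiomics sketch: first-order, shape, and one texture class (GLCM)
# extracted from a pretreatment CT and its tumor mask. Paths are placeholders.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("shape")
extractor.enableFeatureClassByName("glcm")  # one example texture class

# Image and mask are assumed to be co-registered NIfTI volumes
features = extractor.execute("ct_pretreatment.nii.gz", "tumor_mask.nii.gz")

# Keep the numeric feature values, dropping the diagnostic metadata entries
radiomic_features = {k: v for k, v in features.items()
                     if not k.startswith("diagnostics")}
print(f"Extracted {len(radiomic_features)} features")
```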

Deep segmentation models adapted from nnU-Net for lymph node delineation. Exploring data augmentation strategies and transfer learning approaches for small medical imaging datasets.
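For the augmentation side, a sketch of the kind of 3D pipeline I mean, written with MONAI dictionary transforms (an assumed dependency here, not part of the stack listed below; nnU-Net applies its own augmentation internally, so this only illustrates the idea).

```python
# Sketch of a 3D augmentation pipeline for small segmentation datasets.
# MONAI is an assumed dependency used only to illustrate the approach.
from monai.transforms import (
    Compose, RandFlipd, RandRotate90d, RandGaussianNoised, RandAffined
)

train_transforms = Compose([
    # Spatial augmentations applied jointly to image and label
    RandFlipd(keys=["image", "label"], prob=0.5, spatial_axis=0),
    RandRotate90d(keys=["image", "label"], prob=0.5, spatial_axes=(0, 1)),
    RandAffined(
        keys=["image", "label"],
        prob=0.3,
        rotate_range=(0.1, 0.1, 0.1),  # small random rotations (radians)
        scale_range=(0.1, 0.1, 0.1),
        mode=("bilinear", "nearest"),  # nearest for the label to keep it discrete
    ),
    # Intensity augmentation applied to the CT image only
    RandGaussianNoised(keys=["image"], prob=0.2, std=0.01),
])

# Usage: sample = train_transforms({"image": ct_volume, "label": mask_volume})
```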

DICOM processing infrastructure with AWS S3 for decentralized data management. Implementing automated quality control, standardized preprocessing, and version-controlled feature extraction for reproducible research.
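A condensed sketch of one step in that pipeline: converting a DICOM series to NIfTI with SimpleITK and pushing the result to S3 via boto3. The bucket name and paths are hypothetical, and boto3 is an assumed dependency.

```python
# Sketch: DICOM-series-to-NIfTI conversion followed by an S3 upload.
# Bucket name and file paths are placeholders.
import SimpleITK as sitk
import boto3

def dicom_series_to_nifti(dicom_dir: str, out_path: str) -> None:
    """Read a DICOM series and write it as a single NIfTI volume."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()
    sitk.WriteImage(image, out_path)

def upload_to_s3(local_path: str, bucket: str, key: str) -> None:
    """Push the converted volume to S3 for decentralized access."""
    boto3.client("s3").upload_file(local_path, bucket, key)

# Example usage with hypothetical identifiers
dicom_series_to_nifti("data/PAT001/CT", "data/PAT001_ct.nii.gz")
upload_to_s3("data/PAT001_ct.nii.gz", "imaging-research-bucket",
             "ct/PAT001_ct.nii.gz")
```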

PyRadiomics
PyTorch / nnU-Net
SimpleITK / Scikit-image
Cox Models / Survival
DICOM / NIfTI
AWS S3 / FAIR Data

Focus on methodological rigor, external validation, and reproducible workflows that advance clinical translation.

PhD Interests (Fall 2025 / Spring 2026)

Seeking positions in Medical Image Analysis, Computer Vision, or Biomedical Engineering with a focus on:

Generalization of radiomics models across institutions and imaging protocols
Uncertainty quantification in clinical prediction systems
Multi-modal learning integrating imaging, clinical, and genomic data
Self-supervised learning for medical imaging with limited annotations

Interested in groups emphasizing rigorous validation, clinical collaboration, and reproducible research practices.