## About Dataset

This dataset introduces the Multi-Organ Gynaecological Magnetic Resonance Imaging for Brachytherapy-based Oncology (MOGaMBO) dataset, a novel MRI dataset intended to advance research on computational intelligence for brachytherapy and organ segmentation in cervical cancer treatment. The dataset comprises high-resolution T2-weighted 3D MR scans from 94 patients with locally advanced cervical cancer (stages IB2–IVA), imaged in adherence with FIGO guidelines for interstitial and intracavitary brachytherapy. Imaging was performed on a 1.5T GE Signa Explorer scanner, with TR = 2600 ms and TE = 155 ms chosen for soft-tissue contrast, an in-plane resolution of 0.5 × 0.5 mm², and 30–50 slices per scan. To ensure dosimetric consistency, bladder volume was standardized via Foley catheterization during imaging. The critical organs at risk (urinary bladder, rectum, sigmoid colon, and femoral heads) were manually contoured by expert radiation oncologists using the open-source ITK-SNAP platform, providing precise region-of-interest annotations. The dataset underwent rigorous de-identification to protect patient privacy, with all demographic and identifiable information removed. MOGaMBO offers a standardized, privacy-compliant resource for developing and validating medical image segmentation and representation learning algorithms, as well as brachytherapy-related research tools. It addresses a critical gap in accessible, multi-organ imaging resources for gynaecological brachytherapy, with applications in treatment planning and AI-driven clinical research.

If you use this dataset, please cite the following work in the appropriate format.

BibTeX:

    @misc{manna2025,
      title={Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation},
      author={Siladittya Manna and Suresh Das and Sayantari Ghosh and Saumik Bhattacharya},
      year={2025},
      eprint={2503.23507},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.23507},
    }

MLA:
Manna, Siladittya, et al. "Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation." arXiv preprint arXiv:2503.23507 (2025).

APA:
Manna, S., Das, S., Ghosh, S., & Bhattacharya, S. (2025). Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation. arXiv preprint arXiv:2503.23507.

Chicago:
Manna, Siladittya, Suresh Das, Sayantari Ghosh, and Saumik Bhattacharya. "Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation."
arXiv preprint arXiv:2503.23507 (2025).

Harvard:
Manna, S., Das, S., Ghosh, S. and Bhattacharya, S., 2025. Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation. arXiv preprint arXiv:2503.23507.
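
### Loading a scan (sketch)

The on-disk file format and directory layout are not specified in the description above, so the snippet below is only a minimal sketch of how one scan and its multi-organ mask might be loaded and inspected. It assumes NIfTI volumes and hypothetical file names (`image.nii.gz`, `mask.nii.gz`) and an assumed integer label convention; adjust both to match the actual release.

```python
# Minimal loading sketch for one MOGaMBO case.
# Assumptions (not confirmed by the dataset description): NIfTI volumes,
# hypothetical file names, and a hypothetical integer label convention.
import SimpleITK as sitk
import numpy as np

def load_case(case_dir: str):
    """Load a T2-weighted volume and its organ-at-risk mask for one patient."""
    image = sitk.ReadImage(f"{case_dir}/image.nii.gz")  # hypothetical file name
    mask = sitk.ReadImage(f"{case_dir}/mask.nii.gz")    # hypothetical file name

    img = sitk.GetArrayFromImage(image).astype(np.float32)  # (slices, H, W)
    seg = sitk.GetArrayFromImage(mask).astype(np.int64)

    # Per-volume intensity normalisation: a common preprocessing step,
    # not one prescribed by the dataset.
    img = (img - img.mean()) / (img.std() + 1e-8)

    # Assumed label IDs; verify against the dataset documentation.
    organ_labels = {1: "bladder", 2: "rectum", 3: "sigmoid", 4: "femoral_heads"}
    present = [name for lbl, name in organ_labels.items() if np.any(seg == lbl)]
    print(f"{case_dir}: {img.shape[0]} slices, organs present: {present}")
    return img, seg

if __name__ == "__main__":
    load_case("MOGaMBO/patient_001")  # hypothetical directory name
```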