<div class="sc-cmRAlD dkqmWS"><div class="sc-UEtKG dGqiYy sc-flttKd cguEtd"><div class="sc-fqwslf gsqkEc"><div class="sc-cBQMlg kAHhUk"><h2 class="sc-dcKlJK sc-cVttbi gqEuPW ksnHgj">About Dataset</h2></div></div></div><div class="sc-jgvlka jFuPjz"><div class="sc-gzqKSP ktvwwo"><div style="min-height: 80px;"><div class="sc-etVRix jqYJaa sc-bMmLMY ZURWJ"><p>This dataset introduces a Multi-Organ Gynaecological Magnetic Resonance Imaging for Brachytherapy-based Oncology (MOGaMBO) dataset, a novel magnetic resonance imaging dataset aimed to advance research in applications of computational intelligence in brachytherapy diagnosis and organ segmentation for cervical cancer treatment. The dataset comprises high-resolution T2-weighted 3D MR scans from 94 patients with locally advanced cervical cancer (stages IB2–IVA), adhering to FIGO guidelines for interstitial and intra-cavitary brachytherapy. The imaging was performed using a 1.5T GE Signa Explorer scanner, with acquisition parameters TR and TE set to optimal values for soft-tissue contrast at 2600ms and 155ms, respectively, combined with a pixel resolution of 0.5 × 0.5 mm² and 30–50 slices per scan. To ensure dosimetric consistency, bladder volume was standardized via Foley catheterization during imaging. The critical organs-at-risk—urinary bladder, rectum, sigmoid colon, and femoral heads, were manually contoured by expert radiation oncologists using the open-source ITK-SNAP platform, ensuring precise region-of-interest annotations. The dataset underwent rigorous deidentification to protect patient privacy, removing all demographic and identifiable information. MOGaMBO provides a standardized, privacy-compliant resource for developing and validating medical image segmentation or representation learning algorithms, and brachytherapy-related research tools. This dataset addresses a critical gap in accessible, multi-organ imaging resources for the gynaecological brachytherapy dataset, with applications in treatment planning, and AI-driven clinical research.</p> |

If you use this dataset, please cite the following work in the appropriate format.

BibTeX:

```bibtex
@misc{manna2025,
  title={Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation},
  author={Siladittya Manna and Suresh Das and Sayantari Ghosh and Saumik Bhattacharya},
  year={2025},
  eprint={2503.23507},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.23507},
}
```

MLA:
Manna, Siladittya, et al. "Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation." arXiv preprint arXiv:2503.23507 (2025).

APA:
Manna, S., Das, S., Ghosh, S., & Bhattacharya, S. (2025). Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation. arXiv preprint arXiv:2503.23507.

Chicago:
Manna, Siladittya, Suresh Das, Sayantari Ghosh, and Saumik Bhattacharya. "Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation." arXiv preprint arXiv:2503.23507 (2025).

Harvard:
Manna, S., Das, S., Ghosh, S. and Bhattacharya, S., 2025. Federated Self-Supervised Learning for One-Shot Cross-Modal and Cross-Imaging Technique Segmentation. arXiv preprint arXiv:2503.23507.