We provide links to download the raw datasets below, and we share our preprocessing Python scripts in the ./scripts/preprocess/
folder. Before processing the data, put the downloaded compressed file into ./datasets/
and uncompress it (rename the folder if required). You can also process the data on your own following the instructions given by OFA. Several useful notes are listed below.
The pretraining datasets used in BiomedGPT are all accessible, although some require requesting access. We provide the public links to these datasets below; we recommend downloading the data from these links first and then processing it with our scripts.
- MedICat: https://github.com/allenai/medicat
- IU X-ray and Peir Gross: https://github.com/nlpaueb/bioCaption
- SLAKE: https://www.med-vqa.com/slake/
- PathVQA: https://github.com/UCSD-AI4H/PathVQA/tree/master/data
- DeepLesion: https://nihcc.app.box.com/v/DeepLesion
- OIA-DDR: https://github.com/nkicsl/DDR-dataset
- CheXpert: https://aimi.stanford.edu/chexpert-chest-x-rays
- CytoImageNet: https://github.com/stan-hua/CytoImageNet
- ISIC2020: https://challenge2020.isic-archive.com
- Retinal Fundus: https://www.kaggle.com/c/diabetic-retinopathy-detection
- PubMed Abstracts: https://github.com/ncbi-nlp/BLUE_Benchmark
- NCBI BioNLP: https://www.ncbi.nlm.nih.gov/research/bionlp/Data/
- MIMIC-III Clinical Notes: https://physionet.org/content/mimiciii/1.4/
The following datasets are only partially used in pretraining:
- MedMNIST v2: https://zenodo.org/record/6496656
- MeQSum: https://github.com/abachaa/MeQSum
- iCliniq and HealthCareMagic: https://github.com/UCSD-AI4H/Medical-Dialogue-System
- ROCO: https://github.com/razorx89/roco-dataset/tree/master
- VQA-RAD: https://vision.aioz.io/f/777a3737ee904924bf0d/?dl=1
- PathVQA's `trainval_ans2label.pkl` is located in `PathVQA/split/qas`.
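As a quick sanity check before preprocessing, the pickle file can be inspected in Python. The snippet below is a minimal sketch assuming `trainval_ans2label.pkl` holds a dict mapping answer strings to integer class labels; the example answers and file handling here are illustrative, not taken from the actual split.

```python
import pickle
import tempfile

# Hypothetical example: we assume the ans2label file stores a dict
# mapping answer strings to integer labels, as VQA pipelines commonly do.
answers = ["yes", "no", "benign", "malignant"]
ans2label = {ans: idx for idx, ans in enumerate(answers)}

# Round-trip through pickle the way a preprocessing script would load it.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(ans2label, f)
    path = f.name

with open(path, "rb") as f:
    loaded = pickle.load(f)

print(loaded["benign"])  # -> 2
```

Replace the temporary file with the real path (e.g., `PathVQA/split/qas/trainval_ans2label.pkl`) to check that the label set matches your processed QA pairs.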
- Before preprocessing the VQA-RAD dataset, inspect the data for any instances of `\t`; these can cause issues and should be removed manually, e.g., changing `slee\t n` to `sleen`. Skipping this step and proceeding with preprocessing can lead to errors during training.
- To preprocess the MedMNIST dataset: first, convert the `.npy` files to `.png` images using `python medmnist.py --mode 0`; then convert the `.png` images into a `.tsv` file using `--mode 1`.
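The two MedMNIST conversion modes can be sketched as follows. This is a simplified illustration, not the actual `scripts/preprocess/medmnist.py`: the array shapes, base64 encoding of the images, and the tsv column layout (id, label, image) are all assumptions.

```python
import base64
import io

import numpy as np
from PIL import Image

def npy_to_png(images: np.ndarray) -> list:
    """Mode 0 (sketch): turn each uint8 array into a PIL image.

    The real script writes each image to disk as a .png file;
    here we keep them in memory for brevity.
    """
    return [(f"pngs/{i}.png", Image.fromarray(arr))
            for i, arr in enumerate(images)]

def png_to_tsv_rows(paths_and_images, labels) -> list:
    """Mode 1 (sketch): one tsv row per image: id, label, base64 png."""
    rows = []
    for (path, img), label in zip(paths_and_images, labels):
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        b64 = base64.b64encode(buf.getvalue()).decode("utf-8")
        rows.append(f"{path}\t{label}\t{b64}")
    return rows

# Tiny fake dataset: two 8x8 grayscale images with dummy labels.
images = np.zeros((2, 8, 8), dtype=np.uint8)
rows = png_to_tsv_rows(npy_to_png(images), labels=[0, 1])
print(len(rows))  # -> 2
```

Writing the rows with `"\n".join(rows)` yields a tsv file in the shape the downstream loaders expect, assuming they read tab-separated id/label/image columns.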
- For pretraining, we provide sample code for preprocessing the image-infilling, text-only, captioning, and VQA data, respectively. You can process any data you want by following the same logic; remember to concatenate the captioning and VQA datasets into `vision_language.tsv`. Shuffling is a good choice for pretraining, e.g., `shuf -o your_file.tsv your_file.tsv`.
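The concatenate-and-shuffle step above can also be done in Python, which is handy if `shuf` is unavailable. This is a minimal sketch: the input file names are hypothetical placeholders, and only `vision_language.tsv` comes from the instructions above.

```python
import os
import random
import tempfile

def concat_and_shuffle(inputs, output, seed=42):
    """Concatenate tsv files line-by-line, shuffle, and write the result."""
    lines = []
    for path in inputs:
        with open(path, "r", encoding="utf-8") as f:
            lines.extend(line.rstrip("\n") for line in f if line.strip())
    random.Random(seed).shuffle(lines)  # seeded for reproducibility
    with open(output, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
    return len(lines)

# Demo with two tiny placeholder tsv files.
tmp = tempfile.mkdtemp()
cap = os.path.join(tmp, "caption.tsv")
vqa = os.path.join(tmp, "vqa.tsv")
with open(cap, "w", encoding="utf-8") as f:
    f.write("img1\ta caption\n")
with open(vqa, "w", encoding="utf-8") as f:
    f.write("img2\tq\ta\nimg3\tq\ta\n")

out = os.path.join(tmp, "vision_language.tsv")
n = concat_and_shuffle([cap, vqa], out)
print(n)  # -> 3
```

Loading everything into memory is fine for typical pretraining tsv files; for very large files, the in-place `shuf -o your_file.tsv your_file.tsv` command remains the simpler choice.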