Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular Language Model
arXiv: 2502.13449
[Project Page] [Paper] [GitHub]
This repo contains the weights of Mol-LLaMA, including the LoRA adapter weights and projectors, built on meta-llama/Meta-Llama-3-8B-Instruct.
Mol-LLaMA is trained on Mol-LLaMA-Instruct to learn the fundamental characteristics of molecules, with reasoning ability and explainability.
Please check out the example inference code in the GitHub repo.
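For quick experimentation, below is a minimal, hypothetical sketch of how the base model and LoRA adapter weights might be loaded with Hugging Face `transformers` and `peft`. The adapter repo id `DongkiKim/Mol-LLaMA-LoRA` is a placeholder, and this sketch omits the projectors and molecular encoders that Mol-LLaMA needs for molecular inputs; use the official loading code in the GitHub repo for the supported pipeline.

```python
# Hypothetical sketch only: load the base LLM and attach LoRA adapter
# weights with PEFT. This does NOT restore Mol-LLaMA's projectors or
# molecular encoders; see the official repo for full inference code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "meta-llama/Meta-Llama-3-8B-Instruct"
ADAPTER_ID = "DongkiKim/Mol-LLaMA-LoRA"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)

# Wrap the base model with the LoRA adapter weights.
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

# Text-only smoke test (molecular inputs require the full pipeline).
prompt = "What are the key physicochemical properties of aspirin?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```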
If you find our model useful, please consider citing our work.
@misc{kim2025molllama,
      title={Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular Language Model},
      author={Dongki Kim and Wonbin Lee and Sung Ju Hwang},
      year={2025},
      eprint={2502.13449},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
We thank the authors of LLaMA, 3D-MoLM, MoleculeSTM, Uni-Mol, and SciBERT for their open-source contributions.