Scentience Model Cards


The Scentience Olfaction-Vision-Language Model (OVLM) was trained on multimodal data to associate chemical compounds with visual objects, with language acting as the latent bridge. This card provides detailed information about the model's purpose, performance, and usage guidelines.
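
The card does not describe the training objective in detail. For readers unfamiliar with how language can bridge two other modalities, the sketch below illustrates one common approach, a CLIP-style contrastive alignment in which olfaction and vision embeddings are each pulled toward a shared text embedding. Every class name, dimension, and hyperparameter is an assumption made for illustration only; this is not the actual Scentience OVLM implementation.

# Illustrative sketch of tri-modal contrastive alignment. All names and
# dimensions are assumptions; this is not the Scentience OVLM architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProjectionHead(nn.Module):
    """Maps a modality-specific feature vector into a shared latent space."""

    def __init__(self, in_dim: int, latent_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, latent_dim),
            nn.GELU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so cosine similarity reduces to a dot product.
        return F.normalize(self.proj(x), dim=-1)


class TriModalAligner(nn.Module):
    """Aligns olfaction (chemical), vision, and language features via language."""

    def __init__(self, olf_dim=128, vis_dim=768, txt_dim=512, latent_dim=256):
        super().__init__()
        self.olf_head = ProjectionHead(olf_dim, latent_dim)
        self.vis_head = ProjectionHead(vis_dim, latent_dim)
        self.txt_head = ProjectionHead(txt_dim, latent_dim)
        # Learnable temperature, initialized near ln(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def contrastive_loss(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Symmetric InfoNCE over a batch of paired embeddings.
        logits = self.logit_scale.exp() * a @ b.t()
        targets = torch.arange(a.size(0), device=a.device)
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

    def forward(self, olf_feats, vis_feats, txt_feats):
        z_olf = self.olf_head(olf_feats)
        z_vis = self.vis_head(vis_feats)
        z_txt = self.txt_head(txt_feats)
        # Language acts as the bridge: olfaction and vision embeddings are both
        # pulled toward the same text embedding, which indirectly aligns them.
        return self.contrastive_loss(z_olf, z_txt) + self.contrastive_loss(z_vis, z_txt)


if __name__ == "__main__":
    model = TriModalAligner()
    batch = 8
    loss = model(torch.randn(batch, 128), torch.randn(batch, 768), torch.randn(batch, 512))
    print(f"toy alignment loss: {loss.item():.4f}")

Because both non-text modalities are trained against the same text space in this sketch, an odor embedding can be compared directly with an image embedding at inference time, which is what makes compound-to-object association possible.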

Please note that all Scentience machine learning models are for research purposes only. Scentience does not claim any performance beyond what is stated in the model cards, nor suitability for any specific application.

For more information on Scentience privacy and data policies, please refer to the Scentience Privacy Policy.


Olfaction-Vision-Language Model

Intended Use

Inputs & Outputs

Performance

Ethical Considerations

Caveats & Recommendations

Citation

If you use this model in your research, please cite:

@misc{scentience2025ovlm,
  title={Scentience Olfaction-Vision-Language Model},
  author={{Scentience Robotics, LLC}},
  year={2025},
  note={Version 0.3}
}