Mistral-7B-Instruct-v0.2 is an instruction-tuned model optimized for generating helpful responses in conversational AI tasks. Built from Mistral-7B-v0.2, it features a 32k-token context window and is fine-tuned on instruction-following data to improve performance in structured, multi-turn conversation. It is accessible via Mistral's own tooling and the Hugging Face Transformers library.
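When using Transformers, `tokenizer.apply_chat_template` produces the model's expected prompt format. For illustration, the sketch below reproduces that `[INST]` format by hand with a hypothetical helper (`build_prompt` is not part of any library; the turn layout follows the published instruct format `<s>[INST] … [/INST] answer</s>`):

```python
# Hypothetical helper that reproduces Mistral's [INST] chat format by hand.
# In a real pipeline, tokenizer.apply_chat_template does this for you.
def build_prompt(messages):
    """messages: list of {"role": "user" | "assistant", "content": str},
    alternating roles, starting with a user turn."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        else:
            # Assistant turns are closed with the end-of-sequence token.
            prompt += f" {msg['content']}</s>"
    return prompt

print(build_prompt([
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Mayonnaise!"},
    {"role": "user", "content": "Do you have recipes?"},
]))
```

Feeding a string built this way to the model (with `add_special_tokens=False` at tokenization time, since `<s>` is already present) yields the same input the chat template would produce.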
Developed by Mistral AI, with contributions from a team including Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, and others.
Mistral-7B-Instruct-v0.2 is intended for instruction-following and conversational AI use cases, such as chat assistants and other dialogue applications.
Mistral-7B-Instruct-v0.2 is based on the Mistral-7B architecture with enhancements including a 32k-token context window (up from 8k in v0.1), a rope-theta value of 1e6, and the removal of sliding-window attention.
Mistral-7B-Instruct-v0.2 is fine-tuned on data specifically selected to enhance instruction-following capabilities. Detailed information about the dataset is available in the accompanying paper and release blog post.
The model has no built-in moderation capabilities. Exercise caution when deploying Mistral-7B-Instruct-v0.2 in sensitive environments, especially where robust content moderation is required.
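Because the model ships without moderation, deployments typically add an external filtering layer in front of it. The sketch below is a minimal, purely illustrative keyword pre-filter (the blocklist terms and the `guarded_generate` wrapper are hypothetical; a production system would use a dedicated moderation model or API instead):

```python
# Hypothetical pre-filter: reject prompts containing disallowed terms before
# they reach the model. Illustrative only -- a real deployment would use a
# dedicated moderation model, not a keyword list.
BLOCKLIST = {"make a bomb", "credit card dump"}  # example terms

def is_allowed(prompt: str) -> bool:
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

def guarded_generate(prompt: str, generate) -> str:
    """Call `generate` (any callable wrapping the model) only if the prompt passes."""
    if not is_allowed(prompt):
        return "Request declined by the moderation layer."
    return generate(prompt)
```

The same wrapper pattern can be applied to model outputs as well as inputs.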
Mistral AI welcomes community contributions, including improvements to the transformer tokenizer alignment, as well as ideas for implementing safety mechanisms. Contributions can be submitted via pull requests.
Troubleshooting: if loading the model raises `KeyError: 'mistral'`, the installed version of Transformers is too old to recognize the Mistral architecture. Installing Transformers from source resolves this; it should no longer be necessary from transformers-v4.33.4 onward.
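A minimal sketch of the fix, assuming a pip-based environment:

```shell
# Upgrade to a Transformers release that registers the "mistral" model type
pip install --upgrade "transformers>=4.34.0"

# Or, if a suitable release is not yet available, install from source
pip install git+https://github.com/huggingface/transformers
```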
If you use Mistral-7B-Instruct-v0.2 in your research, please cite:
@misc{mistralai2024mistral7b,
  author    = {Mistral AI},
  title     = {Mistral-7B-Instruct-v0.2: Fine-tuned Large Language Model for Instruction Following},
  year      = {2024},
  url       = {https://github.com/mistralai/mistral-models},
  publisher = {Mistral AI}
}