Discord-Micae-Hermes-3-3B is a new finetune on NousResearch/Hermes-3-Llama-3.2-3B. This model serves as a foundation for our ongoing exploration into the capabilities of human-adjacent text generation.
Training took place over 17 days on a single GTX 1080 (8GB).
The model was fine-tuned on the mookiezi/Discord-OpenMicae dataset.
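The training data can be pulled directly from the Hugging Face Hub. The snippet below is a minimal sketch for inspecting the dataset with the datasets library; the split name and the field layout of each example are assumptions, since the exact schema is not described here.

from datasets import load_dataset

# Load the fine-tuning data from the Hugging Face Hub
# (assumes the default "train" split exists)
ds = load_dataset("mookiezi/Discord-OpenMicae", split="train")

print(ds)      # number of rows and column names
print(ds[0])   # first example; field names depend on the dataset's schema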
from transformers import pipeline

# Load the model through the text-generation pipeline
gen = pipeline("text-generation", model="mookiezi/Discord-Micae-Hermes-3-3B")

# The model expects ChatML-formatted prompts (<|im_start|> / <|im_end|>)
prompt = "<|im_start|>user\nwhat do you do?<|im_end|>\n<|im_start|>assistant\n"

print(gen(prompt, max_new_tokens=100))
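Because Hermes-3 models use ChatML, the prompt can also be built with the tokenizer's chat template instead of hand-writing the special tokens. The following is a sketch, assuming this fine-tune preserves the base model's chat template:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mookiezi/Discord-Micae-Hermes-3-3B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "what do you do?"}]

# apply_chat_template emits the same <|im_start|>...<|im_end|> framing
# as the manual prompt above and appends the assistant header
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

out = model.generate(inputs, max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))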
Discord-Micae-Hermes-3-3B model by mookiezi
Built with ❤️ for conversational AI research