This merge, this time grounded in Gemma2 9B Instruct fine-tunes, is another demonstration that models without any roleplay-specific fine-tuning can still perform the task while maintaining coherence and attention to context. No overt roleplay fine-tuning should be required for text generation: pretraining should give a model the requisite basic understanding of the world, so all that remains is corrective fine-tuning to address observed defects in how the world is portrayed, along with datasets that promote a suitably entertaining writing style. Good Instruct tuning should promote reasoning, coherence, and attention to context.
grimjim/Kitsunebi-v1-Gemma2-8k-9B

grimjim/Kitsunebi-v1-Gemma2-8k-9B-GGUF
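
For quick testing, the safetensors release can be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch only; the dtype, device placement, sampling settings, and example prompt are illustrative assumptions, not recommended settings for this model.

```python
# Minimal sketch: load the merged model with transformers and generate a reply.
# Sampling parameters and the prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/Kitsunebi-v1-Gemma2-8k-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Gemma2 Instruct models expect their chat template; build the prompt with it.
messages = [
    {"role": "user", "content": "Stay in character as an innkeeper and greet a weary traveler."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.9
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
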


I opted not to incorporate the UCLA SPPO fine-tune for Gemma2 9B after observing context confusion with some frequency during complex scenarios.

Thanks to Axcxept co., ltd. for fine-tuning HODACHI/EZO-Common-9B-gemma-2-it, and to Princeton NLP Group for fine-tuning princeton-nlp/gemma-2-9b-it-SimPO.
AXCXEPT/EZO-Common-9B-gemma-2-it

princeton-nlp/gemma-2-9b-it-SimPO