Reflection Llama 3.1 70B (Correct Weights) on ZeroGPU thanks to llama.cpp and unsloth (for quantization)
- ZeroGPU space: gokaygokay/Reflection-70B-llamacpp
- Working Model: mattshumer/ref_70_e3
- Quantized Models: unsloth/Reflection-Llama-3.1-70B-GGUF
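
A minimal sketch of how you might run one of the unsloth GGUF quantizations locally with llama-cpp-python (the same stack the Space builds on). The exact GGUF filename and the Q4_K_M quantization level are assumptions, so check the repo's file list and pick a quant that fits your VRAM.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantization from the unsloth repo.
# The filename below is a hypothetical example; verify it against the repo.
model_path = hf_hub_download(
    repo_id="unsloth/Reflection-Llama-3.1-70B-GGUF",
    filename="Reflection-Llama-3.1-70B.Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,       # context window; raise if you have the memory
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant that reasons step by step."},
        {"role": "user", "content": "How many r's are in the word 'strawberry'?"},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```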