📖 StoryMixtral Zero

Generate stories with a Mixtral-inspired Mixture-of-Experts (MoE) model, deployed on Hugging Face Spaces with ZeroGPU.

  • ZeroGPU: dynamic NVIDIA GPU allocation (120 s per generation call)
  • Pretrained weights pulled from the Hugging Face Hub
  • Top-k sampling with temperature
Generation controls (slider ranges from the app UI; see the sampling sketch below):

  • Max new tokens: 10–512
  • Top-k: 1–100
  • Temperature: 0.1–2
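
The sketch below is a minimal illustration of what the top-k and temperature controls do when sampling each token. The function name, default values, and dummy vocabulary size are assumptions for illustration and are not taken from the app's source.

```python
import torch

def sample_next_token(logits: torch.Tensor, top_k: int = 50,
                      temperature: float = 0.8) -> int:
    """Draw one token id from raw logits with temperature scaling and top-k filtering."""
    # Temperature rescales the logits: values < 1 sharpen the distribution,
    # values > 1 flatten it (the UI exposes roughly 0.1-2).
    scaled = logits / max(temperature, 1e-5)
    # Keep only the k highest-scoring tokens and renormalize over them.
    top_values, top_indices = torch.topk(scaled, k=top_k)
    probs = torch.softmax(top_values, dim=-1)
    # Sample one token from the truncated distribution.
    choice = torch.multinomial(probs, num_samples=1)
    return int(top_indices[choice])

# Example with a fake 32k-token vocabulary.
next_id = sample_next_token(torch.randn(32_000), top_k=40, temperature=0.8)
```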

ℹ️ Notes

  • Weights are downloaded from the HF repo YuvrajSingh9886/StoryMixtral on first use.
  • The GPU is allocated only during generation calls via ZeroGPU (see the sketch below).
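
A minimal sketch of how these two notes typically combine in a ZeroGPU Space: the checkpoint is fetched from the Hub once (and cached by `huggingface_hub`), while only the decorated generation function holds a GPU. The `model.safetensors` filename and the `generate` signature are assumptions for illustration, not the app's actual code.

```python
import spaces
from huggingface_hub import hf_hub_download

# First use: download the checkpoint from the Hub; later calls hit the local cache.
weights_path = hf_hub_download(
    repo_id="YuvrajSingh9886/StoryMixtral",
    filename="model.safetensors",  # placeholder filename, not verified against the repo
)

# A GPU is attached only while this function runs; `duration` is the
# per-call budget in seconds (matching the 120 s noted above).
@spaces.GPU(duration=120)
def generate(prompt: str, max_new_tokens: int = 256,
             top_k: int = 50, temperature: float = 0.8) -> str:
    # Hypothetical body: load the weights from `weights_path`, move the model
    # to "cuda", and decode with top-k / temperature sampling (see sketch above).
    ...
```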