DeepSeek in k8s

We have 6 H100 servers; as you know, each one has 8 × 80GB GPUs. The biggest model we run today is Llama 405B, which takes 8 GPUs and therefore fits on a single server. DeepSeek needs more than 8 GPUs, so does anybody know a way to run the model across multiple H100 servers? PS: our stack is fully on k8s.
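
For reference, here's the rough memory math behind "needs more than 8 GPUs" (a sketch assuming DeepSeek-V3/R1 at ~671B total parameters with FP8 weights, numbers not stated above):

```python
# Back-of-the-envelope check: why DeepSeek doesn't fit on one 8x H100 node.
# Assumptions (not from the post): DeepSeek-V3/R1, ~671B total parameters,
# weights stored in FP8 (1 byte per parameter).

PARAMS = 671e9            # total parameter count (assumption)
BYTES_PER_PARAM = 1       # FP8 weights (assumption)
GPU_MEM_GB = 80           # HBM per H100
GPUS_PER_NODE = 8

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9   # ~671 GB of weights
node_mem_gb = GPU_MEM_GB * GPUS_PER_NODE      # 640 GB per server

print(f"weights:  ~{weights_gb:.0f} GB")      # ~671 GB
print(f"one node:  {node_mem_gb} GB")         # 640 GB

# Weights alone already exceed a single node, before counting KV cache
# and activations, hence the question about spanning multiple servers
# (e.g. tensor parallel within a node + pipeline parallel across nodes).
```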