While the Llama 2 34B base model hasn't been released, Code Llama 34B is effectively a fine-tuned version of it, and some people are working with that.
As Ollama uses a llama.cpp fork on the backend, I'd expect its memory usage to be very similar to llama.cpp's.
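For a rough sense of what to expect, here's a back-of-the-envelope VRAM estimate. The architecture numbers for the 34B model (48 layers, 8 KV heads via GQA, head dim 128) and the ~4.5 bits/weight figure for a Q4_K_M-style quant are my own assumptions, not anything Ollama or llama.cpp reports:

```python
# Rough VRAM estimate for a quantized 34B model.
# All architecture constants below are assumptions for Code Llama 34B,
# not values taken from Ollama or llama.cpp.

def estimate_vram_gb(n_params_b: float, bits_per_weight: float,
                     ctx_len: int = 4096, n_layers: int = 48,
                     n_kv_heads: int = 8, head_dim: int = 128) -> float:
    # Quantized weights: params * bits-per-weight / 8 bytes each.
    weights_gb = n_params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) per layer, fp16 (2 bytes per element).
    kv_gb = 2 * n_layers * ctx_len * n_kv_heads * head_dim * 2 / 1e9
    return weights_gb + kv_gb

# 34B params at ~4.5 bits/weight (Q4_K_M-ish) with a 4k context:
print(f"{estimate_vram_gb(34, 4.5):.1f} GB")  # ~20 GB, before runtime overhead
```

That lands around 20 GB before any runtime overhead, which is why 34B quants tend to sit right at the edge of a single 24 GB card.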
https://github.com/vllm-project/vllm (high-throughput serving with PagedAttention)
https://github.com/turboderp/exllama (fast GPTQ inference for Llama-family models on consumer GPUs)
https://github.com/turboderp/exllamav2 (its successor, with its own EXL2 quantization format)
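If you want to try vLLM, the basic offline API looks roughly like this (a minimal sketch; the `codellama/CodeLlama-34b-hf` model id and whether a 34B model fits on your GPU are assumptions on my part):

```python
from vllm import LLM, SamplingParams

# Hypothetical model choice; any HF-hosted Llama-family checkpoint
# works the same way through this interface.
llm = LLM(model="codellama/CodeLlama-34b-hf")

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(["def fibonacci(n):"], params)

for out in outputs:
    print(out.outputs[0].text)
```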