llm-cli looks like it only loads and runs GGML model files; it doesn't help with model development. The models themselves aren't written in Rust. Beside the point, GGUF is the successor to GGML. There are a variety of ways to convert PyTorch, Keras, etc. models to GGML or GGUF.
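For instance, the common route is llama.cpp's bundled converter. A rough sketch follows; the script name, flags, and quantize binary have changed across llama.cpp versions, and the model paths are hypothetical, so treat the exact invocations as assumptions:

```shell
# Clone llama.cpp, which ships a Hugging Face -> GGUF converter
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert a Hugging Face PyTorch checkpoint directory to GGUF
# (older trees name this convert-hf-to-gguf.py or convert.py)
python convert_hf_to_gguf.py /path/to/hf-model --outfile model.gguf

# Optionally quantize for smaller size / faster CPU inference
# (older versions use ./quantize instead of ./llama-quantize)
./llama-quantize model.gguf model-q4_k_m.gguf Q4_K_M
```

The resulting .gguf file is what runtimes like llama.cpp (and GGUF-aware Rust frontends) load directly.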
I dunno, maybe we're talking about different things. I'm saying it's better to do model development in a high-level language and then export the training or inference runtime to a lower-level framework, several of which exist and have existed for a while. And it's getting simpler to use low-level runtimes (llama.cpp vs. TensorFlow). Is that the point you're making?
Sure, and it's only a simple 20-step process that involves building TensorFlow from source. Yay!
https://medium.com/@hamedmp/exporting-trained-tensorflow-mod...
Let me see what the process for compiling an LLM runtime written in Rust is...
https://github.com/rustformers/llm
Oh look, it doesn't make me immediately want to give up.
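For comparison, the whole process is roughly this, assuming the standard Cargo workflow the repo uses. The CLI subcommand syntax changed between rustformers/llm versions and the model path is hypothetical, so the last line is an assumption:

```shell
# Clone and build the Rust project -- cargo fetches all dependencies
git clone https://github.com/rustformers/llm
cd llm
cargo build --release

# Run inference against a local GGML model file
# (in some versions the architecture is a subcommand, e.g. `llm llama infer`)
./target/release/llm infer -a llama -m /path/to/model.bin -p "Hello"
```

No bazel, no building the toolchain from source; `cargo build` is the whole story.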