It is likely that we will want to integrate llama.cpp (or one of its available Rust bindings) into our stack, so comparison benchmarks will be important. The following is required:
Benchmark llama.cpp against Hugging Face's Candle. Also, map which model architectures each of these libraries supports.
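For the benchmarking task, a minimal std-only timing harness could look like the sketch below. The backend closures are hypothetical stand-ins; in a real comparison they would wrap a token-generation call into llama.cpp and Candle respectively, and the stub workload would be replaced accordingly.

```rust
use std::time::Instant;

// Times `iters` runs of a workload and returns seconds per iteration.
// A sketch only: real benchmarks should also warm up, pin threads, and
// report tokens/s rather than raw wall time.
fn bench<F: FnMut()>(label: &str, iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    let per_iter = start.elapsed().as_secs_f64() / iters as f64;
    println!("{label}: {per_iter:.6} s/iter");
    per_iter
}

fn main() {
    // Stub workloads; replace with real inference calls to each backend.
    let t_llama = bench("llama.cpp (stub)", 100, || {
        std::hint::black_box((0u64..1_000).sum::<u64>());
    });
    let t_candle = bench("candle (stub)", 100, || {
        std::hint::black_box((0u64..1_000).sum::<u64>());
    });
    println!("ratio: {:.2}", t_llama / t_candle);
}
```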
(Optional) Create our own Rust bindings library around llama.cpp.
Integrate llama.cpp directly into our tech stack.
The user should be able to compile the project with either llama.cpp or Candle, but not both. This can be achieved through a suitable Cargo feature configuration.
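One way to enforce the either/or constraint is a pair of Cargo features guarded by `compile_error!`. This is a sketch under assumed feature names (`llama-cpp`, `candle`); the actual names and optional dependencies would be defined in our `Cargo.toml`:

```rust
// lib.rs — assumes Cargo.toml declares:
//
// [features]
// llama-cpp = []
// candle = []

// Reject builds that enable both backends.
#[cfg(all(feature = "llama-cpp", feature = "candle"))]
compile_error!("Enable exactly one backend feature: `llama-cpp` or `candle`, not both.");

// Reject builds that enable neither backend.
#[cfg(not(any(feature = "llama-cpp", feature = "candle")))]
compile_error!("Enable one backend feature: `llama-cpp` or `candle`.");

// Each backend module is compiled only when its feature is active,
// so the rest of the crate sees a single `backend` implementation.
#[cfg(feature = "llama-cpp")]
pub mod backend {
    pub const NAME: &str = "llama.cpp";
}

#[cfg(feature = "candle")]
pub mod backend {
    pub const NAME: &str = "candle";
}
```

Users would then pick a backend at build time, e.g. `cargo build --no-default-features --features candle`.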