The misc/llama.cpp port
llama.cpp-0.0.6641 – LLM inference system (cvsweb github mirror)
Description
Inference of Meta's LLaMA model (and others) in pure C/C++ with minimal setup and state-of-the-art performance on a wide range of hardware.
WWW: https://github.com/ggml-org/llama.cpp
Maintainer
The OpenBSD ports mailing-list
Only for arches
aarch64 alpha amd64 arm hppa i386 mips64 mips64el powerpc powerpc64 riscv64 sparc64
Categories
misc
Library dependencies
Build dependencies
Files
- /usr/local/bin/convert_hf_to_gguf.py
- /usr/local/bin/llama-batched
- /usr/local/bin/llama-batched-bench
- /usr/local/bin/llama-bench
- /usr/local/bin/llama-cli
- /usr/local/bin/llama-diffusion-cli
- /usr/local/bin/llama-embedding
- /usr/local/bin/llama-eval-callback
- /usr/local/bin/llama-finetune
- /usr/local/bin/llama-gen-docs
- /usr/local/bin/llama-gguf
- /usr/local/bin/llama-gguf-hash
- /usr/local/bin/llama-gguf-split
- /usr/local/bin/llama-imatrix
- /usr/local/bin/llama-logits
- /usr/local/bin/llama-lookahead
- /usr/local/bin/llama-lookup
- /usr/local/bin/llama-lookup-create
- /usr/local/bin/llama-lookup-merge
- /usr/local/bin/llama-lookup-stats
- /usr/local/bin/llama-mtmd-cli
- /usr/local/bin/llama-parallel
- /usr/local/bin/llama-passkey
- /usr/local/bin/llama-perplexity
- /usr/local/bin/llama-quantize
- /usr/local/bin/llama-retrieval
- /usr/local/bin/llama-run
- /usr/local/bin/llama-save-load-state
- /usr/local/bin/llama-server
- /usr/local/bin/llama-simple
- /usr/local/bin/llama-simple-chat
- /usr/local/bin/llama-speculative
- /usr/local/bin/llama-speculative-simple
- /usr/local/bin/llama-tokenize
- /usr/local/bin/llama-tts
- /usr/local/include/llama-cpp.h
- /usr/local/include/llama.h
- /usr/local/include/mtmd-helper.h
- /usr/local/include/mtmd.h
- /usr/local/lib/cmake/llama/
- /usr/local/lib/cmake/llama/llama-config.cmake
- /usr/local/lib/cmake/llama/llama-version.cmake
- /usr/local/lib/libllama.so.2.0
- /usr/local/lib/libmtmd.so.0.0
- /usr/local/lib/pkgconfig/llama.pc