docs/examples/README.md: 5 lines changed (5 additions, 0 deletions)
@@ -12,6 +12,7 @@ We provide a set of examples to help you serve large language models, by default
- [Deploy models via TensorRT-LLM](#deploy-models-via-tensorrt-llm)
- [Deploy models via text-generation-inference](#deploy-models-via-text-generation-inference)
- [Deploy models via ollama](#deploy-models-via-ollama)
- [Speculative Decoding with llama.cpp](#speculative-decoding-with-llamacpp)
- [Speculative Decoding with vLLM](#speculative-decoding-with-vllm)
- [Multi-Host Inference](#multi-host-inference)
- [Deploy Host Models](#deploy-host-models)

@@ -59,6 +60,10 @@ By default, we use [vLLM](https://github.com/vllm-project/vllm) as the inference
[ollama](https://github.com/ollama/ollama), based on llama.cpp, aims at local deployment; see the [example](./ollama/) here.

### Speculative Decoding with llama.cpp

llama.cpp supports speculative decoding, which can significantly improve inference performance; see the [example](./speculative-decoding/llamacpp/) here.

### Speculative Decoding with vLLM

[Speculative Decoding](https://arxiv.org/abs/2211.17192) can efficiently improve inference performance; see the [example](./speculative-decoding/vllm/) here.
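As a rough illustration of what the vLLM example covers, the sketch below runs offline speculative decoding with a small draft model. This is a minimal sketch, not the manifest from the example directory: the model names are placeholders, and the `speculative_config` argument has changed names across vLLM releases (older versions took `speculative_model` and `num_speculative_tokens` directly), so check the docs for your installed version.

```python
# Minimal sketch of offline speculative decoding with vLLM (not the
# example manifest itself). Model names are placeholders; the exact
# speculative-decoding arguments differ across vLLM releases.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",         # target model (placeholder)
    speculative_config={
        "model": "meta-llama/Llama-3.2-1B-Instruct",  # smaller draft model (placeholder)
        "num_speculative_tokens": 5,                   # tokens drafted per step
    },
)

outputs = llm.generate(
    ["Explain speculative decoding in one sentence."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

The draft model proposes several tokens per step and the target model verifies them in a single forward pass, so throughput improves most when the draft model is much smaller than the target yet agrees with it often.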