This script demonstrates running RamaLama with a sample workflow that pulls a model, serves it, and allows testing inference through a browser or curl.

## Requirements

- [RamaLama](https://github.com/) installed and available in your PATH
- [Podman](https://podman.io/) installed and configured
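A quick way to verify these prerequisites before running the script. The `require` helper below is a hypothetical convenience, not part of ramalama.sh:

```shell
#!/bin/sh
# Hypothetical helper (not part of ramalama.sh): warn when a command
# from the requirements list is missing from PATH.
require() {
  command -v "$1" >/dev/null 2>&1 || echo "warning: $1 not found in PATH" >&2
}

require ramalama
require podman
```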
## Usage
Run the script:
```bash
./ramalama.sh
```
Override the browser (optional):
```bash
BROWSER=google-chrome ./ramalama.sh
```
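The script's exact implementation isn't shown here, but a common way to honor a `BROWSER` override looks like the sketch below. The `open_url` helper name and the `xdg-open` fallback are assumptions for illustration, not confirmed behavior of ramalama.sh:

```shell
#!/bin/sh
# Hypothetical sketch: use $BROWSER if set, fall back to xdg-open.
# open_url is an illustrative name, not a function from ramalama.sh.
open_url() {
  "${BROWSER:-xdg-open}" "$1"
}
```

With this pattern, `BROWSER=echo ./ramalama.sh` would simply print the URL instead of opening a browser, which is handy on headless machines.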
## Features
* Pulls and runs the `smollm:135m` and `granite` models with RamaLama
* Opens the service endpoint in your browser automatically
* Waits for the service to be ready before testing inference
* Performs a sample inference with `curl` against the `granite3.1-dense` model
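The "wait for readiness, then test inference" flow can be sketched as follows. The port (8080), the `/v1/chat/completions` path, and the helper names are assumptions for illustration; check ramalama.sh itself for the actual values it uses:

```shell
#!/bin/sh
# Hypothetical sketch of the readiness-wait and sample-inference steps.
# Port 8080 and the OpenAI-compatible endpoint path are assumptions.
wait_ready() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # Succeed as soon as the endpoint answers any HTTP request.
    if curl -fsS --max-time 2 -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

sample_inference() {
  # Chat-completion request against the granite3.1-dense model (assumed path).
  curl -fsS "$1/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -d '{"model": "granite3.1-dense", "messages": [{"role": "user", "content": "Hello"}]}'
}

# Example (commented out; requires a running service):
# wait_ready "http://localhost:8080" && sample_inference "http://localhost:8080"
```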
## Advanced usage
You can also call specific functions from the script directly, for example:
```bash
./ramalama.sh pull
./ramalama.sh run
./ramalama.sh test
```
Extra arguments can be passed after the function name, if supported.
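Scripts that expose subcommands like this usually do it with a small dispatch function. The sketch below uses placeholder function bodies (the real pull/run/test logic lives in ramalama.sh); it shows how `./ramalama.sh pull <model>` can forward the extra argument:

```shell
#!/bin/sh
# Hypothetical dispatch sketch: route "./ramalama.sh <function> [args...]"
# to an internal function, forwarding any extra arguments.
pull() { echo "pulling ${1:-smollm:135m}"; }
run()  { echo "running service"; }
run_test() { echo "testing inference"; }

main() {
  cmd=${1:-run}
  if [ "$#" -gt 0 ]; then shift; fi
  case "$cmd" in
    pull) pull "$@" ;;
    run)  run "$@" ;;
    test) run_test "$@" ;;
    *) echo "unknown command: $cmd" >&2; return 1 ;;
  esac
}

main "$@"
```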