TUTORIAL.md: 1 addition & 1 deletion
```diff
@@ -194,7 +194,7 @@ We address two possible versions of “finetuning” here. For both, you’ll wa
 `scripts/train/` already includes some resources for supervised finetuning. If that’s what you’re interested in check out

 1. [**LLM Finetuning from a Local Dataset: A Concrete Example**](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/finetune_example/README.md)
-2. [The YAML which should replicate the process of creating MPT-7B-Instruct from MPT-7b](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/yamls/finetune/mpt-7b_dolly_sft.yaml) — You can point this at your own dataset by [following these instructions](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md#Usage)
+2. [The YAML which would replicate the process of creating Llama-3-8b-Instruct from Llama-3-8b](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/yamls/finetune/llama-3-8b_dolly_sft.yaml) — You can point this at your own dataset by [following these instructions](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md#Usage)

 ### Domain Adaptation and Sequence Length Adaptation
```
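For orientation, the YAML linked in the changed line swaps the base model but keeps the same finetuning dataloader shape, and "point this at your own dataset" amounts to editing the `train_loader` section. Below is a minimal sketch of that section, assuming llm-foundry's `finetuning` dataset format as described in the linked examples; the key names follow those docs, but exact fields can vary by version, and the dataset name and local path here are placeholders, not values from this diff.

```yaml
# Sketch of a train_loader section for an llm-foundry finetuning YAML.
# Key names follow the linked examples; verify against your installed version.
train_loader:
  name: finetuning
  dataset:
    hf_name: mosaicml/dolly_hhrlhf   # placeholder: swap in your own HF dataset
    split: train
    max_seq_len: ${max_seq_len}
    shuffle: true
  drop_last: true
  num_workers: 8

# To point at local files instead (per the linked README instructions),
# the dataset block would look roughly like:
#   dataset:
#     hf_name: json
#     hf_kwargs:
#       data_dir: /path/to/your/data   # placeholder path
#     split: train
```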