Commit f0aa50b

Author: Vincent Chen
Commit message: update docs
1 parent 348b0bf commit f0aa50b

File tree

1 file changed: +1 -1 lines changed


TUTORIAL.md

Lines changed: 1 addition & 1 deletion
@@ -194,7 +194,7 @@ We address two possible versions of “finetuning” here. For both, you’ll wa
 `scripts/train/` already includes some resources for supervised finetuning. If that’s what you’re interested in check out

 1. [**LLM Finetuning from a Local Dataset: A Concrete Example**](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/finetune_example/README.md)
-2. [The YAML which should replicate the process of creating MPT-7B-Instruct from MPT-7b](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/yamls/finetune/mpt-7b_dolly_sft.yaml) — You can point this at your own dataset by [following these instructions](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md#Usage)
+2. [The YAML which would replicate the process of creating Llama-3-8b-Instruct from Llama-3-8b](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/yamls/finetune/llama-3-8b_dolly_sft.yaml) — You can point this at your own dataset by [following these instructions](https://github.com/mosaicml/llm-foundry/blob/main/scripts/train/README.md#Usage)

 ### Domain Adaptation and Sequence Length Adaptation
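
For context, the changed list item points a Llama-3-8b finetuning YAML at SFT data, and the linked instructions describe swapping in your own dataset. A minimal sketch of the `train_loader` section such a YAML edit touches, assuming the key layout used in llm-foundry's finetuning examples; `my-org/my-sft-dataset` and the specific values are placeholders for illustration, not part of this commit:

```yaml
# Sketch of the train_loader section of a finetuning YAML (assumed layout,
# following llm-foundry's finetuning examples; values are placeholders).
train_loader:
  name: finetuning
  dataset:
    hf_name: my-org/my-sft-dataset  # placeholder: your own prompt/response dataset
    split: train
    max_seq_len: ${max_seq_len}
    shuffle: true
  drop_last: true
  num_workers: 8
```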
