
Commit 7c9fd38

Fix white space

Signed-off-by: Yuanyuan Chen <[email protected]>
Parent: 53838ed

136 files changed: +323 −205 lines


CONTRIBUTING.md

Lines changed: 6 additions & 4 deletions
@@ -278,13 +278,14 @@ are working on it).<br>
 useful to avoid duplicated work, and to differentiate it from PRs ready to be merged.<br>
 ☐ Make sure existing tests pass.<br>
 ☐ If adding a new feature, also add tests for it.<br>
-- If you are adding a new model, make sure you use
+
+- If you are adding a new model, make sure you use
 `ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...)` to trigger the common tests.
-- If you are adding new `@slow` tests, make sure they pass using
+- If you are adding new `@slow` tests, make sure they pass using
 `RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py`.
-- If you are adding a new tokenizer, write tests and make sure
+- If you are adding a new tokenizer, write tests and make sure
 `RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py` passes.
-- CircleCI does not run the slow tests, but GitHub Actions does every night!<br>
+- CircleCI does not run the slow tests, but GitHub Actions does every night!<br>
 
 ☐ All public methods must have informative docstrings (see
 [`modeling_bert.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py)
@@ -340,6 +341,7 @@ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/t
 ```
 
 Like the slow tests, there are other environment variables available which are not enabled by default during testing:
+
 - `RUN_CUSTOM_TOKENIZERS`: Enables tests for custom tokenizers.
 
 More environment variables and additional information can be found in the [testing_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/testing_utils.py).
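
To make the `all_model_classes` item in the checklist above concrete, here is a minimal, hedged sketch of the pattern (the classes are dummy stand-ins; a real tester would list the actual model classes of the new architecture):

```python
# Dummy stand-in classes; in a real tester these would be the new model's classes.
class MyModel: ...
class MyModelWithLMHead: ...

class MyModelTester:
    # The common test suite iterates over every class listed here.
    all_model_classes = (MyModel, MyModelWithLMHead)

for model_class in MyModelTester.all_model_classes:
    print(model_class.__name__)  # each class picked up by the common tests
```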

docs/source/en/attention_interface.md

Lines changed: 1 addition & 1 deletion
@@ -193,4 +193,4 @@ def custom_attention_mask(
 
 It mostly works thanks to the `mask_function`, which is a `Callable` in the form of [torch's mask_mod functions](https://pytorch.org/blog/flexattention/), taking 4 indices as input and returning a boolean to indicate if this position should take part in the attention computation.
 
-If you cannot use the `mask_function` to create your mask for some reason, you can try to work around it by doing something similar to our [torch export workaround](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/executorch.py).
+If you cannot use the `mask_function` to create your mask for some reason, you can try to work around it by doing something similar to our [torch export workaround](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/executorch.py).
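
To illustrate the `mask_function` contract described in the hunk above, here is a hedged sketch of a mask in the mask_mod style (the four indices are batch, head, query position, and key/value position; the causal rule is just one possible choice):

```python
# A mask_mod-style callable: returns True where the query position is
# allowed to attend to the key/value position. This one is a causal mask.
def causal_mask_function(batch_idx, head_idx, q_idx, kv_idx):
    return q_idx >= kv_idx
```

With FlexAttention, a function of this shape is what would be passed as `mask_mod` to `torch.nn.attention.flex_attention.create_block_mask`.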

docs/source/en/auto_docstring.md

Lines changed: 3 additions & 3 deletions
@@ -210,9 +210,9 @@ There are some rules for documenting different types of arguments and they're li
 This can span multiple lines.
 ```
 
-* Include `type` in backticks.
-* Add *optional* if the argument is not required or has a default value.
-* Add "defaults to X" if it has a default value. You don't need to add "defaults to `None`" if the default value is `None`.
+* Include `type` in backticks.
+* Add *optional* if the argument is not required or has a default value.
+* Add "defaults to X" if it has a default value. You don't need to add "defaults to `None`" if the default value is `None`.
 
 These arguments can also be passed to `@auto_docstring` as a `custom_args` argument. It is used to define the docstring block for new arguments once if they are repeated in multiple places in the modeling file.
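
Combining the three rules above, a hedged example of a conforming argument block (the argument name and description are made up for illustration):

```py
temperature (`float`, *optional*, defaults to 1.0):
    Value used to scale the logits before sampling.
    This can span multiple lines.
```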

docs/source/en/cache_explanation.md

Lines changed: 1 addition & 0 deletions
@@ -162,6 +162,7 @@ generated_ids = model.generate(**inputs, use_cache=True, max_new_tokens=10)
 Before the [`Cache`] class, the cache used to be stored as a tuple of tuples of tensors. This format is dynamic because it grows as text is generated, similar to [`DynamicCache`].
 
 The legacy format is essentially the same data structure but organized differently.
+
 - It's a tuple of tuples, where each inner tuple contains the key and value tensors for a layer.
 - The tensors have the same shape `[batch_size, num_heads, seq_len, head_dim]`.
 - The format is less flexible and doesn't support features like quantization or offloading.
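
To make the bullets in the hunk above concrete, a hedged sketch of the legacy layout (sizes are arbitrary):

```python
import torch

# One (key, value) pair of tensors per layer, each shaped
# [batch_size, num_heads, seq_len, head_dim], collected in a tuple of tuples.
batch_size, num_heads, seq_len, head_dim, num_layers = 1, 12, 5, 64, 2
legacy_cache = tuple(
    (
        torch.zeros(batch_size, num_heads, seq_len, head_dim),  # keys for one layer
        torch.zeros(batch_size, num_heads, seq_len, head_dim),  # values for one layer
    )
    for _ in range(num_layers)
)
assert legacy_cache[0][0].shape == (batch_size, num_heads, seq_len, head_dim)
```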

docs/source/en/chat_extras.md

Lines changed: 1 addition & 1 deletion
@@ -221,4 +221,4 @@ model_input = tokenizer.apply_chat_template(
     messages,
     tools = [current_time, multiply]
 )
-```
+```

docs/source/en/chat_templating.md

Lines changed: 3 additions & 3 deletions
@@ -77,9 +77,9 @@ Mistral-7B-Instruct uses `[INST]` and `[/INST]` tokens to indicate the start and
 
 The input to `apply_chat_template` should be structured as a list of dictionaries with `role` and `content` keys. The `role` key specifies the speaker, and the `content` key contains the message. The common roles are:
 
-- `user` for messages from the user
-- `assistant` for messages from the model
-- `system` for directives on how the model should act (usually placed at the beginning of the chat)
+- `user` for messages from the user
+- `assistant` for messages from the model
+- `system` for directives on how the model should act (usually placed at the beginning of the chat)
 
 [`apply_chat_template`] takes this list and returns a formatted sequence. Set `tokenize=True` if you want to tokenize the sequence.
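
For reference, a hedged, minimal example of the structure described in this hunk (the checkpoint is one Mistral-7B-Instruct release; any chat model with a template behaves the same way):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# A chat is a list of dicts with `role` and `content` keys.
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
    {"role": "user", "content": "And of Italy?"},
]

# Returns one formatted string; pass tokenize=True for token ids instead.
print(tokenizer.apply_chat_template(messages, tokenize=False))
```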

docs/source/en/cursor.md

Lines changed: 1 addition & 0 deletions
@@ -21,6 +21,7 @@ where `port` is the port used by `transformers serve` (`8000` by default). On th
 </h3>
 
 You're now ready to set things up on the app side! In Cursor, while you can't set a new provider, you can change the endpoint for OpenAI requests in the model selection settings. First, navigate to "Settings" > "Cursor Settings", "Models" tab, and expand the "API Keys" collapsible. To set your `transformers serve` endpoint, follow this order:
+
 1. Unselect ALL models in the list above (e.g. `gpt4`, ...);
 2. Add and select the model you want to use (e.g. `Qwen/Qwen3-4B`)
 3. Add some random text to OpenAI API Key. This field won't be used, but it can't be empty;

docs/source/en/generation_strategies.md

Lines changed: 5 additions & 0 deletions
@@ -229,6 +229,7 @@ tokenizer.batch_decode(outputs, skip_special_tokens=True)
 ## Custom generation methods
 
 Custom generation methods enable specialized behavior such as:
+
 - have the model continue thinking if it is uncertain;
 - roll back generation if the model gets stuck;
 - handle special tokens with custom logic;
@@ -301,6 +302,7 @@ Updating your Python requirements accordingly will remove this error message.
 ### Creating a custom generation method
 
 To create a new generation method, you need to create a new [**Model**](https://huggingface.co/new) repository and push a few files into it.
+
 1. The model you've designed your generation method with.
 2. `custom_generate/generate.py`, which contains all the logic for your custom generation method.
 3. `custom_generate/requirements.txt`, used to optionally add new Python requirements and/or lock specific versions to correctly use your method.
@@ -377,6 +379,7 @@ def generate(model, input_ids, generation_config=None, left_padding=None, **kwar
 ```
 
 Follow the recommended practices below to ensure your custom generation method works as expected.
+
 - Feel free to reuse the logic for validation and input preparation in the original [`~GenerationMixin.generate`].
 - Pin the `transformers` version in the requirements if you use any private method/attribute in `model`.
 - Consider adding model validation, input validation, or even a separate test file to help users sanity-check your code in their environment.
@@ -410,6 +413,7 @@ tags:
 ```
 
 Recommended practices:
+
 - Document input and output differences in [`~GenerationMixin.generate`].
 - Add self-contained examples to enable quick experimentation.
 - Describe soft-requirements such as if the method only works well with a certain family of models.
@@ -442,6 +446,7 @@ output = model.generate(
 ### Finding custom generation methods
 
 You can find all custom generation methods by [searching for their custom tag.](https://huggingface.co/models?other=custom_generate), `custom_generate`. In addition to the tag, we curate two collections of `custom_generate` methods:
+
 - [Custom generation methods - Community](https://huggingface.co/collections/transformers-community/custom-generation-methods-community-6888fb1da0efbc592d3a8ab6) -- a collection of powerful methods contributed by the community;
 - [Custom generation methods - Tutorials](https://huggingface.co/collections/transformers-community/custom-generation-methods-tutorials-6823589657a94940ea02cfec) -- a collection of reference implementations for methods that previously were part of `transformers`, as well as tutorials for `custom_generate`.
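
As a hedged, minimal illustration of the `custom_generate/generate.py` entry point whose signature appears in one of the hunks above, here is a bare greedy-decoding sketch (a real method would add validation and honor more of `generation_config`):

```python
import torch

def generate(model, input_ids, generation_config=None, **kwargs):
    # Fall back to a small token budget if the config doesn't set one (assumption).
    max_new_tokens = getattr(generation_config, "max_new_tokens", None) or 20
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(input_ids).logits
            next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
            input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids
```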

docs/source/en/glossary.md

Lines changed: 3 additions & 3 deletions
@@ -185,9 +185,9 @@ See the [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/
 
 The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto a different dimension. There is a different model head for each task. For example:
 
-* [`GPT2ForSequenceClassification`] is a sequence classification head - a linear layer - on top of the base [`GPT2Model`].
-* [`ViTForImageClassification`] is an image classification head - a linear layer on top of the final hidden state of the `CLS` token - on top of the base [`ViTModel`].
-* [`Wav2Vec2ForCTC`] is a language modeling head with [CTC](#connectionist-temporal-classification-ctc) on top of the base [`Wav2Vec2Model`].
+* [`GPT2ForSequenceClassification`] is a sequence classification head - a linear layer - on top of the base [`GPT2Model`].
+* [`ViTForImageClassification`] is an image classification head - a linear layer on top of the final hidden state of the `CLS` token - on top of the base [`ViTModel`].
+* [`Wav2Vec2ForCTC`] is a language modeling head with [CTC](#connectionist-temporal-classification-ctc) on top of the base [`Wav2Vec2Model`].
 
 ## I
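
A hedged look at the head/base split from the first bullet in the hunk above (checkpoint name assumed; the attribute names are those used by the GPT-2 implementation in `transformers`):

```python
from transformers import GPT2ForSequenceClassification

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
print(type(model.transformer).__name__)  # GPT2Model, the base model
print(model.score)                       # the classification head: a single Linear layer
```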

docs/source/en/how_to_hack_models.md

Lines changed: 1 addition & 1 deletion
@@ -149,4 +149,4 @@ Call [print_trainable_parameters](https://huggingface.co/docs/peft/package_refer
 ```py
 model.print_trainable_parameters()
 "trainable params: 589,824 || all params: 94,274,096 || trainable%: 0.6256"
-```
+```
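
A hedged sketch of how output like the above is produced with PEFT (the base model and LoRA target modules here are placeholder choices, so the exact parameter counts will differ):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageClassification

base = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")
lora_model = get_peft_model(base, LoraConfig(target_modules=["query", "value"]))

# Prints "trainable params: ... || all params: ... || trainable%: ..."
lora_model.print_trainable_parameters()
```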
