
Commit d4a14ee

Docstring improvements
1 parent d912866 commit d4a14ee

File tree: 2 files changed, +64 −39 lines

keras/engine/training.py

Lines changed: 33 additions & 19 deletions
@@ -582,7 +582,7 @@ def compile(self, optimizer, loss, metrics=None, loss_weights=None,
         """Configures the model for training.
 
         # Arguments
-            optimizer: String (name of optimizer) or optimizer object.
+            optimizer: String (name of optimizer) or optimizer instance.
                 See [optimizers](/optimizers).
             loss: String (name of objective function) or objective function.
                 See [losses](/losses).
@@ -622,7 +622,8 @@ def compile(self, optimizer, loss, metrics=None, loss_weights=None,
                 a single tensor (for a single-output model), a list of tensors,
                 or a dict mapping output names to target tensors.
             **kwargs: When using the Theano/CNTK backends, these arguments
-                are passed into K.function. When using the TensorFlow backend,
+                are passed into `K.function`.
+                When using the TensorFlow backend,
                 these arguments are passed into `tf.Session.run`.
 
         # Raises
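
For reference, a minimal sketch (not part of the commit) of how these `compile` arguments read in practice: the optimizer given as an instance rather than a string, and a user-provided target tensor passed via `target_tensors`. The TensorFlow backend, the toy model, and the placeholder target are assumptions for illustration.

    # Illustrative sketch only; model, shapes and the placeholder target are assumed.
    import tensorflow as tf
    from keras import optimizers
    from keras.models import Model
    from keras.layers import Input, Dense

    inputs = Input(shape=(16,))
    outputs = Dense(1)(inputs)
    model = Model(inputs, outputs)

    # Stand-in for a real framework-native data tensor yielding the labels.
    y_tensor = tf.placeholder(tf.float32, shape=(None, 1))

    model.compile(optimizer=optimizers.SGD(lr=0.01, momentum=0.9),  # optimizer instance
                  loss='mse',
                  target_tensors=[y_tensor])
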
@@ -1461,54 +1462,67 @@ def fit(self,
         """Trains the model for a fixed number of epochs (iterations on a dataset).
 
         # Arguments
-            x: Numpy array of training data, or a list of Numpy arrays.
-                If the input in the model is named, you can also pass a
-                dictionary mapping input name to Numpy array. `x` can be
-                `None` (default) if feeding from framework-native tensors.
-            y: Numpy array of target (label) data, or a list of Numpy arrays.
-                If outputs in the model are named, you can also pass a
-                dictionary mapping output names to Numpy array. `y` can be
-                `None` (default) if feeding from framework-native tensors.
+            x: Numpy array of training data (if the model has a single input),
+                or list of Numpy arrays (if the model has multiple inputs).
+                If input layers in the model are named, you can also pass a
+                dictionary mapping input names to Numpy arrays.
+                `x` can be `None` (default) if feeding from
+                framework-native tensors (e.g. TensorFlow data tensors).
+            y: Numpy array of target (label) data
+                (if the model has a single output),
+                or list of Numpy arrays (if the model has multiple outputs).
+                If output layers in the model are named, you can also pass a
+                dictionary mapping output names to Numpy arrays.
+                `y` can be `None` (default) if feeding from
+                framework-native tensors (e.g. TensorFlow data tensors).
             batch_size: Integer or `None`.
                 Number of samples per gradient update.
                 If unspecified, it will default to 32.
             epochs: Integer. Number of epochs to train the model.
-                Note that in conjunction with `initial_epoch`, `epochs` is to be
-                understood as "final epoch". The model is not trained for a
-                number of steps given by `epochs`, but until the epoch `epochs`
-                is reached.
+                An epoch is an iteration over the entire `x` and `y`
+                data provided.
+                Note that in conjunction with `initial_epoch`,
+                `epochs` is to be understood as "final epoch".
+                The model is not trained for a number of iterations
+                given by `epochs`, but merely until the epoch
+                of index `epochs` is reached.
             verbose: 0, 1, or 2. Verbosity mode.
                 0 = silent, 1 = progress bar, 2 = one line per epoch.
             callbacks: List of `keras.callbacks.Callback` instances.
                 List of callbacks to apply during training.
                 See [callbacks](/callbacks).
-            validation_split: Float between 0 and 1:
+            validation_split: Float between 0 and 1.
                 Fraction of the training data to be used as validation data.
                 The model will set apart this fraction of the training data,
                 will not train on it, and will evaluate
                 the loss and any model metrics
                 on this data at the end of each epoch.
+                The validation data is selected from the last samples
+                in the `x` and `y` data provided, before shuffling.
             validation_data: tuple `(x_val, y_val)` or tuple
                 `(x_val, y_val, val_sample_weights)` on which to evaluate
                 the loss and any model metrics at the end of each epoch.
                 The model will not be trained on this data.
-                Will override `validation_split`.
+                This will override `validation_split`.
             shuffle: Boolean (whether to shuffle the training data
                 before each epoch) or str (for 'batch').
                 'batch' is a special option for dealing with the
                 limitations of HDF5 data; it shuffles in batch-sized chunks.
                 Has no effect when `steps_per_epoch` is not `None`.
             class_weight: Optional dictionary mapping class indices (integers)
                 to a weight (float) value, used for weighting the loss function
-                (during training only). This can be useful to tell the model to
-                "pay more attention" to samples from an under-represented class.
+                (during training only).
+                This can be useful to tell the model to
+                "pay more attention" to samples from
+                an under-represented class.
             sample_weight: Optional Numpy array of weights for
                 the training samples, used for weighting the loss function
                 (during training only). You can either pass a flat (1D)
                 Numpy array with the same length as the input samples
                 (1:1 mapping between weights and samples),
                 or in the case of temporal data,
-                you can pass a 2D array with shape `(samples, sequence_length)`,
+                you can pass a 2D array with shape
+                `(samples, sequence_length)`,
                 to apply a different weight to every timestep of every sample.
                 In this case you should make sure to specify
                 `sample_weight_mode="temporal"` in `compile()`.
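
A short, hedged usage sketch of the `fit` arguments documented above (toy data, layer names, and class weights are assumptions for illustration): named input/output layers allow dict-style `x` and `y`, `validation_split` holds out the last fraction of the data before shuffling, and `class_weight` up-weights an under-represented class.

    # Illustrative sketch only; shapes, names, and weights are assumed.
    import numpy as np
    from keras.models import Model
    from keras.layers import Input, Dense

    inp = Input(shape=(8,), name='features')
    out = Dense(1, activation='sigmoid', name='label')(inp)
    model = Model(inp, out)
    model.compile(optimizer='rmsprop', loss='binary_crossentropy')

    x = np.random.random((100, 8))
    y = np.random.randint(0, 2, size=(100, 1))

    model.fit({'features': x}, {'label': y},   # dicts keyed by layer names
              epochs=5, batch_size=32,
              validation_split=0.2,            # last 20% of the data, pre-shuffle
              class_weight={0: 1.0, 1: 3.0})   # pay more attention to class 1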

keras/models.py

Lines changed: 31 additions & 20 deletions
@@ -784,16 +784,17 @@ def compile(self, optimizer, loss,
                 dictionary or a list of modes.
             weighted_metrics: List of metrics to be evaluated and weighted
                 by sample_weight or class_weight during training and testing.
-            target_tensors: By default, Keras will create placeholders for the
+            target_tensors: By default, Keras will create a placeholder for the
                 model's target, which will be fed with the target data during
                 training. If instead you would like to use your own
-                target tensors (in turn, Keras will not expect external
+                target tensor (in turn, Keras will not expect external
                 Numpy data for these targets at training time), you
-                can specify them via the `target_tensors` argument. It can be
-                a single tensor (for a single-output model), a list of tensors,
-                or a dict mapping output names to target tensors.
+                can specify them via the `target_tensors` argument.
+                It should be a single tensor
+                (for a single-output `Sequential` model).
             **kwargs: When using the Theano/CNTK backends, these arguments
-                are passed into K.function. When using the TensorFlow backend,
+                are passed into `K.function`.
+                When using the TensorFlow backend,
                 these arguments are passed into `tf.Session.run`.
 
         # Raises
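
A hedged sketch of the `Sequential` variant described above, where `target_tensors` is a single tensor rather than a list. The TensorFlow backend is assumed, and the placeholder merely stands in for a real framework-native data tensor.

    # Illustrative only: single-output Sequential model with a single target tensor.
    import tensorflow as tf
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(1, input_shape=(16,)))

    labels = tf.placeholder(tf.float32, shape=(None, 1))  # stand-in data tensor

    # For `Sequential`, `target_tensors` should be a single tensor.
    model.compile(optimizer='rmsprop', loss='mse', target_tensors=labels)
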
@@ -850,21 +851,26 @@ def fit(self,
 
         # Arguments
             x: Numpy array of training data.
-                If the input in the model is named, you can also pass a
-                dictionary mapping the input name to a Numpy array. `x` can be
-                `None` (default) if feeding from framework-native tensors.
+                If the input layer in the model is named, you can also pass a
+                dictionary mapping the input name to a Numpy array.
+                `x` can be `None` (default) if feeding from
+                framework-native tensors (e.g. TensorFlow data tensors).
             y: Numpy array of target (label) data.
-                If the output in the model is named, you can also pass a
-                dictionary mapping the output name to a Numpy array. `y` can be
-                `None` (default) if feeding from framework-native tensors.
+                If the output layer in the model is named, you can also pass a
+                dictionary mapping the output name to a Numpy array.
+                `y` can be `None` (default) if feeding from
+                framework-native tensors (e.g. TensorFlow data tensors).
             batch_size: Integer or `None`.
                 Number of samples per gradient update.
                 If unspecified, it will default to 32.
             epochs: Integer. Number of epochs to train the model.
-                Note that in conjunction with `initial_epoch`, `epochs` is to be
-                understood as "final epoch". The model is not trained for a
-                number of steps given by `epochs`, but until the epoch `epochs`
-                is reached.
+                An epoch is an iteration over the entire `x` and `y`
+                data provided.
+                Note that in conjunction with `initial_epoch`,
+                `epochs` is to be understood as "final epoch".
+                The model is not trained for a number of iterations
+                given by `epochs`, but merely until the epoch
+                of index `epochs` is reached.
             verbose: 0, 1, or 2. Verbosity mode.
                 0 = silent, 1 = progress bar, 2 = one line per epoch.
             callbacks: List of `keras.callbacks.Callback` instances.
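
The reworded `epochs` note ("final epoch", not a count of additional iterations) can be illustrated with a small sketch; the model, data, and shapes below are assumptions.

    # Illustrative only: `epochs` marks the final epoch index, so the second
    # call trains for two more epochs (3 and 4), not five additional ones.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(1, input_shape=(8,)))
    model.compile(optimizer='rmsprop', loss='mse')

    x = np.random.random((64, 8))
    y = np.random.random((64, 1))

    model.fit(x, y, epochs=3, batch_size=16)                   # runs epochs 0-2
    model.fit(x, y, epochs=5, initial_epoch=3, batch_size=16)  # runs epochs 3-4
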
@@ -876,27 +882,32 @@ def fit(self,
                 will not train on it, and will evaluate
                 the loss and any model metrics
                 on this data at the end of each epoch.
+                The validation data is selected from the last samples
+                in the `x` and `y` data provided, before shuffling.
             validation_data: tuple `(x_val, y_val)` or tuple
                 `(x_val, y_val, val_sample_weights)` on which to evaluate
                 the loss and any model metrics at the end of each epoch.
                 The model will not be trained on this data.
-                Will override `validation_split`.
+                This will override `validation_split`.
             shuffle: Boolean (whether to shuffle the training data
                 before each epoch) or str (for 'batch').
                 'batch' is a special option for dealing with the
                 limitations of HDF5 data; it shuffles in batch-sized chunks.
                 Has no effect when `steps_per_epoch` is not `None`.
             class_weight: Optional dictionary mapping class indices (integers)
                 to a weight (float) value, used for weighting the loss function
-                (during training only). This can be useful to tell the model to
-                "pay more attention" to samples from an under-represented class.
+                (during training only).
+                This can be useful to tell the model to
+                "pay more attention" to samples from
+                an under-represented class.
             sample_weight: Optional Numpy array of weights for
                 the training samples, used for weighting the loss function
                 (during training only). You can either pass a flat (1D)
                 Numpy array with the same length as the input samples
                 (1:1 mapping between weights and samples),
                 or in the case of temporal data,
-                you can pass a 2D array with shape `(samples, sequence_length)`,
+                you can pass a 2D array with shape
+                `(samples, sequence_length)`,
                 to apply a different weight to every timestep of every sample.
                 In this case you should make sure to specify
                 `sample_weight_mode="temporal"` in `compile()`.
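
Finally, a hedged sketch of the 2D `sample_weight` case mentioned above, which requires `sample_weight_mode="temporal"` in `compile()`; the model, shapes, and weights are illustrative assumptions.

    # Illustrative only: per-timestep weights of shape (samples, sequence_length).
    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, TimeDistributed, Dense

    model = Sequential()
    model.add(LSTM(8, return_sequences=True, input_shape=(10, 4)))
    model.add(TimeDistributed(Dense(1)))
    model.compile(optimizer='rmsprop', loss='mse',
                  sample_weight_mode='temporal')  # needed for 2D sample weights

    x = np.random.random((32, 10, 4))
    y = np.random.random((32, 10, 1))
    w = np.ones((32, 10))
    w[:, :3] = 0.0  # e.g. ignore the first three timesteps of every sample

    model.fit(x, y, epochs=2, batch_size=8, sample_weight=w, verbose=0)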
