@@ -582,7 +582,7 @@ def compile(self, optimizer, loss, metrics=None, loss_weights=None,
         """Configures the model for training.
 
         # Arguments
-            optimizer: String (name of optimizer) or optimizer object.
+            optimizer: String (name of optimizer) or optimizer instance.
                 See [optimizers](/optimizers).
             loss: String (name of objective function) or objective function.
                 See [losses](/losses).
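The hunk above tightens "optimizer object" to "optimizer instance": `compile` accepts either a registered string name or an already-built optimizer. A minimal sketch of how such a dual contract can be resolved (hypothetical code, not Keras source; `get_optimizer`, `_OPTIMIZERS`, and the toy `SGD` class are illustrative names only):

```python
# Illustrative sketch of a "string name or instance" parameter contract,
# as described in the docstring. Not the actual Keras implementation.
class SGD:
    """Toy stand-in for an optimizer class."""
    def __init__(self, lr=0.01):
        self.lr = lr

# Hypothetical registry mapping string names to optimizer classes.
_OPTIMIZERS = {"sgd": SGD}

def get_optimizer(identifier):
    """Accept either a registered name or an already-built instance."""
    if isinstance(identifier, str):
        return _OPTIMIZERS[identifier.lower()]()
    return identifier  # assume it is already an optimizer instance
```

Passing `"sgd"` builds a default-configured instance, while passing `SGD(lr=0.1)` preserves the caller's configuration unchanged.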
@@ -622,7 +622,8 @@ def compile(self, optimizer, loss, metrics=None, loss_weights=None,
                 a single tensor (for a single-output model), a list of tensors,
                 or a dict mapping output names to target tensors.
             **kwargs: When using the Theano/CNTK backends, these arguments
-                are passed into K.function. When using the TensorFlow backend,
+                are passed into `K.function`.
+                When using the TensorFlow backend,
                 these arguments are passed into `tf.Session.run`.
 
         # Raises
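The `**kwargs` note describes backend-dependent forwarding: extra keyword arguments go to `K.function` on Theano/CNTK and to `tf.Session.run` on TensorFlow. A hedged sketch of that dispatch pattern (hypothetical, not Keras source; `route_backend_kwargs` is an illustrative name, and the returned strings merely label the forwarding target):

```python
# Illustrative sketch of backend-dependent **kwargs forwarding, as the
# docstring describes. Not the actual Keras implementation.
def route_backend_kwargs(backend, **kwargs):
    """Return which backend call would receive the extra kwargs."""
    if backend in ("theano", "cntk"):
        return ("K.function", kwargs)       # forwarded to K.function
    if backend == "tensorflow":
        return ("tf.Session.run", kwargs)   # forwarded to tf.Session.run
    raise ValueError("Unknown backend: %s" % backend)
```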
@@ -1461,54 +1462,67 @@ def fit(self,
         """Trains the model for a fixed number of epochs (iterations on a dataset).
 
         # Arguments
-            x: Numpy array of training data, or a list of Numpy arrays.
-                If the input in the model is named, you can also pass a
-                dictionary mapping input name to Numpy array. `x` can be
-                `None` (default) if feeding from framework-native tensors.
-            y: Numpy array of target (label) data, or a list of Numpy arrays.
-                If outputs in the model are named, you can also pass a
-                dictionary mapping output names to Numpy array. `y` can be
-                `None` (default) if feeding from framework-native tensors.
+            x: Numpy array of training data (if the model has a single input),
+                or list of Numpy arrays (if the model has multiple inputs).
+                If input layers in the model are named, you can also pass a
+                dictionary mapping input names to Numpy arrays.
+                `x` can be `None` (default) if feeding from
+                framework-native tensors (e.g. TensorFlow data tensors).
+            y: Numpy array of target (label) data
+                (if the model has a single output),
+                or list of Numpy arrays (if the model has multiple outputs).
+                If output layers in the model are named, you can also pass a
+                dictionary mapping output names to Numpy arrays.
+                `y` can be `None` (default) if feeding from
+                framework-native tensors (e.g. TensorFlow data tensors).
             batch_size: Integer or `None`.
                 Number of samples per gradient update.
                 If unspecified, it will default to 32.
             epochs: Integer. Number of epochs to train the model.
-                Note that in conjunction with `initial_epoch`, `epochs` is to be
-                understood as "final epoch". The model is not trained for a
-                number of steps given by `epochs`, but until the epoch `epochs`
-                is reached.
+                An epoch is an iteration over the entire `x` and `y`
+                data provided.
+                Note that in conjunction with `initial_epoch`,
+                `epochs` is to be understood as "final epoch".
+                The model is not trained for a number of iterations
+                given by `epochs`, but merely until the epoch
+                of index `epochs` is reached.
             verbose: 0, 1, or 2. Verbosity mode.
                 0 = silent, 1 = progress bar, 2 = one line per epoch.
             callbacks: List of `keras.callbacks.Callback` instances.
                 List of callbacks to apply during training.
                 See [callbacks](/callbacks).
-            validation_split: Float between 0 and 1:
+            validation_split: Float between 0 and 1.
                 Fraction of the training data to be used as validation data.
                 The model will set apart this fraction of the training data,
                 will not train on it, and will evaluate
                 the loss and any model metrics
                 on this data at the end of each epoch.
+                The validation data is selected from the last samples
+                in the `x` and `y` data provided, before shuffling.
             validation_data: tuple `(x_val, y_val)` or tuple
                 `(x_val, y_val, val_sample_weights)` on which to evaluate
                 the loss and any model metrics at the end of each epoch.
                 The model will not be trained on this data.
-                Will override `validation_split`.
+                This will override `validation_split`.
             shuffle: Boolean (whether to shuffle the training data
                 before each epoch) or str (for 'batch').
                 'batch' is a special option for dealing with the
                 limitations of HDF5 data; it shuffles in batch-sized chunks.
                 Has no effect when `steps_per_epoch` is not `None`.
             class_weight: Optional dictionary mapping class indices (integers)
                 to a weight (float) value, used for weighting the loss function
-                (during training only). This can be useful to tell the model to
-                "pay more attention" to samples from an under-represented class.
+                (during training only).
+                This can be useful to tell the model to
+                "pay more attention" to samples from
+                an under-represented class.
             sample_weight: Optional Numpy array of weights for
                 the training samples, used for weighting the loss function
                 (during training only). You can either pass a flat (1D)
                 Numpy array with the same length as the input samples
                 (1:1 mapping between weights and samples),
                 or in the case of temporal data,
-                you can pass a 2D array with shape `(samples, sequence_length)`,
+                you can pass a 2D array with shape
+                `(samples, sequence_length)`,
                 to apply a different weight to every timestep of every sample.
                 In this case you should make sure to specify
                 `sample_weight_mode="temporal"` in `compile()`.
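The hunk above adds the clarification that `validation_split` takes the validation slice from the *last* samples of `x` and `y`, before any shuffling. A minimal sketch of that semantics (hypothetical code, not Keras source; `split_validation` is an illustrative name):

```python
# Illustrative sketch of the validation_split semantics documented above:
# the validation slice is the last fraction of the data, taken before
# shuffling. Not the actual Keras implementation.
def split_validation(x, y, validation_split):
    """Split (x, y) into training and validation portions."""
    split_at = int(len(x) * (1.0 - validation_split))
    x_train, x_val = x[:split_at], x[split_at:]
    y_train, y_val = y[:split_at], y[split_at:]
    return (x_train, y_train), (x_val, y_val)
```

Because the slice is positional, ordered data (e.g. sorted by class or by time) should be shuffled by the caller before `fit` if the last samples are not representative.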