model.add(LSTM(32, input_shape=(10, 64)))
This is because LSTM, GRU, and SimpleRNN all inherit from the same abstract recurrent base layer (RNN), so the arguments below can be used with any of the recurrent layers.
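As a minimal sketch (assuming TensorFlow 2.x with tf.keras), the same constructor arguments are accepted by any of the three built-in recurrent layers:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, SimpleRNN, Dense

# 10 timesteps, 64 features per timestep, 32 recurrent units:
# the identical arguments work for LSTM, GRU and SimpleRNN.
for rnn_layer in (LSTM, GRU, SimpleRNN):
    model = Sequential()
    model.add(rnn_layer(32, input_shape=(10, 64)))
    model.add(Dense(1))
    model.summary()
```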
- cell: A RNN cell instance. A RNN cell is a class that has:
  - a call(input_at_t, states_at_t) method, returning (output_at_t, states_at_t_plus_1). The call method of the cell can also take the optional argument constants, see section "Note on passing external constants" below.
  - a state_size attribute. This can be a single integer (single state), in which case it is the size of the recurrent state (which should be the same as the size of the cell output). This can also be a list/tuple of integers (one size per state).
  - an output_size attribute. This can be a single integer or a TensorShape, which represents the shape of the output. For backward compatibility, if this attribute is not available for the cell, the value will be inferred from the first element of the state_size.

  It is also possible for cell to be a list of RNN cell instances, in which case the cells get stacked one after the other in the RNN, implementing an efficient stacked RNN. A minimal custom cell and a stacked-cell variant are sketched after this list.
- return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
- return_state: Boolean. Whether to return the last state in addition to the output. (The shapes produced by these two flags are compared in a sketch after this list.)
- go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
- stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.
- unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
- input_dim: dimensionality of the input (integer). This argument (or alternatively, the keyword argument input_shape) is required when using this layer as the first layer in a model.
- input_length: Length of input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed). Note that if the recurrent layer is not the first layer in your model, you would need to specify the input length at the level of the first layer (e.g. via the input_shape argument).
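The cell contract described in the cell bullet above can be illustrated with a minimal custom cell. This is only a sketch (it assumes TensorFlow 2.x / tf.keras; MinimalRNNCell is a made-up name for illustration), showing the call / state_size / output_size interface and how a list of cells becomes a stacked RNN:

```python
import tensorflow as tf
from tensorflow.keras import layers

class MinimalRNNCell(layers.Layer):
    """A minimal cell: call(input_at_t, states_at_t) returns
    (output_at_t, states_at_t_plus_1), plus state_size / output_size."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units    # single integer: one recurrent state
        self.output_size = units   # output has the same size as the state

    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="uniform", name="kernel")
        self.recurrent_kernel = self.add_weight(
            shape=(self.units, self.units),
            initializer="uniform", name="recurrent_kernel")

    def call(self, inputs, states):
        prev_output = states[0]
        h = tf.matmul(inputs, self.kernel)
        output = h + tf.matmul(prev_output, self.recurrent_kernel)
        return output, [output]

# Wrap a single cell, or a list of cells for an efficient stacked RNN.
x = tf.keras.Input(shape=(10, 64))
single = layers.RNN(MinimalRNNCell(32))(x)                         # (None, 32)
stacked = layers.RNN([MinimalRNNCell(32), MinimalRNNCell(64)])(x)  # (None, 64)
```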
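And a short sketch (same tf.keras assumption) of how return_sequences and return_state change what the layer returns:

```python
import numpy as np
from tensorflow.keras.layers import LSTM

x = np.random.random((4, 10, 64)).astype("float32")  # (batch, timesteps, features)

# Default: only the output at the last timestep -> shape (4, 32)
last_out = LSTM(32)(x)

# return_sequences=True: the full output sequence -> shape (4, 10, 32)
seq_out = LSTM(32, return_sequences=True)(x)

# return_state=True: also return the final hidden and cell states
out, state_h, state_c = LSTM(32, return_state=True)(x)
print(last_out.shape, seq_out.shape, out.shape, state_h.shape, state_c.shape)
```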
References
https://keras.io/layers/recurrent/
https://keras-cn.readthedocs.io/en/latest/layers/recurrent_layer/