30 seconds to Keras
The core data structure of Keras is a model, a way to organize layers. The simplest type of model is the Sequential model, a linear stack of layers.
Stacking layers is as easy as .add(). Note that the first layer should specify the input dimensionality via input_dim.
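A minimal sketch of the idea; the 100-dimensional input and the layer sizes are illustrative, not prescribed by the text:

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# Only the first layer needs the input dimensionality (here, 100 features).
model.add(Dense(64, activation='relu', input_dim=100))
# Subsequent layers infer their input shape from the previous layer.
model.add(Dense(10, activation='softmax'))
```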
Once your model looks good, configure its learning process with .compile():
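A sketch, assuming a small 10-class classifier; the loss, optimizer, and metric choices are illustrative:

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))

# Pick a loss, an optimizer, and metrics to monitor.
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
```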
If you need to, you can further configure your optimizer. A core principle of Keras is to make things reasonably simple, while allowing the user to be fully in control when they need to be.
Introducing momentum lets the gradients of parameters that oscillate back and forth because the learning rate is too large cancel each other out, preventing divergence.
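One common way to configure the optimizer is to instantiate it explicitly with a momentum term, as described above. The hyperparameter values below are illustrative, not recommendations:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(10, activation='softmax', input_dim=100))

# Passing an optimizer instance instead of a string gives full control
# over its hyperparameters; momentum damps oscillating updates.
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(learning_rate=0.01, momentum=0.9))
```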
You can now iterate on your training data in batches:
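For example, with randomly generated stand-in data (all sizes and epoch counts are illustrative):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['accuracy'])

# Dummy data standing in for a real dataset.
x_train = np.random.random((256, 100))
y_train = np.eye(10)[np.random.randint(10, size=256)]

# fit iterates over the data in batches of 32 samples.
history = model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
```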
Alternatively, you can feed batches to your model manually:
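A sketch of manual batch feeding with dummy data; each call runs a single gradient update:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd')

# One hand-fed batch of dummy data.
x_batch = np.random.random((32, 100))
y_batch = np.eye(10)[np.random.randint(10, size=32)]

# Performs a single gradient update on this batch.
metrics = model.train_on_batch(x_batch, y_batch, return_dict=True)
```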
Evaluate your performance in one line:
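A sketch with dummy held-out data; evaluate returns the loss plus any compiled metrics:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['accuracy'])

# Dummy held-out data.
x_test = np.random.random((128, 100))
y_test = np.eye(10)[np.random.randint(10, size=128)]

# Returns [loss, accuracy] for this compile configuration.
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128, verbose=0)
```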
Or generate predictions on new data:
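A sketch with dummy new data; note that predict does not require the model to be compiled:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))

# Dummy "new" data to predict on.
x_new = np.random.random((16, 100))

# One probability distribution over the 10 classes per sample.
probs = model.predict(x_new, batch_size=128, verbose=0)
```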
A densely-connected network
Notice that a layer instance is callable on a tensor, and returns a tensor.
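A sketch of a densely-connected network in the functional style; the 784-dimensional input and 10-class output are illustrative (MNIST-like sizes):

```python
from keras.layers import Input, Dense
from keras.models import Model

# This returns a tensor.
inputs = Input(shape=(784,))

# A layer instance is callable on a tensor, and returns a tensor.
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

# This creates a model that includes the Input layer and three Dense layers.
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```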
All models are callable, just like layers.
With the functional API, it is easy to reuse trained models: you can treat any model as if it were a layer, by calling it on a tensor. Note that by calling a model you aren’t just reusing the architecture of the model, you are also reusing its weights.
This allows you, for instance, to quickly create models that can process sequences of inputs: you could turn an image classification model into a video classification model in just one line.
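A sketch of the idea using a small stand-in classifier over flat 784-dimensional "frames"; all sizes are illustrative:

```python
from keras.layers import Input, Dense, TimeDistributed
from keras.models import Model

# A tiny stand-in for a per-frame classification model.
frame_input = Input(shape=(784,))
h = Dense(64, activation='relu')(frame_input)
frame_output = Dense(10, activation='softmax')(h)
vision_model = Model(inputs=frame_input, outputs=frame_output)

# Calling the model on every timestep of a sequence reuses both its
# architecture and its weights; this is the "one line" in question.
sequence_input = Input(shape=(20, 784))  # sequences of 20 frames
processed = TimeDistributed(vision_model)(sequence_input)
video_model = Model(inputs=sequence_input, outputs=processed)
```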
Multi-input and multi-output models
Here’s a good use case for the functional API: models with multiple inputs and outputs. The functional API makes it easy to manipulate a large number of intertwined datastreams.
The main input will receive the headline, as a sequence of integers (each integer encodes a word). The integers will be between 1 and 10,000 (a vocabulary of 10,000 words) and the sequences will be 100 words long.
Here we insert the auxiliary loss, allowing the LSTM and Embedding layer to be trained smoothly even though the main loss will be much higher in the model.
At this point, we feed into the model our auxiliary input data by concatenating it with the LSTM output:
This defines a model with two inputs and two outputs:
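The walk-through above can be sketched end to end as follows. The embedding width, LSTM size, depth of the dense stack, and the 5-wide auxiliary input are illustrative choices, not prescribed by the text:

```python
from keras.layers import Input, Embedding, LSTM, Dense, concatenate
from keras.models import Model

# Main input: headlines as sequences of 100 word indices in [1, 10000).
main_input = Input(shape=(100,), dtype='int32', name='main_input')

# The Embedding layer maps each index to a dense vector;
# the LSTM condenses the vector sequence into a single vector.
x = Embedding(input_dim=10000, output_dim=64)(main_input)
lstm_out = LSTM(32)(x)

# Auxiliary output: gives the LSTM and Embedding a direct training signal.
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)

# Auxiliary input: extra per-sample features, concatenated with the LSTM output.
auxiliary_input = Input(shape=(5,), name='aux_input')
x = concatenate([lstm_out, auxiliary_input])

# A densely-connected stack on top, then the main prediction.
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
main_output = Dense(1, activation='sigmoid', name='main_output')(x)

model = Model(inputs=[main_input, auxiliary_input],
              outputs=[main_output, auxiliary_output])
```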
We compile the model and assign a weight of 0.2 to the auxiliary loss. To specify a different loss or loss_weights for each output, you can use a list or a dictionary. Here we pass a single loss as the loss argument, so the same loss will be used on all outputs.
We can train the model by passing it lists of input arrays and target arrays:
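A sketch of compiling and training with list arguments; the model is rebuilt compactly here with illustrative sizes, and the data is randomly generated stand-in data:

```python
import numpy as np
from keras.layers import Input, Embedding, LSTM, Dense, concatenate
from keras.models import Model

# A compact two-input, two-output model (sizes illustrative).
main_input = Input(shape=(100,), dtype='int32', name='main_input')
lstm_out = LSTM(32)(Embedding(10000, 64)(main_input))
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
auxiliary_input = Input(shape=(5,), name='aux_input')
x = Dense(64, activation='relu')(concatenate([lstm_out, auxiliary_input]))
main_output = Dense(1, activation='sigmoid', name='main_output')(x)
model = Model(inputs=[main_input, auxiliary_input],
              outputs=[main_output, auxiliary_output])

# One loss for both outputs; the auxiliary loss is down-weighted to 0.2.
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              loss_weights=[1.0, 0.2])

# Dummy data standing in for real headlines, extra features, and labels.
headline_data = np.random.randint(1, 10000, size=(64, 100))
additional_data = np.random.random((64, 5))
labels = np.random.randint(2, size=(64, 1))

# Input and target arrays are passed as lists, in the order of the
# model's inputs and outputs.
history = model.fit([headline_data, additional_data], [labels, labels],
                    epochs=1, batch_size=32, verbose=0)
```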
Since our inputs and outputs are named (we passed them a “name” argument), we could also have compiled the model via:
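A sketch of the same compile-and-fit step keyed by name rather than by position; the model is again rebuilt compactly with illustrative sizes and dummy data:

```python
import numpy as np
from keras.layers import Input, Embedding, LSTM, Dense, concatenate
from keras.models import Model

# A compact two-input, two-output model with named inputs and outputs.
main_input = Input(shape=(100,), dtype='int32', name='main_input')
lstm_out = LSTM(32)(Embedding(10000, 64)(main_input))
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
auxiliary_input = Input(shape=(5,), name='aux_input')
x = Dense(64, activation='relu')(concatenate([lstm_out, auxiliary_input]))
main_output = Dense(1, activation='sigmoid', name='main_output')(x)
model = Model(inputs=[main_input, auxiliary_input],
              outputs=[main_output, auxiliary_output])

# Losses and weights keyed by output name instead of position.
model.compile(optimizer='rmsprop',
              loss={'main_output': 'binary_crossentropy',
                    'aux_output': 'binary_crossentropy'},
              loss_weights={'main_output': 1.0, 'aux_output': 0.2})

headline_data = np.random.randint(1, 10000, size=(64, 100))
additional_data = np.random.random((64, 5))
labels = np.random.randint(2, size=(64, 1))

# Inputs and targets keyed by the names given to the Input and output layers.
history = model.fit({'main_input': headline_data, 'aux_input': additional_data},
                    {'main_output': labels, 'aux_output': labels},
                    epochs=1, batch_size=32, verbose=0)
```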
Embedding
keras.layers.Embedding(input_dim, output_dim, embeddings_initializer='uniform', embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=False, input_length=None)
This layer can only be used as the first layer in a model.
Example
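A minimal sketch of the layer in use; the vocabulary size, embedding width, and sequence length are illustrative:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding

model = Sequential()
# A vocabulary of 1000 tokens, each mapped to a 64-dimensional vector.
model.add(Embedding(1000, 64))

# Input: an integer matrix of shape (batch, sequence_length),
# with entries in [0, 1000).
input_array = np.random.randint(1000, size=(32, 10))

# Output: one embedding vector per token, shape (32, 10, 64).
output_array = model.predict(input_array, verbose=0)
```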