Notes based on the DataCamp course: https://app.datacamp.com/learn/courses/introduction-to-deep-learning-with-pytorch

Forward Pass

What is a forward pass?

  • input data is passed forward, or propagated, through a network
  • computations performed at each layer
  • outputs of each layer are passed to the next layer
  • output of final layer: "prediction"
  • used for both training and prediction

Some possible outputs:

  • Binary classification
    • single probability between 0 and 1
  • Multiclass classification
    • distribution of probabilities summing to 1
  • Regression values
    • continuous numerical predictions

Backward Pass

  • Backward pass, or backpropagation, is used to update weights and biases during training
  • In the "training loop" (sketched in code after this list), we:
    1. Propagate data forward
    2. Compare outputs to true values (ground-truth)
    3. Backpropagate to update model weights and biases
    4. Repeat until weights and biases are tuned to produce useful outputs
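
A minimal sketch of this training loop, assuming the binary classification model from the code examples below, nn.BCELoss as the loss, and torch.optim.SGD as the optimizer (none of these choices are specified in the source notes):

import torch
import torch.nn as nn

# hypothetical data: 8 samples with 6 features each, and binary ground-truth labels
features = torch.randn(8, 6)
labels = torch.randint(0, 2, (8, 1)).float()

model = nn.Sequential(nn.Linear(6, 4), nn.Linear(4, 1), nn.Sigmoid())
criterion = nn.BCELoss()                                  # assumed loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # assumed optimizer

for epoch in range(10):
    optimizer.zero_grad()              # reset gradients from the previous iteration
    outputs = model(features)          # 1. propagate data forward
    loss = criterion(outputs, labels)  # 2. compare outputs to ground truth
    loss.backward()                    # 3. backpropagate to compute gradients
    optimizer.step()                   #    update weights and biases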

Code Examples

Binary classification: forward pass

import torch
import torch.nn as nn

# example input data with 6 features per sample
input_data = torch.tensor(...)

# create binary classification model
model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, 1),
    nn.Sigmoid()
)

# pass input data through model
output = model(input_data)
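
As a quick usage sketch (the random input tensor and its shape are assumptions, not from the course), with 5 samples of 6 features the model returns one probability per sample:

input_data = torch.randn(5, 6)  # hypothetical batch: 5 samples, 6 features
output = model(input_data)
print(output.shape)             # torch.Size([5, 1]): one probability in [0, 1] per sample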

Multi-class classification: forward pass

n_classes = 3

# create multi-class classification model
model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, n_classes),
    nn.Softmax(dim=-1) # apply softmax along the last dimension so the class scores form a probability distribution summing to 1
)

# pass input data through model
output = model(input_data)
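
The course snippets above cover only binary and multi-class classification; as a hedged sketch, a regression forward pass would drop the final activation so the output stays an unbounded continuous value:

# create regression model (no final activation, output is a continuous value)
model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, 1)
)

# pass input data through model
output = model(input_data)  # one continuous prediction per sample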

Loss Functions

Why do we need a loss function?

  • gives feedback to the model during training
  • takes in model prediction \(\hat{y}\) and ground truth \(y\)
  • outputs a float
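
As a small sketch of this (the specific loss function and tensors are assumptions for illustration), nn.CrossEntropyLoss for the multi-class case takes raw scores (logits, before softmax) and the ground-truth class index, and returns a single float:

# hypothetical scores for one sample over 3 classes, and its true class index
scores = torch.tensor([[-0.5, 1.2, 0.3]])  # raw logits, shape (1, n_classes)
target = torch.tensor([1])                 # ground-truth class index

criterion = nn.CrossEntropyLoss()
loss = criterion(scores, target)
print(loss.item())  # a single Python float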