    PyTorch: 18 Classic Loss Functions

    Tags: Pytorch, Deep Learning, Loss Functions

    I. 18 Loss Functions

    Contents

    I. 18 Loss Functions

    1. nn.CrossEntropyLoss (Cross-Entropy Loss)

    2. nn.NLLLoss

    3. nn.BCELoss

    4. nn.BCEWithLogitsLoss

    5. nn.L1Loss

    6. nn.MSELoss

    7. nn.SmoothL1Loss

    8. PoissonNLLLoss

    9. nn.KLDivLoss

    10. nn.MarginRankingLoss

    11. nn.MultiLabelMarginLoss

    12. nn.SoftMarginLoss


    Loss function: measures the discrepancy between the model output and the ground-truth labels.

    Objective function = cost function + regularization term (a penalty such as L1 or L2).
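    The regularization term itself is not one of the loss modules covered below. As a minimal sketch of the idea (the names net, inputs, target and lambda_l2 are hypothetical, not from the original code), an L2 penalty can be added to the data loss by hand:

    import torch
    import torch.nn as nn

    net = nn.Linear(4, 2)                       # hypothetical tiny model
    inputs = torch.randn(3, 4)
    target = torch.tensor([0, 1, 1])
    lambda_l2 = 1e-2                            # regularization strength

    criterion = nn.CrossEntropyLoss()           # cost function
    data_loss = criterion(net(inputs), target)
    l2_penalty = sum((p ** 2).sum() for p in net.parameters())
    objective = data_loss + lambda_l2 * l2_penalty    # objective = cost + regularization
    objective.backward()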

    1. nn.CrossEntropyLoss (Cross-Entropy Loss)

    (1) What is entropy? Entropy is one of the most fundamental and central concepts in information theory: it measures how random a probability distribution is, in other words how much information it carries.

    For a detailed derivation of the formula, see the post 一文搞懂交叉損失, which explains it in an accessible way.
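    As a quick numeric illustration (a sketch, not part of the original post): the entropy H(p) = -Σ p·log p of a uniform distribution is as large as possible, while a nearly deterministic distribution has much lower entropy.

    import torch

    def entropy(p):
        # H(p) = -sum_i p_i * log(p_i)
        return -(p * torch.log(p)).sum()

    print(entropy(torch.tensor([0.5, 0.5])))      # ~0.6931, the most random 2-class distribution
    print(entropy(torch.tensor([0.99, 0.01])))    # ~0.0560, almost deterministic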

    (2) Function: combines nn.LogSoftmax() and nn.NLLLoss() to compute the cross-entropy loss.
     

    nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')

    Main parameters:
    • weight: per-class weights applied to the loss
    • ignore_index: ignore samples of a given class (see the short sketch after this list)
    • reduction: computation mode, one of none/sum/mean
        none - compute the loss element-wise
        sum - sum over all elements, returns a scalar
        mean - weighted average, returns a scalar
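    The ignore_index option is only named above; as a minimal sketch of its effect (reusing the 2-class toy tensors from the code further below, an illustrative assumption rather than code from the original post):

    import torch
    import torch.nn as nn

    inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
    target = torch.tensor([0, 1, 1], dtype=torch.long)

    # samples whose label equals ignore_index contribute nothing to the loss
    loss_ignore = nn.CrossEntropyLoss(reduction='none', ignore_index=0)
    print(loss_ignore(inputs, target))   # the first sample (label 0) contributes 0; the others are 0.1269 as before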
     
    The cross-entropy loss is computed as follows:

    \text{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) = -x[class] + \log\left(\sum_j \exp(x[j])\right)                         (without weight)

    \text{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right)                                        (with weight)
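    Since nn.CrossEntropyLoss is described above as nn.LogSoftmax() combined with nn.NLLLoss(), here is a minimal sketch (not from the original post) verifying that equivalence on the same toy inputs used in the code below:

    import torch
    import torch.nn as nn

    inputs = torch.tensor([[1, 2], [1, 3], [1, 3]], dtype=torch.float)
    target = torch.tensor([0, 1, 1], dtype=torch.long)

    ce = nn.CrossEntropyLoss(reduction='none')(inputs, target)
    log_probs = nn.LogSoftmax(dim=1)(inputs)            # log-probabilities
    nll = nn.NLLLoss(reduction='none')(log_probs, target)
    print(ce)     # tensor([1.3133, 0.1269, 0.1269])
    print(nll)    # identical values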

    (3) Code implementation: results under the different reduction modes, without the weight parameter.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import numpy as np
    
    # input data
    inputs = torch.tensor([[1,2],[1,3],[1,3]],dtype=torch.float)
    # print(inputs)
    # labels for the input data
    target = torch.tensor([0,1,1],dtype=torch.long)
    # print(target)
    print("inputs:{0},\nlabels:{1}".format(inputs,target))
    inputs:tensor([[1., 2.],
            [1., 3.],
            [1., 3.]]),
    labels:tensor([0, 1, 1])
    # CrossEntropy loss: reduction
    """
        reduction有三種計算模式:
        1、none-逐個元素計算
        2、sum-所有元素求和,返回標量
        3、mean-加權平均,返回標量
    """
    # flag = 0
    flag = 1
    if flag:
        # define the loss functions
        loss_f_none = nn.CrossEntropyLoss(weight=None,reduction='none')
        loss_f_sum = nn.CrossEntropyLoss(weight=None,reduction='sum')
        loss_f_mean = nn.CrossEntropyLoss(weight=None,reduction='mean')

        # forward pass
        loss_none = loss_f_none(inputs,target)
        loss_sum = loss_f_sum(inputs,target)
        loss_mean = loss_f_mean(inputs,target)

        # inspect the results
        print("Cross Entropy Loss:\n",loss_none)
        print("Cross Entropy Loss:\n",loss_sum)
        print("Cross Entropy Loss:\n",loss_mean)
        
    # verify the result above by hand
    # flag = 0
    flag = 1
    if flag:
        idx = 0
        # convert the tensors to numpy arrays
        input_1 = inputs.detach().numpy()[idx] # the sample [1, 2]
        target_1 = target.numpy()[idx] # label 0

        # first term
        x_class = input_1[target_1]
        print(x_class)

        # second term
        sigma_exp_x = np.sum(list(map(np.exp,input_1)))
        log_sigma_exp_x = np.log(sigma_exp_x)

        # the loss
        loss_1 = -x_class + log_sigma_exp_x
        print("loss of the first sample:",loss_1)
    Cross Entropy Loss:
     tensor([1.3133, 0.1269, 0.1269])
    Cross Entropy Loss:
     tensor(1.5671)
    Cross Entropy Loss:
     tensor(0.5224)
    1.0
    loss of the first sample: 1.3132617

    The value given by the built-in loss and the one computed by hand from the formula agree.

    # CrossEntropyLoss ------------------- with the weight parameter ------------
    # flag = 0
    flag = 1
    if flag:
        # define the loss functions
        weights = torch.tensor([1,2],dtype=torch.float)
        loss_f_none_w = nn.CrossEntropyLoss(weight=weights,reduction='none')
        loss_f_sum = nn.CrossEntropyLoss(weight=weights,reduction='sum')
        loss_f_mean = nn.CrossEntropyLoss(weight=weights,reduction='mean')
        
        loss_none_w = loss_f_none_w(inputs,target)
        loss_sum = loss_f_sum(inputs,target)
        loss_mean = loss_f_mean(inputs,target)
        
        print("weights:{}".format(weights))
        print("loss_none_w:{}".format(loss_none_w))
        print("loss_sum:{}".format(loss_sum))
        print("loss_mean:{}".format(loss_mean))
    
    # verify by hand
    # flag = 0
    flag = 1
    if flag:
        weights = torch.tensor([1,2],dtype=torch.float)
        weights_all = np.sum(list(map(lambda x: weights.numpy()[x],target.numpy())))
        
        mean = 0
        loss_sep = loss_none.detach().numpy()
    #     print(loss_sep)
        
        for i in range(target.shape[0]):
            x_class = target.numpy()[i]
            tmp = loss_sep[i]*(weights.numpy()[x_class] / weights_all)
            mean += tmp
            
        print("loss_mean_by_hand:{}".format(mean))

     

    weights:tensor([1., 2.])
    loss_none_w:tensor([1.3133, 0.2539, 0.2539])
    loss_sum:1.8209737539291382
    loss_mean:0.36419475078582764
    loss_mean_by_hand:0.3641947731375694

    2. nn.NLLLoss

    Function: negative log-likelihood loss. It takes the (weighted) negative of the input at the target class index, so in practice the input should already be log-probabilities, e.g. the output of nn.LogSoftmax().

    NLLLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')

    \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} x_{n,y_n}, \quad w_{c} = \text{weight}[c] \cdot \mathbb{1}\{c \not= \text{ignore\_index}\}

    #----------------------------NLLLoss--------------------
    # flag = 0
    flag = 1
    if flag:
        weights = torch.tensor([1,1],dtype=torch.float)
        
        loss_f_none_w = nn.NLLLoss(weight=weights,reduction='none')
        loss_none_w = loss_f_none_w(inputs,target)
        
        loss_f_sum = nn.NLLLoss(weight=weights,reduction='sum')
        loss_sum = loss_f_sum(inputs,target)
        
        loss_f_mean = nn.NLLLoss(weight=weights,reduction='mean')
        loss_mean = loss_f_mean(inputs,target)
        print("NLL loss:",loss_none_w.numpy(),loss_sum.numpy(),loss_mean.numpy())
    NLL loss: [-1. -3. -3.] -7.0 -2.3333333

    3. nn.BCELoss

    Function: binary cross-entropy loss; the inputs must be probabilities in (0, 1), e.g. obtained by applying torch.sigmoid first (as in the code below).

     BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')

    \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]

    N is the batch size.

    #-----------------------------BCE Loss------------------------------
    # flag = 0
    flag = 1
    if flag:
        inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
        target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)
    
        target_bce = target
    
        # convert the raw scores into probabilities in (0, 1)
        inputs = torch.sigmoid(inputs)
    
        weights = torch.tensor([1, 1], dtype=torch.float)
    
        loss_f_none_w = nn.BCELoss(weight=weights, reduction='none')
        loss_f_sum = nn.BCELoss(weight=weights, reduction='sum')
        loss_f_mean = nn.BCELoss(weight=weights, reduction='mean')
    
        # forward
        loss_none_w = loss_f_none_w(inputs, target_bce)
        loss_sum = loss_f_sum(inputs, target_bce)
        loss_mean = loss_f_mean(inputs, target_bce)
    
        # view
        print("\nweights: ", weights)
        print("BCE Loss", loss_none_w, loss_sum, loss_mean)
        
    
    #------------------------ verify by hand ----------------------------------
    # flag = 0
    flag = 1
    if flag:
        idx = 0
        x_i = inputs.detach().numpy()[idx,idx]
        y_i = target.numpy()[idx,idx]
        
        #loss
        # l_i = -[ y_i * np.log(x_i) + (1-y_i) * np.log(1-y_i) ]      # np.log(0) = nan
        l_i = -y_i * np.log(x_i) if y_i else -(1-y_i)*np.log(1-x_i)
        
        print("BCE inputs:",inputs)
        print("loss of the first element:",l_i)
    weights:  tensor([1., 1.])
    BCE Loss tensor([[0.3133, 2.1269],
            [0.1269, 2.1269],
            [3.0486, 0.0181],
            [4.0181, 0.0067]]) tensor(11.7856) tensor(1.4732)
    BCE inputs: tensor([[0.7311, 0.8808],
            [0.8808, 0.8808],
            [0.9526, 0.9820],
            [0.9820, 0.9933]])
    loss of the first element: 0.31326166

    4. nn.BCEWithLogitsLoss

    Function: combines a sigmoid layer with the binary cross-entropy loss in one numerically more stable step; the inputs are raw logits, so no sigmoid is applied beforehand.

    BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)

    \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right]

    \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}

    \ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right],

    where c is the class index for multi-label classification, n indexes the samples in the batch, and p_c is the weight of the positive answer for class c.

    N is the batch size; reduction defaults to 'mean'.

    #-------------------------------BCE with Logis Loss-----------------
    # flag = 0
    flag = 1
    if flag:
        inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
        target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)
    
        target_bce = target
    
        # inputs = torch.sigmoid(inputs)    # not needed: BCEWithLogitsLoss applies the sigmoid internally
    
        weights = torch.tensor([1, 1], dtype=torch.float)
    
        loss_f_none_w = nn.BCEWithLogitsLoss(weight=weights, reduction='none')
        loss_f_sum = nn.BCEWithLogitsLoss(weight=weights, reduction='sum')
        loss_f_mean = nn.BCEWithLogitsLoss(weight=weights, reduction='mean')
    
        # forward
        loss_none_w = loss_f_none_w(inputs, target_bce)
        loss_sum = loss_f_sum(inputs, target_bce)
        loss_mean = loss_f_mean(inputs, target_bce)
    
        # view
        print("\nweights: ", weights)
        print(loss_none_w, loss_sum, loss_mean)
    weights:  tensor([1., 1.])
    tensor([[0.3133, 2.1269],
            [0.1269, 2.1269],
            [3.0486, 0.0181],
            [4.0181, 0.0067]]) tensor(11.7856) tensor(1.4732)

    Above, pos_weight=None keeps its default value; now change pos_weight to see its effect.

    # --------------------------------- pos weight
    
    # flag = 0
    flag = 1
    if flag:
        inputs = torch.tensor([[1, 2], [2, 2], [3, 4], [4, 5]], dtype=torch.float)
        target = torch.tensor([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=torch.float)
    
        target_bce = target
    
        # no sigmoid here: BCEWithLogitsLoss applies it internally
        # inputs = torch.sigmoid(inputs)
    
        weights = torch.tensor([1], dtype=torch.float)
        pos_w = torch.tensor([3], dtype=torch.float)        # 3
    
        loss_f_none_w = nn.BCEWithLogitsLoss(weight=weights, reduction='none', pos_weight=pos_w)
        loss_f_sum = nn.BCEWithLogitsLoss(weight=weights, reduction='sum', pos_weight=pos_w)
        loss_f_mean = nn.BCEWithLogitsLoss(weight=weights, reduction='mean', pos_weight=pos_w)
    
        # forward
        loss_none_w = loss_f_none_w(inputs, target_bce)
        loss_sum = loss_f_sum(inputs, target_bce)
        loss_mean = loss_f_mean(inputs, target_bce)
    
        # view
        # the positive-class weight is set to 3; compare with the unweighted results above
        print("\npos_weights: ", pos_w)
        print(loss_none_w, loss_sum, loss_mean)
    pos_weights:  tensor([3.])
    tensor([[0.9398, 2.1269],
            [0.3808, 2.1269],
            [3.0486, 0.0544],
            [4.0181, 0.0201]]) tensor(12.7158) tensor(1.5895)
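    A quick hand check of the first element, following the formula with p_c above (a sketch reusing the tensors defined in the snippet): with x = 1, y = 1 and pos_weight = 3, the loss is -3·log σ(1) = 3 × 0.3133 ≈ 0.9398, matching the first entry of the tensor above.

    # hand verification of the first element with pos_weight = 3
    x_00 = inputs[0, 0]
    y_00 = target_bce[0, 0]
    l_00 = -pos_w * y_00 * torch.log(torch.sigmoid(x_00))   # the (1 - y) term vanishes since y = 1
    print(l_00)   # tensor([0.9398])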

    5. nn.L1Loss

    Function: computes the element-wise absolute difference between inputs and target.
    L1Loss(size_average=None, reduce=None, reduction='mean')

    Main parameters:
    • reduction: computation mode, one of none/sum/mean
    none - compute the loss element-wise
    sum - sum over all elements, returns a scalar
    mean - weighted average, returns a scalar
     

    It measures the mean absolute error (MAE) between each element of the input x and the target y.

    \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n - y_n \right|

    N is the batch size; reduction defaults to 'mean'.

    \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}

    The sum is still taken over all elements, and the mean then divides by n. Setting reduction='sum' avoids the division by n.

    #--------------------L1 loss-----------------------------------
    inputs = torch.ones((2,2))
    target = torch.ones((2,2))*3
    loss_f = nn.L1Loss(reduction='none')
    loss = loss_f(inputs,target)
    # loss = |input-target|
    print("input:{}\ntarget:{}\nL1loss:{}".format(inputs,target,loss))
    input:tensor([[1., 1.],
            [1., 1.]])
    target:tensor([[3., 3.],
            [3., 3.]])
    L1loss:tensor([[2., 2.],
            [2., 2.]])

    6. nn.MSELoss

    Function: computes the element-wise squared difference between inputs and target, i.e. the mean squared error (squared L2 norm) between each element of the input x and the target y.
    Main parameters:
    • reduction: computation mode, one of none/sum/mean
    none - compute the loss element-wise
    sum - sum over all elements, returns a scalar
    mean - weighted average, returns a scalar

    MSELoss(size_average=None, reduce=None, reduction='mean')

    \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2

    \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}

    The reduction options behave exactly as in nn.L1Loss.

    #----------------------MSE loss----------------------------------
    inputs = torch.ones((2,2))
    target = torch.ones((2,2))*3
    loss_f_mse = nn.MSELoss(reduction='none')
    loss_mse = loss_f_mse(inputs,target)
    print("MSE Loss:{}".format(loss_mse))
    MSE Loss:tensor([[4., 4.],
            [4., 4.]])
    

    7. nn.SmoothL1Loss

    An improvement on L1 loss: it uses a squared term when the absolute element-wise error is below 1 and an L1 term otherwise. It is less sensitive to outliers than MSELoss and in some cases can prevent exploding gradients. As before, reduction defaults to 'mean'.

    SmoothL1Loss(size_average=None, reduce=None, reduction='mean')

    \text{loss}(x, y) = \frac{1}{n} \sum_{i} z_{i}

    z_{i} =\begin{cases} 0.5 (x_i - y_i)^2, & \text{if } |x_i - y_i| < 1 \\ |x_i - y_i| - 0.5, & \text{otherwise } \end{cases}
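    For example, at |x_i - y_i| = 0.5 the squared branch gives 0.5 × 0.5² = 0.125 (versus 0.5 for plain L1), while at |x_i - y_i| = 2 the linear branch gives 2 - 0.5 = 1.5, which is exactly what the comparison plot produced below shows.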

    #--------------------------------SmoothL1Loss-------------------
    # torch.linspace returns a 1-D tensor containing 'steps' evenly spaced points from start to end (endpoints included)
    inputs = torch.linspace(-3,3,steps=500)
    # print(inputs)
    target = torch.zeros_like(inputs)
    # print(target)
    import matplotlib.pyplot as plt
    loss_f = nn.SmoothL1Loss(reduction='none')
    loss_smooth = loss_f(inputs,target)
    loss_l1 = np.abs(inputs.numpy())
    plt.plot(inputs.numpy(), loss_smooth.numpy(), label='Smooth L1 Loss')
    plt.plot(inputs.numpy(), loss_l1, label='L1 loss')
    plt.xlabel('x_i - y_i')
    plt.ylabel('loss value')
    plt.legend()
    plt.grid()
    plt.show()

    8. PoissonNLLLoss

    Function: negative log-likelihood loss for a Poisson-distributed target.

    PoissonNLLLoss(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')
    Main parameters:
    • log_input: whether the input is given in log space; determines which formula is used
    • full: whether to include the Stirling approximation term; defaults to False
    • eps: small correction term that keeps log(input) from becoming nan

     

    \text{target} \sim \mathrm{Poisson}(\text{input})

    \text{loss}(\text{input}, \text{target}) = \text{input} - \text{target} * \log(\text{input}) + \log(\text{target!})

    The last term can be omitted or approximated with Stirling's formula; the approximation is used for target values greater than 1, and for targets less than or equal to 1 zero is added to the loss.

    When log_input=True:

    \text{loss}(\text{input}, \text{target}) = \exp(\text{input}) - \text{target}*\text{input}

    When log_input=False:

    \text{loss}(\text{input}, \text{target}) = \text{input} - \text{target}*\log(\text{input}+\text{eps})

    #------------------------Poisson NLL loss---------------------------------
    inputs = torch.randn((2,2))
    target = torch.randn((2,2))
    loss_f = nn.PoissonNLLLoss(log_input=True,full=False,reduction='none')
    loss = loss_f(inputs,target)
    print("input:{}\ntarget:{}\nPoisson NLL loss:{}".format(inputs, target, loss))
    
    #------------------------- verify by hand -------------------------------------
    idx = 0
    # follows the log_input=True formula above
    loss_1 = torch.exp(inputs[idx,idx]) - target[idx,idx]*inputs[idx,idx]
    print(inputs[idx,idx])
    print("loss of the first element:", loss_1)
    input:tensor([[-2.2698,  1.6573],
            [ 1.9074,  0.3021]])
    target:tensor([[ 0.9725, -1.1898],
            [-0.5932, -1.1603]])
    Poisson NLL loss:tensor([[2.3108, 7.2171],
            [7.8672, 1.7031]])
    tensor(-2.2698)
    loss of the first element: tensor(2.3108)
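    The snippet above uses log_input=True; here is a minimal sketch (not from the original post) of the log_input=False branch, where the input is interpreted as the rate itself and therefore must be positive:

    # log_input=False follows: loss = input - target * log(input + eps)
    inputs_rate = torch.rand((2, 2)) + 0.1          # positive, probability-space rates
    target = torch.randn((2, 2))
    loss_f_rate = nn.PoissonNLLLoss(log_input=False, full=False, reduction='none')
    loss_rate = loss_f_rate(inputs_rate, target)
    # hand check of the first element
    loss_rate_1 = inputs_rate[0, 0] - target[0, 0] * torch.log(inputs_rate[0, 0] + 1e-8)
    print(loss_rate[0, 0], loss_rate_1)             # the two values agree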

    9. nn.KLDivLoss

    KLDivLoss(size_average=None, reduce=None, reduction='mean')

    KL divergence is a useful distance measure between distributions, and it is often used when doing direct regression over the space of (discretely sampled) continuous output distributions.

    Function: computes the KL divergence (KLD, also called relative entropy).
    Note: the input must already be log-probabilities, e.g. computed via nn.LogSoftmax().
    Main parameters:
    • reduction: none/sum/mean/batchmean
                          batchmean - average over the batch-size dimension
                          none - compute the loss element-wise
                          sum - sum over all elements, returns a scalar
                          mean - weighted average, returns a scalar
    For a detailed description of KL divergence, see: 一文搞懂交叉損失
     

    l(x,y) = L = \{ l_1,\dots,l_N \}, \quad l_n = y_n \cdot \left( \log y_n - x_n \right)

    \ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}

    #-----------------------------------KL Divergence Loss-------------------
    inputs = torch.tensor([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]])
    inputs_log = torch.log(inputs)
    target = torch.tensor([[0.9, 0.05, 0.05], [0.1, 0.7, 0.2]], dtype=torch.float)
    
    loss_f_none = nn.KLDivLoss(reduction='none')
    loss_f_mean = nn.KLDivLoss(reduction='mean')
    loss_f_bs_mean = nn.KLDivLoss(reduction='batchmean')
    
    loss_none = loss_f_none(inputs, target)
    loss_mean = loss_f_mean(inputs, target)
    loss_bs_mean = loss_f_bs_mean(inputs, target)
    
    print("loss_none:\n{}\nloss_mean:\n{}\nloss_bs_mean:\n{}".format(loss_none, loss_mean, loss_bs_mean))
    
    #---------------------------------- verify by hand -------------------------------
    idx = 0
    # follows the formula above
    loss_1 = target[idx, idx] * (torch.log(target[idx, idx]) - inputs[idx, idx])
    print("loss of the first element:", loss_1)
    loss_none:
    tensor([[-0.5448, -0.1648, -0.1598],
            [-0.2503, -0.4597, -0.4219]])
    loss_mean:
    -0.3335360586643219
    loss_bs_mean:
    -1.000608205795288
    loss of the first element: tensor(-0.5448)
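    Note that the snippet above passes the raw probabilities inputs to nn.KLDivLoss even though inputs_log was computed, so the printed values are not a true KL divergence. A minimal sketch of the documented usage (log-probabilities as the first argument), reusing the same tensors:

    # the first argument to nn.KLDivLoss must be log-probabilities
    loss_f_none = nn.KLDivLoss(reduction='none')
    loss_log = loss_f_none(inputs_log, target)
    print(loss_log)
    # hand check of the first element: y * (log y - log x)
    loss_log_1 = target[0, 0] * (torch.log(target[0, 0]) - inputs_log[0, 0])
    print(loss_log_1)   # equals loss_log[0, 0]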

    10. nn.MarginRankingLoss

    MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')
    Function: computes a margin-based ranking loss between two inputs; used for ranking tasks.
    Special note: it compares the two sets of data pair-wise and returns an n*n loss matrix (via broadcasting, see below).
    Main parameters:
    • margin: the required margin between x1 and x2
    • reduction: computation mode, one of none/sum/mean
     

    \text{loss}(x, y) = \max(0, -y * (x1 - x2) + \text{margin})

    When y = 1, x1 is expected to be larger than x2; no loss is produced when x1 > x2.
    When y = -1, x2 is expected to be larger than x1; no loss is produced when x2 > x1.
    #-----------------------------Margin ranking Loss------------------------
    
    x1 = torch.tensor([[1], [2], [3]], dtype=torch.float)
    x2 = torch.tensor([[2], [2], [2]], dtype=torch.float)
    
    target = torch.tensor([1, 1, -1], dtype=torch.float)
    
    loss_f_none = nn.MarginRankingLoss(margin=0, reduction='none')
    
    loss = loss_f_none(x1, x2, target)
    
    print(loss)
    tensor([[1., 1., 0.],
            [0., 0., 0.],
            [0., 0., 1.]])
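    The 3×3 matrix comes from broadcasting: x1 and x2 have shape (3, 1) while target has shape (3,), so entry [i, j] is max(0, -y[j]·(x1[i] - x2[i])). For example, entry [0, 0] is max(0, -1·(1 - 2)) = 1 and entry [0, 2] is max(0, -(-1)·(1 - 2)) = 0, matching the first row above.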

    11. nn.MultiLabelMarginLoss

    MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean')
    
    Optimizes a multi-class multi-classification hinge loss (margin-based loss) between the input x (a 2D mini-batch Tensor) and the target y (a 2D Tensor of target class indices). For each mini-batch sample:

    \text{loss}(x, y) = \sum_{ij} \frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}

    where i \in \{0, \dots, \text{x.size}(0) - 1\}, j \in \{0, \dots, \text{y.size}(0) - 1\}, 0 \leq y[j] \leq \text{x.size}(0) - 1, and i \neq y[j] for all i and j; x and y must have the same size. The target y holds class indices padded with -1, and only the leading block of non-negative entries is considered (in the code below, classes 0 and 3).

    #------------------------Multi label Margin loss-------------------
    x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
    y = torch.tensor([[0, 3, -1, -1]], dtype=torch.long)
    loss_f = nn.MultiLabelMarginLoss(reduction='none')
    loss = loss_f(x, y)
    print(loss.squeeze().numpy())
    
    #------------------------- verify by hand -------------------------------------
    x = x[0]
    item_1 = (1-(x[0]-x[1])) + (1-(x[0] - x[2])) # terms for target class y[0] = 0 (i = 1, 2)
    item_2 = (1-(x[3] - x[1])) + (1 - (x[3] - x[2]))    # terms for target class y[1] = 3
    loss_h = (item_1 + item_2) / x.shape[0]
    print(loss_h.numpy())
    0.85
    0.85

    12. nn.SoftMarginLoss

    SoftMarginLoss(size_average=None, reduce=None, reduction='mean')

    Function: computes the two-class classification logistic loss.

    Main parameters:

    • reduction: computation mode, one of none/sum/mean

    \text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}

    x.nelement() above counts the number of elements in the tensor x.

    #---------------------------nn.SoftmarginLoss-------------------
    inputs = torch.tensor([[0.3, 0.7], [0.5, 0.5]])
    target = torch.tensor([[-1, 1], [1, -1]], dtype=torch.float)
    loss_f = nn.SoftMarginLoss(reduction='none')
    loss = loss_f(inputs, target)
    print("softmarginloss:\n{}".format(loss))
    
    #---------------------------compute by hand------------------------
    idx = 1
    idx1 = 0
    inputs_i = inputs[idx,idx1]
    # print(inputs_i)
    target_i = target[idx,idx1]
    loss_hand = np.log(1+np.exp(-target_i*inputs_i))
    print(loss_hand)
    softmarginloss:
    tensor([[0.8544, 0.4032],
            [0.4741, 0.9741]])
    tensor(0.4741)

    PS: several more loss functions will be added in a follow-up.
    Copyright notice: this is an original article by weixin_43687366, released under the CC 4.0 BY-SA license; please include the original source link and this notice when reposting.
    Original link: https://blog.csdn.net/weixin_43687366/article/details/107927693
