Coding Our First Neurons
- Introduction
- 1. A Single Neuron:
- Example 1
- Example 2
- 2. A Layer of Neurons:
- Example 1
Introduction
This tutorial is aimed at readers who already have a basic understanding of neural networks but have never built one by hand. Although the internet is full of tutorials, many are either too brief or jump straight into advanced topics, leaving beginners struggling to keep up. This guide walks you through building a simple neural network model in Python from scratch, breaking down every step so that you genuinely understand how neural networks work and have a solid foundation for further study.
1. A Single Neuron:
We start with the simplest building block: a single neuron.
Example 1
inputs = [1, 2, 3]          # values coming into the neuron
weights = [0.2, 0.8, -0.5]  # one weight per input
bias = 2                    # a single bias per neuron

# The neuron's output: each input times its weight, summed, plus the bias
output = (inputs[0]*weights[0] +
          inputs[1]*weights[1] +
          inputs[2]*weights[2] + bias)
print("output:", output)
>>>
output: 2.3
Code visualization: https://nnfs.io/bkr/
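The same weighted sum can also be written with Python's built-in zip() and sum(), which works for any number of inputs without spelling out each index. This is a sketch added here for illustration; the original examples stick to the explicit index form:

inputs = [1, 2, 3]
weights = [0.2, 0.8, -0.5]
bias = 2

# Pair each input with its weight, multiply each pair, sum the products
output = sum(i*w for i, w in zip(inputs, weights)) + bias
print("output:", output)
>>>
output: 2.3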
Example 2
inputs = [1.0, 2.0, 3.0, 2.5]
weights = [0.2, 0.8, -0.5, 1.0]
bias = 2.0

output = (inputs[0]*weights[0] +
          inputs[1]*weights[1] +
          inputs[2]*weights[2] +
          inputs[3]*weights[3] + bias)
print(output)
>>>
4.8
Code visualization: https://nnfs.io/djp/
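As a preview of where this is heading, the same four-input neuron can be computed with NumPy's dot product. NumPy is not used in the examples in this section, so treat this as an optional aside (it assumes numpy is installed):

import numpy as np

inputs = [1.0, 2.0, 3.0, 2.5]
weights = [0.2, 0.8, -0.5, 1.0]
bias = 2.0

# np.dot multiplies each input by its weight and sums the products
output = np.dot(weights, inputs) + bias
print(output)
>>>
4.8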
2. A Layer of Neurons:
Now we move from a single neuron to a layer of them.
Example 1
Suppose we have the following scenario: a layer of 3 neurons receiving 4 inputs.
inputs = [1, 2, 3, 2.5]

weights1 = [0.2, 0.8, -0.5, 1]
weights2 = [0.5, -0.91, 0.26, -0.5]
weights3 = [-0.26, -0.27, 0.17, 0.87]

bias1 = 2
bias2 = 3
bias3 = 0.5

outputs = [
    # Neuron 1:
    inputs[0]*weights1[0] +
    inputs[1]*weights1[1] +
    inputs[2]*weights1[2] +
    inputs[3]*weights1[3] + bias1,

    # Neuron 2:
    inputs[0]*weights2[0] +
    inputs[1]*weights2[1] +
    inputs[2]*weights2[2] +
    inputs[3]*weights2[3] + bias2,

    # Neuron 3:
    inputs[0]*weights3[0] +
    inputs[1]*weights3[1] +
    inputs[2]*weights3[2] +
    inputs[3]*weights3[3] + bias3]

print("outputs:", outputs)
>>>
outputs: [4.8, 1.21, 2.385]
Code visualization: https://nnfs.io/mxo/
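To see where these three numbers come from, here is each neuron's weighted sum written out step by step (this breakdown is added for illustration and is not part of the original code):

Neuron 1: 1*0.2 + 2*0.8 + 3*(-0.5) + 2.5*1 + 2
        = 0.2 + 1.6 - 1.5 + 2.5 + 2 = 4.8
Neuron 2: 1*0.5 + 2*(-0.91) + 3*0.26 + 2.5*(-0.5) + 3
        = 0.5 - 1.82 + 0.78 - 1.25 + 3 = 1.21
Neuron 3: 1*(-0.26) + 2*(-0.27) + 3*0.17 + 2.5*0.87 + 0.5
        = -0.26 - 0.54 + 0.51 + 2.175 + 0.5 = 2.385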
In this code, we have three sets of weights and three biases, which define three neurons. Each neuron is "connected" to the same inputs; the difference is that each neuron applies its own separate weights and bias to those inputs. This is called a fully connected neural network: every neuron in the current layer has a connection to every neuron in the previous layer. This is a very common type of neural network, but it should be noted that networks do not have to be fully connected like this.

So far we have only shown the code for a single layer with very few neurons. Imagine coding many more layers and neurons; it would quickly become very challenging. Instead of our current approach, we can use loops to scale up and handle dynamically sized inputs and layers. We have turned the separate weight variables into a list of weight lists so we can iterate over them, and we have changed the code to use loops instead of hardcoded operations.
inputs = [1, 2, 3, 2.5]
weights = [[0.2, 0.8, -0.5, 1],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2, 3, 0.5]

# Output of current layer
layer_outputs = []
# For each neuron
for neuron_weights, neuron_bias in zip(weights, biases):
    # Zeroed output of given neuron
    neuron_output = 0
    # For each input and weight to the neuron
    for n_input, weight in zip(inputs, neuron_weights):
        # Multiply this input by associated weight
        # and add to the neuron's output variable
        neuron_output += n_input*weight
    # Add bias
    neuron_output += neuron_bias
    # Put neuron's result to the layer's output list
    layer_outputs.append(neuron_output)

print("layer_outputs:", layer_outputs)
print(list(zip(weights, biases)))
print(list(zip(weights, biases))[0])
print(type(list(zip(weights, biases))[0]))
>>>
layer_outputs: [4.8, 1.21, 2.385]
[([0.2, 0.8, -0.5, 1], 2), ([0.5, -0.91, 0.26, -0.5], 3), ([-0.26, -0.27, 0.17, 0.87], 0.5)]
([0.2, 0.8, -0.5, 1], 2)
<class 'tuple'>
This does the same thing as before, just in a more dynamic and scalable way. If you find yourself confused at one of the steps, print() out the objects to see what they are and what is happening. The zip() function lets us iterate over multiple iterables (lists, in this case) simultaneously. Again, all we are doing is this: for each neuron (the outer loop in the code above, iterating over the neuron weights and biases), take each input value multiplied by the weight associated with that input (the inner loop, iterating over inputs and weights), add all of these together, and then add the bias at the end. Finally, the neuron's result is appended to the layer's output list.
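To make the loop version reusable, we can wrap it in a function. The name dense_layer_forward is ours, not from the original code; this is a minimal sketch of the same logic under that assumption:

def dense_layer_forward(inputs, weights, biases):
    # Forward pass of one fully connected layer:
    # one weighted sum plus bias per neuron
    layer_outputs = []
    for neuron_weights, neuron_bias in zip(weights, biases):
        neuron_output = 0
        for n_input, weight in zip(inputs, neuron_weights):
            neuron_output += n_input*weight
        neuron_output += neuron_bias
        layer_outputs.append(neuron_output)
    return layer_outputs

inputs = [1, 2, 3, 2.5]
weights = [[0.2, 0.8, -0.5, 1],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2, 3, 0.5]

print(dense_layer_forward(inputs, weights, biases))
>>>
[4.8, 1.21, 2.385]

Because the function takes its sizes from the lists themselves, the same code handles any number of inputs and neurons, which is exactly the scalability the loop rewrite was after.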