V2EX › happydezhangning › All replies, page 8 of 12
Total replies: 226
2019-04-08 14:03:26 +08:00
Replied to happydezhangning's Q&A topic: How do I configure a GPU server?
@ivmm Renting cloud GPUs can't be reimbursed, and my boss (advisor) doesn't want to pay out of his own pocket...
2019-04-08 13:59:27 +08:00
Replied to happydezhangning's Q&A topic: How do I configure a GPU server?
Somebody please help this poor student
2019-04-02 15:22:54 +08:00
Replied to 0x11901's Q&A topic: How do you live a meaningful life?
We're just vehicles for competing genes; there's no inherent meaning.
2019-03-24 09:38:53 +08:00
Replied to oldarm's Beijing topic: How does everyone get their hair cut?
@littlejohnny How come you can post voice messages?
2019-03-23 09:20:36 +08:00
Replied to FunnyCodingXu's Games topic: Do you get addicted to games? Why?
Sex and violence are written into our genes.
2019-03-04 08:45:50 +08:00
Replied to evend's Q&A topic: How to study math
Check out 张宇's graduate-entrance-exam math tutorial videos; they're light and funny.
2019-03-03 11:27:31 +08:00
Replied to Raphael96's Career topic: If you weren't a developer, what career would you switch to?
Fisherman: heading out to sea to fish, and foraging the shore at low tide.
2019-03-02 16:00:52 +08:00
Replied to kblacksheep's Q&A topic: Besides Uniqlo, any T-shirt recommendations under 100 RMB?
GU
2019-02-25 17:55:44 +08:00
Replied to tanranran's Q&A topic: Trying to win over my girlfriend, is there a beginner tutorial for hand-knitting a coin purse?
@fzdfengzi That cracked me up
2019-02-24 19:00:44 +08:00
Replied to admol's Promotions topic: First time selling navel oranges we grew ourselves, also giving some away in a raffle
Count me in to show support
2019-02-23 13:30:06 +08:00
Replied to happydezhangning's Q&A topic: On a neural network that won't converge during training
2019-02-23 13:09:32 +08:00
Replied to happydezhangning's Q&A topic: On a neural network that won't converge during training
@baiye23333 The model requirements are:
1.2 Network Architecture
Implement a neural network with layers in the following order:
Input: image of size 32 × 32 × 3.
Convolutional layer: apply 16 filters of size 3 × 3 × 3, stride 1, padding 0.
Layer input 32 × 32 × 3, output 30 × 30 × 16.
ReLU layer: apply the ReLU activation function to each component.
Layer input 30 × 30 × 16, output 30 × 30 × 16.
Pooling layer: max-pooling with size 2 × 2 and stride 2.
Layer input 30 × 30 × 16, output 15 × 15 × 16.
Fully-connected layer: reshape the data to a vector of length 3600 and fully connect it to 10 output nodes.
Layer input 15 × 15 × 16 = 3600, output size 10.
Softmax layer: apply the softmax function to obtain the final output, the probability of each category.
Layer input size 10, output size 10.
Here you should work out the forward and backward propagation yourself.
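The layer shapes in that spec can be sanity-checked with a short NumPy sketch (the variable names are mine, not from the assignment, and the weights are random since only shapes matter here):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal((32, 32, 3))           # input image, 32x32x3
w = rng.standard_normal((3, 3, 3, 16)) * 0.01  # 16 filters of size 3x3x3
b = np.zeros(16)

# Convolution, stride 1, no padding: 32x32x3 -> 30x30x16
conv = np.zeros((30, 30, 16))
for i in range(30):
    for j in range(30):
        patch = x[i:i+3, j:j+3, :]  # one 3x3x3 window
        conv[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b

relu = np.maximum(conv, 0)  # ReLU keeps the 30x30x16 shape

# 2x2 max pooling, stride 2: 30x30x16 -> 15x15x16
pool = relu.reshape(15, 2, 15, 2, 16).max(axis=(1, 3))

# Fully connected 3600 -> 10, then softmax
w_fc = rng.standard_normal((3600, 10)) * 0.01
logits = pool.reshape(1, 3600) @ w_fc
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(conv.shape, pool.shape, probs.shape)  # (30, 30, 16) (15, 15, 16) (1, 10)
```

The reshape-then-max trick for pooling works because the pooling stride equals the window size, so the 2×2 windows tile the feature map exactly.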
2019-02-23 13:07:41 +08:00
Replied to happydezhangning's Q&A topic: On a neural network that won't converge during training
@baiye23333
import math
import pickle

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

np.set_printoptions(threshold=np.inf)

# Open a CIFAR-10 batch file
def unpickle(file):
    with open(file, 'rb') as fo:
        batch = pickle.load(fo, encoding='bytes')
    return batch

dict1 = unpickle('cifar-10-batches-py/data_batch_1')
dict2 = unpickle('cifar-10-batches-py/data_batch_2')
dict3 = unpickle('cifar-10-batches-py/data_batch_3')
dict4 = unpickle('cifar-10-batches-py/data_batch_4')
dict5 = unpickle('cifar-10-batches-py/data_batch_5')
test_dict = unpickle('cifar-10-batches-py/test_batch')

# Reshape each 3072-byte row into a 32x32x3 image
data1_4d = np.reshape(dict1[b'data'], (10000, 32, 32, 3), order='F')
data1_4d = np.rot90(data1_4d, k=3, axes=(1, 2))
data2_4d = np.reshape(dict2[b'data'], (10000, 32, 32, 3), order='F')
data2_4d = np.rot90(data2_4d, k=3, axes=(1, 2))
data3_4d = np.reshape(dict3[b'data'], (10000, 32, 32, 3), order='F')
data3_4d = np.rot90(data3_4d, k=3, axes=(1, 2))
data4_4d = np.reshape(dict4[b'data'], (10000, 32, 32, 3), order='F')
data4_4d = np.rot90(data4_4d, k=3, axes=(1, 2))
data5_4d = np.reshape(dict5[b'data'], (10000, 32, 32, 3), order='F')
data5_4d = np.rot90(data5_4d, k=3, axes=(1, 2))
test_data = np.reshape(test_dict[b'data'], (10000, 32, 32, 3), order='F')
test_data = np.rot90(test_data, k=3, axes=(1, 2))

label1 = dict1[b'labels']
label2 = dict2[b'labels']
label3 = dict3[b'labels']
label4 = dict4[b'labels']
label5 = dict5[b'labels']
test_label = test_dict[b'labels']

def softmax(x):
    # Subtract the max for numerical stability
    x = x - np.max(x)
    return np.exp(x) / np.sum(np.exp(x))

# Parameter initialization
weight = np.random.normal(loc=0, scale=0.01, size=(3, 3, 3, 16))
bias = np.zeros([16], dtype=np.float64)
conv_out = np.zeros([30, 30, 16], dtype=np.float64)
Maxpool_out = np.zeros([15, 15, 16], dtype=np.float64)
weight_of_fc = np.random.uniform(0, 0.1, size=(3600, 10))
fc_in = np.zeros([1, 3600], dtype=np.float64)
softmax_out = np.zeros([1, 10], dtype=np.float64)
Relu_out = np.zeros([30, 30, 16], dtype=np.float64)
dl_div_weight = np.zeros([3, 3, 3, 16], dtype=np.float64)
dl_div_bias = np.zeros([16], dtype=np.float64)


def fc_forward(in_pic):
    global conv_out, weight, Maxpool_out, bias, Relu_out, softmax_out
    global weight_of_fc, fc_in, dl_div_weight, dl_div_bias
    # Convolutional layer: 16 filters of size 3x3x3, stride 1, padding 0.
    # Input 32x32x3, output 30x30x16.
    for i in range(16):
        for j in range(30):
            for k in range(30):
                conv_out[j][k][i] = (in_pic[j:j+3, k:k+3, 0] * weight[:, :, 0, i]).sum() + \
                                    (in_pic[j:j+3, k:k+3, 1] * weight[:, :, 1, i]).sum() + \
                                    (in_pic[j:j+3, k:k+3, 2] * weight[:, :, 2, i]).sum()
    conv_out += bias
    Relu_out = np.maximum(conv_out, 0)  # ReLU activation
    # Max-pooling layer: 2x2 window, stride 2
    for i in range(16):
        for j in range(15):
            for k in range(15):
                Maxpool_out[j][k][i] = np.max(Relu_out[j*2:j*2+2, k*2:k*2+2, i])
    fc_in = np.reshape(Maxpool_out, (1, 3600))
    fc_out = np.dot(fc_in, weight_of_fc)
    softmax_out = softmax(fc_out)
    return np.argmax(fc_out)

# Loss: cross-entropy, loss = -log(p[true label])
def back_forward(inputs, label):  # update the conv and fully-connected parameters
    global conv_out, weight, Maxpool_out, bias, Relu_out, softmax_out
    global weight_of_fc, fc_in, dl_div_weight, dl_div_bias
    for index, input_picture in enumerate(inputs):
        num_predict = fc_forward(input_picture)
        print("softmax_out:", softmax_out)
        print("prediction:", num_predict, "ground truth:", label[index])
        # dL/dz of the fc output z is p - y; after this line softmax_out holds dL/dz
        softmax_out[0][label[index]] -= 1

        dw_fc = np.dot(np.transpose(fc_in), softmax_out)
        # fc_in.T is 3600x1, softmax_out is 1x10, so dw_fc is 3600x10
        dl_div_dfc3600 = np.dot(softmax_out, np.transpose(weight_of_fc))
        # weight_of_fc is 3600x10, dL/dz = softmax_out is 1x10, so dl_div_dfc3600 is 1x3600
        dl_div_dMaxpool_out = np.reshape(dl_div_dfc3600, (15, 15, 16))

        # Gradient w.r.t. the ReLU output (the pooling input):
        # route each pooled gradient back to the max element of its 2x2 window
        dl_div_dRelu_out = np.zeros([30, 30, 16], dtype=np.float64)
        for i in range(16):
            for j in range(15):
                for k in range(15):
                    if Maxpool_out[j][k][i] == Relu_out[j*2][k*2][i]:
                        dl_div_dRelu_out[j*2][k*2][i] = dl_div_dMaxpool_out[j][k][i]
                    elif Maxpool_out[j][k][i] == Relu_out[j*2+1][k*2][i]:
                        dl_div_dRelu_out[j*2+1][k*2][i] = dl_div_dMaxpool_out[j][k][i]
                    elif Maxpool_out[j][k][i] == Relu_out[j*2][k*2+1][i]:
                        dl_div_dRelu_out[j*2][k*2+1][i] = dl_div_dMaxpool_out[j][k][i]
                    else:
                        dl_div_dRelu_out[j*2+1][k*2+1][i] = dl_div_dMaxpool_out[j][k][i]
        # Gradient w.r.t. conv_out: the ReLU derivative is 1 where conv_out >= 0, else 0
        dReluout_div_convout = (conv_out >= 0).astype(np.float64)
        dl_div_convout = dReluout_div_convout * dl_div_dRelu_out  # 30x30x16

        # Gradients w.r.t. the convolution weights and bias
        for i in range(16):
            for j in range(3):
                for k in range(3):
                    for m in range(3):
                        dl_div_weight[k, m, j, i] = \
                            (input_picture[k:k+30, m:m+30, j] * dl_div_convout[:, :, i]).sum()
            dl_div_bias[i] = dl_div_convout[:, :, i].sum()
        weight_of_fc = weight_of_fc - 0.001 * dw_fc
        weight = weight - 0.001 * dl_div_weight
        bias = bias - 0.001 * dl_div_bias

def train():
    back_forward(data1_4d, label1)
    back_forward(data2_4d, label2)
    back_forward(data3_4d, label3)
    back_forward(data4_4d, label4)
    back_forward(data5_4d, label5)

train()
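One thing worth checking with code like the above: the CIFAR-10 rows come out of pickle as uint8 pixels in [0, 255], and feeding them into the network raw makes the conv activations and gradients very large, which commonly makes training diverge rather than converge. This is only a guess at the cause here, but a minimal preprocessing sketch (array names are mine) would look like:

```python
import numpy as np

# Stand-in for one row of dict1[b'data']: uint8 pixels covering the full range
raw = np.arange(0, 256, dtype=np.uint8)

scaled = raw.astype(np.float64) / 255.0  # rescale to [0, 1] before the forward pass
centered = scaled - scaled.mean()        # optionally zero-center as well

print(scaled.min(), scaled.max())  # 0.0 1.0
```

With inputs on this scale, a learning rate around 0.001 is far less likely to blow up the loss.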
A bubble tea shop?
2019-01-20 17:01:07 +08:00
Replied to deadEgg's Q&A topic: My cat ran away; my life is over
Maybe it'll come back with a partner
2019-01-17 08:56:19 +08:00
Replied to zhoulouzi's Q&A topic: Our company's annual party is holding a 王者荣耀 tournament; looking for a team name and a cheeky intro.
You all go team-fight while I backdoor the towers
2019-01-17 08:54:33 +08:00
Replied to joe0's Q&A topic: Tomorrow I'm having dinner with a girl I just met; any tips on what to talk about?
The questions on V2EX are getting way beyond the syllabus lately
2019-01-15 19:28:56 +08:00
Replied to Mike's Beijing topic: With the year's end approaching, are you anxious?
Either way, life goes on.