Learning TensorFlow: Deconvolution

  In a deep network, the layers fall into a few familiar categories: convolution layers, fully connected layers, ReLU layers, pooling layers, deconvolution layers, and so on. Fully convolutional networks have shown their strength on pixel-level estimation and end-to-end learning problems, and one layer in them is essential: the deconvolution layer, which upsamples the convolved feature maps back to the spatial size of the input image. So how is it implemented in TensorFlow? That is what this post covers.

1. The deconvolution function

tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', name=None)

This is the function TensorFlow provides for deconvolution. value is the feature map from the previous layer; filter is the convolution kernel, with shape [kernel_size, kernel_size, output_channels, input_channels]; output_shape defines the size of the output as [batch_size, height, width, channels]; and padding selects the border-padding scheme ('SAME' or 'VALID').

One point deserves special attention: output_shape and the strides argument are coupled. Given the input size and the desired output size you can determine the strides (positive integers), or, given the input size and the strides, you can determine the output size.

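As a minimal sketch of that coupling (the shapes and variable names below are made up for illustration, assuming the TensorFlow 1.x-era API used in this post): an 8x8, 16-channel feature map is upsampled by a factor of 2 by choosing strides of 2 and an output_shape whose height and width are twice the input's.

import tensorflow as tf

batch_size = 1
# dummy 8x8 feature map with 16 channels
x = tf.placeholder(tf.float32, [batch_size, 8, 8, 16])
# kernel shape: [kernel_size, kernel_size, output_channels, input_channels]
w = tf.Variable(tf.truncated_normal([3, 3, 3, 16], stddev=0.1))
# with padding='SAME', output height/width = input height/width * stride
# is the usual consistent choice for output_shape
y = tf.nn.conv2d_transpose(x, w,
                           output_shape=[batch_size, 16, 16, 3],
                           strides=[1, 2, 2, 1],
                           padding='SAME')
print(y.get_shape().as_list())  # [1, 16, 16, 3]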
2. Adding deconvolution layers to AlexNet

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Timing benchmark for AlexNet inference.

To run, use:
  bazel run -c opt --config=cuda \
      third_party/tensorflow/models/image/alexnet:alexnet_benchmark

Across 100 steps on batch size = 128.

Forward pass:
Run on Tesla K40c: 145 +/- 1.5 ms / batch
Run on Titan X:     70 +/- 0.1 ms / batch

Forward-backward pass:
Run on Tesla K40c: 480 +/- 48 ms / batch
Run on Titan X:    244 +/- 30 ms / batch
"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from datetime import datetime
import math
import time

from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_integer('batch_size', 1,
                            """Batch size.""")
tf.app.flags.DEFINE_integer('num_batches', 100,
                            """Number of batches to run.""")
tf.app.flags.DEFINE_integer('image_width', 345,
                            """image width.""")
tf.app.flags.DEFINE_integer('image_height', 460,
                            """image height.""")


def print_activations(t):
  print(t.op.name, ' ', t.get_shape().as_list())


def inference(images):
  """Build the AlexNet model plus two deconvolution layers.

  Args:
    images: Images Tensor

  Returns:
    deconv2: the output of the final deconvolution layer.
    parameters: a list of Tensors corresponding to the weights and biases of the
        AlexNet model.
  """
  parameters = []
  # conv1
  with tf.name_scope('conv1') as scope:
    kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
    print_activations(conv1)
    parameters += [kernel, biases]

  # lrn1
  # TODO(shlens, jiayq): Add a GPU version of local response normalization.

  # pool1
  pool1 = tf.nn.max_pool(conv1,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool1')
  print_activations(pool1)

  # conv2
  with tf.name_scope('conv2') as scope:
    kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv2 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
  print_activations(conv2)

  # pool2
  pool2 = tf.nn.max_pool(conv2,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool2')
  print_activations(pool2)

  # conv3
  with tf.name_scope('conv3') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384],
                                             dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv3 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv3)

  # conv4
  with tf.name_scope('conv4') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256],
                                             dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv4 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv4)

  # conv5
  with tf.name_scope('conv5') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256],
                                             dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv5)

  # pool5
  pool5 = tf.nn.max_pool(conv5,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool5')
  print_activations(pool5)

  # conv6: reduce pool5 to a single-channel map for the deconvolution layers
  with tf.name_scope('conv6') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 1],
                                             dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool5, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[1], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv6 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv6)

  # deconv1: upsample conv6 by a factor of 10 to 130x100
  with tf.name_scope('deconv1') as scope:
    wt = tf.Variable(tf.truncated_normal([11, 11, 1, 1]))
    deconv1 = tf.nn.conv2d_transpose(conv6, wt,
                                     [FLAGS.batch_size, 130, 100, 1],
                                     [1, 10, 10, 1], 'SAME')
    print_activations(deconv1)

  # deconv2: upsample again by a factor of 2 to 260x200
  with tf.name_scope('deconv2') as scope:
    wt = tf.Variable(tf.truncated_normal([11, 11, 1, 1]))
    deconv2 = tf.nn.conv2d_transpose(deconv1, wt,
                                     [FLAGS.batch_size, 260, 200, 1],
                                     [1, 2, 2, 1], 'SAME')
    print_activations(deconv2)

  return deconv2, parameters


def time_tensorflow_run(session, target, info_string):
  """Run the computation to obtain the target tensor and print timing stats.

  Args:
    session: the TensorFlow session to run the computation under.
    target: the target Tensor that is passed to the session's run() function.
    info_string: a string summarizing this run, to be printed with the stats.

  Returns:
    None
  """
  num_steps_burn_in = 10
  total_duration = 0.0
  total_duration_squared = 0.0
  for i in xrange(FLAGS.num_batches + num_steps_burn_in):
    start_time = time.time()
    _ = session.run(target)
    duration = time.time() - start_time
    if i > num_steps_burn_in:
      if not i % 10:
        print ('%s: step %d, duration = %.3f' %
               (datetime.now(), i - num_steps_burn_in, duration))
      total_duration += duration
      total_duration_squared += duration * duration
  mn = total_duration / FLAGS.num_batches
  vr = total_duration_squared / FLAGS.num_batches - mn * mn
  sd = math.sqrt(vr)
  print ('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
         (datetime.now(), info_string, FLAGS.num_batches, mn, sd))


def run_benchmark():
  """Run the benchmark on AlexNet."""
  with tf.Graph().as_default():
    # Generate some dummy images.
    # Note that our padding definition is slightly different from cuda-convnet.
    # In order to force the model to start with the same activations sizes,
    # we add 3 to the image_size and employ VALID padding above.
    images = tf.Variable(tf.random_normal([FLAGS.batch_size,
                                           460,
                                           345, 3],
                                          dtype=tf.float32,
                                          stddev=1e-1))

    # Build a Graph that computes the logits predictions from the
    # inference model.
    pool5, parameters = inference(images)

    # Build an initialization operation.
    init = tf.initialize_all_variables()

    # Start running operations on the Graph.
    config = tf.ConfigProto()
    config.gpu_options.allocator_type = 'BFC'
    sess = tf.Session(config=config)
    sess.run(init)

    # Run the forward benchmark.
    time_tensorflow_run(sess, pool5, "Forward")

    # Add a simple objective so we can calculate the backward pass.
    objective = tf.nn.l2_loss(pool5)
    # Compute the gradient with respect to all the parameters.
    grad = tf.gradients(objective, parameters)
    # Run the backward benchmark.
    time_tensorflow_run(sess, grad, "Forward-backward")


def main(_):
  run_benchmark()


if __name__ == '__main__':
  tf.app.run()
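To see how output_shape and strides line up here: with the 460x345 (height x width) dummy images built in run_benchmark(), conv1 at stride 4 and the three VALID 3x3, stride-2 poolings shrink the map to 13x10, so conv6 produces a 13x10 single-channel map. deconv1 with strides [1, 10, 10, 1] and padding 'SAME' therefore upsamples it back to the 130x100 requested in output_shape, and deconv2 with stride 2 doubles that again to 260x200.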

3. Results



References:

https://www.tensorflow.org/versions/r0.9/api_docs/python/nn.html#convolution

http://cvlab.postech.ac.kr/research/deconvnet/
