TensorFlow - Evaluating a tensor partially


Suppose I have a convolution operation such as this:

y = tf.nn.conv2d( ... ) 

TensorFlow allows me to evaluate only part of a tensor, e.g.:

print(sess.run(y[0])) 

When I evaluate the tensor partially as above, which of the following is correct?

  1. TF runs the whole operation, i.e. it computes y completely, and then returns the value of y[0].
  2. TF runs only the operations necessary to compute y[0].
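
As far as I understand, the slice y[0] is itself just another node in the graph that takes the full tensor y as an input, so the question is whether the runtime still has to materialize all of y to feed that node. A quick way to see what y[0] is in the graph (a minimal sketch, assuming the same TF 1.x graph/Session API used above):

import tensorflow as tf

y = tf.random_uniform([10])
sliced = y[0]

# Slicing does not index into a value; it adds a new op to the graph.
print(sliced.op.type)                          # 'StridedSlice'
print([inp.name for inp in sliced.op.inputs])  # the full tensor y is among the inputs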

I set up a small sample program:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'  # forcing it to run on the CPU
import tensorflow as tf

def full_array(sess, arr):
    sess.run(arr)[0]

def partial_array(sess, arr):
    sess.run(arr[0])

sess = tf.Session()
arr = tf.random_uniform([100])
arr = arr + tf.random_uniform([100])

These are the results:

%timeit partial_array(sess, arr)
100 loops, best of 3: 15.8 ms per loop

%timeit full_array(sess, arr)
10000 loops, best of 3: 85.9 µs per loop

The timings seem to show that the partial run is slower than the full run (which is confusing me, to be honest...).

With these timings, I'd exclude alternative 1), since I would expect the timing to be approximately the same for the two functions if it were true.

Given the simplified test code, I'd lean towards the idea that the logic that figures out which part of the graph needs to run to satisfy the tensor slice is the cause of the performance difference, but I don't have proof of that.
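
In principle I could try to collect evidence with run tracing (a sketch, assuming TF 1.x's tf.RunOptions / tf.RunMetadata API; I haven't analyzed its output in depth), which should list the nodes that actually executed when only the slice is fetched:

import tensorflow as tf

sess = tf.Session()
arr = tf.random_uniform([100]) + tf.random_uniform([100])

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

# Fetch only the slice, but record which nodes the runtime executed.
sess.run(arr[0], options=run_options, run_metadata=run_metadata)

for dev_stats in run_metadata.step_stats.dev_stats:
    for node_stats in dev_stats.node_stats:
        print(dev_stats.device, node_stats.node_name)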

Update:

I ran a similar test with a convolution op instead of the addition (which I thought might be an excessively simple example):

def full_array(sess, arr):
    return sess.run(arr)[0]

def partial_array(sess, arr):
    return sess.run(arr[0])

sess = tf.Session()
arr = tf.random_uniform([1, 100, 100, 3])
conv = tf.nn.conv2d(arr, tf.constant(1/9, shape=[3, 3, 3, 6]), [1, 1, 1, 1], 'SAME')

The results, however, are consistent with the previous ones:

%timeit full_array(sess, conv)
1000 loops, best of 3: 949 µs per loop

%timeit partial_array(sess, conv)
100 loops, best of 3: 20 ms per loop
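
One more variation that might be worth timing (a sketch under the same TF 1.x setup; partial_array_prebuilt is a hypothetical helper): since sess.run(arr[0]) creates a new StridedSlice node on every call, building the slice tensor once outside the timed function would separate graph-construction overhead from the actual execution cost, which may or may not account for part of the difference.

def partial_array_prebuilt(sess, sliced):
    # the slice tensor is created once, outside the timed call
    return sess.run(sliced)

sliced = conv[0]  # build the StridedSlice node a single time
# %timeit partial_array_prebuilt(sess, sliced)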
