deep learning - Are shared layers handled efficiently in CNTK?


Are shared layers handled efficiently in CNTK? (i.e., is the activation computation not duplicated?)

For example, suppose we have the following expression:

import cntk
from cntk.layers import *

def create_func(shared_layers, out1_layers, out2_layers):
    # ... a block specifying activations and init ... omitted for brevity
    shared_hl_func = For(shared_layers, lambda n: Dense(n), name="shared hidden layers")
    out1_hl_func = For(out1_layers, lambda n: Dense(n), name="out1 hidden layers")
    out2_hl_func = For(out2_layers, lambda n: Dense(n), name="sigma hidden layers")

    output1_func = Sequential([shared_hl_func, out1_hl_func,
                               Dense(1, activation=None, init=init, name="out1_regression_layer")], name="out1")
    output2_func = Sequential([shared_hl_func, out2_hl_func,
                               Dense(1, activation=None, init=init, name="out2_regression_layer")], name="out2")
    return output1_func, output2_func

output1, output2 = create_func([50, 25], [25, 10], [25, 10])
my_input = cntk.input_variable((70,))
dual_model = cntk.combine(output1(my_input), output2(my_input))

When evaluating dual_model, is the computation done efficiently? (i.e., are the first two wider dense layers computed once and then shared?) If that is not the case, would constructing the model through explicit function composition improve efficiency?

In the code above, shared_hl_func is evaluated independently in output1_func and output2_func, even though its parameters are shared. To check the computation graph, please use plot to visualize it.
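For instance, plotting the original dual_model (a minimal sketch, assuming the create_func from the question; the filename 'dual_unshared.pdf' is arbitrary) makes the duplicated subgraph visible:

# Render the computation graph of the question's model; the shared
# dense layers appear twice, once under each output branch.
cntk.logging.graph.plot(dual_model, 'dual_unshared.pdf')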

To achieve computation sharing, you need to pass the shared_hl_func output variable into output1_func and output2_func:

import cntk
from cntk.layers import *

def create_func(shared_layers, out1_layers, out2_layers):
    shared_hl_func = For(shared_layers, lambda n: Dense(n), name="shared hidden layers")
    out1_hl_func = For(out1_layers, lambda n: Dense(n), name="out1 hidden layers")
    out2_hl_func = For(out2_layers, lambda n: Dense(n), name="sigma hidden layers")
    out1_regr_func = Dense(1, activation=None, name="out1_regression_layer")
    out2_regr_func = Dense(1, activation=None, name="out2_regression_layer")

    @cntk.Function
    def _func(x):
        # ... a block specifying activations ... omitted for brevity
        shared_hl = shared_hl_func(x)
        output1 = Sequential([out1_hl_func, out1_regr_func], name="out1")(shared_hl)
        output2 = Sequential([out2_hl_func, out2_regr_func], name="out2")(shared_hl)
        return cntk.combine(output1, output2)
    return _func

output = create_func([50, 25], [25, 10], [25, 10])
my_input = cntk.input_variable((70,))
dual_model = output(my_input)
# use plot to visualize the model
cntk.logging.graph.plot(dual_model, 'dual.pdf')
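As a quick smoke test (a sketch, assuming NumPy is available), evaluating the combined model runs one forward pass through the shared layers and returns both outputs:

import numpy as np
# one batch of 70-dimensional input; eval on a multi-output function
# returns one array per output of the combined model
x = np.random.rand(1, 70).astype(np.float32)
results = dual_model.eval({my_input: x})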
