python - ValueError: Shapes (69, 44, 2) and (69, 2, 2) are incompatible
Using TensorFlow version 1.3.0 with Python 3.5.2, I'm trying to build a convolutional neural network that takes in data with 69 rows and 44 independent values and performs binary classification (that is, either jump or fall), but I keep getting this error:

    ValueError: Shapes (69, 44, 2) and (69, 2, 2) are incompatible
I'm not sure what in the code changes the size from 44 columns down to 2. The code for the network comes from the TensorFlow tutorial on the MNIST database, which I've modified to fit my data/problem. The main differences are that I'm using conv1d() instead of conv2d(), and I'm not using pooling layers. The code follows:
    def cnn_model_fn(features, labels, mode):
        """Model function for the CNN."""
        # Input layer.
        # -1 is filled in with 69 later on; 44 is the number of independent
        # vars, and 1 must be used as the channel size for tf.layers.conv1d().
        input_layer = tf.reshape(features["x"], [-1, 44, 1])

        # Convolutional layer #1
        conv1 = tf.layers.conv1d(
            inputs=input_layer,
            filters=32,
            kernel_size=1,
            padding="same",
            activation=tf.nn.relu,
            name="alan")

        # Convolutional layer #2
        conv2 = tf.layers.conv1d(
            inputs=conv1,
            filters=64,
            kernel_size=1,
            padding="same",
            activation=tf.nn.relu,
            name="borris")

        # Dense layer
        dense = tf.layers.dense(inputs=conv2, units=1024, activation=tf.nn.relu)
        dropout = tf.layers.dropout(
            inputs=dense,
            rate=0.4,
            training=mode == tf.estimator.ModeKeys.TRAIN)

        # Logits layer
        logits = tf.layers.dense(inputs=dropout, units=2)
        print(logits)

        predictions = {
            # Generate predictions (for PREDICT and EVAL mode)
            "classes": tf.argmax(input=logits, axis=1),
            # Add `softmax_tensor` to the graph. It is used for PREDICT and by
            # the `logging_hook`.
            "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
        }

        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

        # Calculate loss (for both TRAIN and EVAL modes)
        onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=2)
        loss = tf.losses.softmax_cross_entropy(
            onehot_labels=onehot_labels, logits=logits)

        # Configure the training op (for TRAIN mode)
        if mode == tf.estimator.ModeKeys.TRAIN:
            optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
            train_op = optimizer.minimize(
                loss=loss,
                global_step=tf.train.get_global_step())
            return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

        # Add evaluation metrics (for EVAL mode)
        eval_metric_ops = {
            "accuracy": tf.metrics.accuracy(
                labels=labels, predictions=predictions["classes"])}
        return tf.estimator.EstimatorSpec(
            mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
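For reference, here is a stripped-down sketch of just the shape flow (my own diagnostic, with activations and layer names omitted); it shows that nothing in the network above reduces the middle dimension of 44:

    import tensorflow as tf

    # Stand-in placeholder for one batch of 69 rows with 44 values each.
    x = tf.placeholder(tf.float32, [69, 44])
    input_layer = tf.reshape(x, [-1, 44, 1])
    conv1 = tf.layers.conv1d(input_layer, filters=32, kernel_size=1, padding="same")
    conv2 = tf.layers.conv1d(conv1, filters=64, kernel_size=1, padding="same")
    dense = tf.layers.dense(conv2, units=1024)
    logits = tf.layers.dense(dense, units=2)

    print(conv2.shape)   # (69, 44, 64)   -- padding="same" keeps the length at 44
    print(dense.shape)   # (69, 44, 1024) -- dense transforms only the last axis
    print(logits.shape)  # (69, 44, 2)    -- the first shape in the error message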
Here is where the problem occurs, when I run the eval_input_fn:
    jumpupdown_classifier = tf.estimator.Estimator(
        model_fn=cnn_model_fn, model_dir="/tmp/jumpupdown_convnet_model")

    tensors_to_log = {"probabilities": "softmax_tensor"}
    logging_hook = tf.train.LoggingTensorHook(
        tensors=tensors_to_log, every_n_iter=50)

    # Train the model
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": mtraindat},
        y=mtrainlab,
        batch_size=69,
        num_epochs=None,
        shuffle=True)
    jumpupdown_classifier.train(
        input_fn=train_input_fn,
        steps=20000,
        hooks=[logging_hook])

    # Evaluate the model and print results
    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": mtestdat},
        y=mtestlab,
        num_epochs=1,
        shuffle=False)
    eval_results = jumpupdown_classifier.evaluate(input_fn=eval_input_fn)
    print(eval_results)
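One thing I noticed while poking at this: the (69, 2, 2) half of the error may come from the one_hot call in cnn_model_fn, since my labels are already one-hot encoded. A minimal check of that guess (using a zero array as a stand-in for mtrainlab):

    import numpy as np
    import tensorflow as tf

    # tf.one_hot applied to labels that are already one-hot (shape (69, 2))
    # adds another axis of size `depth`, producing shape (69, 2, 2).
    labels = np.zeros((69, 2), dtype=np.float32)  # stand-in for mtrainlab
    onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=2)
    print(onehot_labels.shape)  # (69, 2, 2)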
To clarify a bit more, mtraindat looks like the following:
    array([[ 0.23971076,  0.24687247, -0.22665831, ...,  0.73542702, -0.57439029, -0.22714323],
           [ 0.32290912,  0.61977148, -0.4258523 , ...,  0.73542702,  0.74438596, -0.57915074],
           [ 0.41255337,  0.2795991 , -0.56257713, ..., -0.30046052,  0.74438596,  0.74211949],
           ...,
           [ 0.56027043,  1.63016009,  1.36453617, ...,  0.57338732,  0.74438596,  0.56973094],
           [-0.1470048 ,  1.23301661,  0.63612264, ...,  0.73542702,  0.58223492,  0.74211949],
           [-0.46544313,  1.02278185,  0.30256116, ..., -1.38631892,  0.74438596,  0.57966185]], dtype=float32)
where mtrainlab looks like this:
    array([[ 1.,  0.],
           [ 1.,  0.],
           [ 0.,  1.],
           [ 1.,  0.],
           [ 1.,  0.],
           ...,
           [ 0.,  1.],
           [ 0.,  1.],
           [ 1.,  0.]], dtype=float32)
And mtestdat and mtestlab are similar, but contain only 6 rows.
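So, to spell out the shapes implied above, the four arrays should look like this:

    print(mtraindat.shape)  # (69, 44) -- 69 rows, 44 independent values
    print(mtrainlab.shape)  # (69, 2)  -- one-hot labels for the two classes
    print(mtestdat.shape)   # (6, 44)
    print(mtestlab.shape)   # (6, 2)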
I've tried changing values around in an attempt to diagnose the problem, but have been unsuccessful. Any help would be appreciated.