python - How to train a PyTorch model with numpy data and batch size?


I am learning the basics of PyTorch and thought I would create a simple 4-layer neural network with dropout to train on the Iris dataset for classification. After referring to many tutorials, I wrote this code.

import pandas as pd
from sklearn.datasets import load_iris
import torch
from torch.autograd import Variable

epochs = 300
batch_size = 20
lr = 0.01

# loading data as numpy arrays
data = load_iris()
x = data.data
y = pd.get_dummies(data.target).values

# convert to tensors
x = Variable(torch.from_numpy(x), requires_grad=False)
y = Variable(torch.from_numpy(y), requires_grad=False)
print(x.size(), y.size())

# neural net model
model = torch.nn.Sequential(
    torch.nn.Linear(4, 10),
    torch.nn.ReLU(),
    torch.nn.Dropout(),
    torch.nn.Linear(10, 5),
    torch.nn.ReLU(),
    torch.nn.Dropout(),
    torch.nn.Linear(5, 3),
    torch.nn.Softmax()
)

print(model)

# loss and optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
loss_func = torch.nn.CrossEntropyLoss()

for i in range(epochs):
    # forward pass
    y_pred = model(x)

    # compute and print loss
    loss = loss_func(y_pred, y)
    print(i, loss.data[0])

    # before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (the learnable weights
    # of the model)
    optimizer.zero_grad()

    # backward pass
    loss.backward()

    # calling the step function on the optimizer makes an update to its parameters
    optimizer.step()

There are 2 problems I am facing.

  1. I want to set a batch size of 20. How should I do this?
  2. At the step y_pred = model(x) it is showing an error.

Error:

TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.DoubleTensor, torch.FloatTensor), but expected one of:
 * (torch.DoubleTensor mat1, torch.DoubleTensor mat2)
 * (torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
 * (float beta, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
 * (float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
 * (float beta, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
 * (float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
 * (float beta, float alpha, torch.DoubleTensor mat1, torch.DoubleTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.DoubleTensor, !torch.FloatTensor!)
 * (float beta, float alpha, torch.SparseDoubleTensor mat1, torch.DoubleTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, !torch.DoubleTensor!, !torch.FloatTensor!)

Probably the same issue: Pytorch: Convert FloatTensor into DoubleTensor

In short: the values you convert from numpy are stored in a DoubleTensor, while the model's weights are FloatTensors. You have to change one of them.
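A minimal sketch of that fix, assuming you cast the input side: torch.from_numpy keeps numpy's float64 and therefore yields a DoubleTensor, so calling .float() on it gives the FloatTensor the model's weights use. Note also that torch.nn.CrossEntropyLoss expects the targets as a LongTensor of class indices, so the one-hot encoding via get_dummies can be dropped:

import torch
from torch.autograd import Variable
from sklearn.datasets import load_iris

data = load_iris()

# from_numpy keeps numpy's float64 (DoubleTensor); .float() casts to FloatTensor
x = Variable(torch.from_numpy(data.data).float(), requires_grad=False)

# CrossEntropyLoss wants integer class indices, not one-hot rows
y = Variable(torch.from_numpy(data.target).long(), requires_grad=False)

For the batch size part of the question, which the answer above does not cover, one common approach (an assumption here, not part of the original answer) is to wrap the tensors in a torch.utils.data.TensorDataset and iterate over a DataLoader with batch_size=20, reusing the model, loss_func and optimizer defined in the question:

from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(torch.from_numpy(data.data).float(),
                        torch.from_numpy(data.target).long())
loader = DataLoader(dataset, batch_size=20, shuffle=True)

for epoch in range(epochs):
    for x_batch, y_batch in loader:
        y_pred = model(x_batch)            # forward pass on one mini-batch
        loss = loss_func(y_pred, y_batch)  # targets are class indices
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

On recent PyTorch versions the batches from the loader can be passed to the model directly; very old versions needed them wrapped in Variable first.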

