gpgpu - Monte Carlo with rates, system simulation with CUDA C++


I am trying to simulate a 1-D physical model called TASEP (totally asymmetric simple exclusion process).

I wrote code that simulates the system in C++, but I need a performance boost.

The model is simple (C++ code below): an array of 1's and 0's, where 1 represents a particle and 0 means no particle, i.e. an empty site. A particle moves one element to the right, at rate 1, if that element is empty. A particle at the last location disappears at rate beta (0.3). Finally, if the first location is empty, a particle appears there at rate alpha.

The single-threaded version is easy: pick an element at random and act with probability 1 / alpha / beta, as described above. But it can take a lot of time.

So I tried doing a similar thing with many threads, using the GPU, and that raised a lot of questions:

  1. Is using the GPU and CUDA a good idea for such a thing at all?

  2. How many threads should I have? I can have a thread for each site (10e+6), but should I?

  3. How do I synchronize memory access between different threads? So far I have used atomic operations.

  4. What is the right way to generate random data? If I use a million threads, is it OK to have a random generator for each?

  5. How do I take care of the rates?

I am new to CUDA. I managed to run code from the CUDA samples and tutorials. Although I have a CUDA version of the code above (it still gives strange results, though), I am not putting it here, because I think the questions are more general.

So here is the single-threaded C++ version of it:

#include <cstdio>

// helpers assumed to be defined elsewhere in the project:
unsigned long xorshf96();                  // Marsaglia xorshift PRNG
void initarray(bool *system);              // fills the array with 0's and 1's
void log(const bool *system, long long j); // periodic logging of the state

int tasep() {
    const int l = 750000;

    // rates, out of probabilitynormalizer (alpha = 0.33, beta = 0.3)
    int alpha = 330;
    int beta  = 300;
    int probabilitynormalizer = 1000;

    static bool system[l]; // static: 750 KB would risk overflowing the stack
    int pos = 0;
    initarray(system);     // init with 0's and 1's

    /* main loop; 10*l*l iterations overflow int, so use long long */
    for (long long j = 0; j < 10LL * l * l; j++)
    {
        unsigned long randomnumber = xorshf96();
        pos = (randomnumber % l); // pick a random location in the array

        if (pos == 0 && system[0] == 0) // first site, and it is empty
            system[0] = (alpha > (xorshf96() % probabilitynormalizer)); // insert a particle with chance alpha

        else if (pos == l - 1) // last site
            system[l - 1] = system[l - 1] && (beta < (xorshf96() % probabilitynormalizer)); // remove the particle, if any, with chance beta

        else if (system[pos] && !system[pos + 1]) // current site occupied and the next one empty - jump right
        {
            system[pos] = false;
            system[pos + 1] = true;
        }

        if ((j % 1000) == 0) // logging
            log(system, j);
    }

    getchar();
    return 0;
}

I would be grateful to anyone willing to give his/her advice.

I think your goal is to perform what is called a Monte Carlo simulation, but I have failed to understand your main objective (i.e. the frequency, the average power lost, etc.).

Question 01

Since you asked about random data: I believe you can have multiple random seeds (maybe one for each thread). My advice would be to generate the seeds on the GPU using a pseudo-random generator (you can use the same one as on the CPU), store the seeds in GPU global memory, and launch as many threads as you can, possibly using dynamic parallelism. So yes, CUDA is a suitable approach, but keep in mind the balance between the time it takes to learn and the time your current code needs to produce results. If you are going to use this knowledge in the future, learning CUDA may be worth it; or if you can scale the code across many GPUs, it is taking too much time on the CPU, and you need to solve this problem repeatedly, it is worth it too. Your case looks close, but if it is a simple one-time result, I would advise letting the CPU solve it, because, from my experience, learning CUDA will probably take you more time than the CPU would take to solve the problem (IMHO).
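For illustration, here is a minimal sketch of the one-RNG-state-per-thread idea using cuRAND's device API (the names initRng, useRng and states, and the sizes involved, are mine, not something from the original code):

#include <curand_kernel.h>

// One RNG state per thread, kept in global memory between kernel launches.
__global__ void initRng(curandState *states, unsigned long long seed, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n)
        // same seed, different sequence number per thread ->
        // statistically independent streams
        curand_init(seed, tid, 0, &states[tid]);
}

__global__ void useRng(curandState *states, float *out, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n) {
        curandState local = states[tid];   // copy the state into registers
        out[tid] = curand_uniform(&local); // uniform float in (0, 1]
        states[tid] = local;               // write the advanced state back
    }
}

The states array would be allocated once with cudaMalloc (one curandState per thread) and reused by every simulation kernel, so the seeding cost is paid only once.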

Question 02

The number of threads is the usual rookie question. The answer depends on the project, but taking your code as a starting point, I would launch as many as you can, giving every thread a different seed. My suggestion is to use registers for what you call "sites" (be aware of the strong limitations) and run multiple loops to evaluate each particle, in the same way a car tire works over a bad road (with the data in shared memory), so l is limited to around 255 sites per loop (avoid spills at all cost, and fewer registers means more warps per block). To create the perturbation, load vectors into shared memory: one for alpha (short), one for beta (short) (I assume they have different distributions), one for "is there a particle at the next site or not" (char), and combine them with a pseudo-random generator sourced from threadIdx, blockIdx and the current time (to pick the initial alpha, beta and occupancy). You can reuse the rates for every thread in the block, and since that data does not change (only the reading position changes), you only have to sync once after reading; each thread can then randomly pick a perturbation position and reuse the data. The initial values can be loaded from global memory and "refreshed" after a specific number of loops to hide the loading latency. In short: reuse the same data in shared memory many times, while the values selected by every thread change at each iteration because of the pseudo-random value. Considering that you are dealing with large numbers and that you can load different data into every block, a pseudo-random algorithm should be enough. You can also use results stored on the GPU from previous runs as a random source, flip one variable and apply bit operations, and you can even use every bit as a particle. A sketch of the shared-memory idea follows below.
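To make this concrete, here is a rough sketch under my own simplifying assumptions: each thread owns a disjoint chunk of SITES_PER_THREAD sites inside the block's tile, so no two threads ever touch the same site, and hops across chunk or block boundaries (as well as the alpha/beta injection and removal at the lattice ends) are left out. The kernel name, SITES_PER_THREAD and the launch configuration are mine; states is the curandState array from the previous sketch.

#include <curand_kernel.h>

#define SITES_PER_THREAD 8   // sites owned by one thread inside the tile

// Each block copies a tile of the lattice into shared memory, every thread
// repeatedly tries random moves inside its own chunk, then the tile is
// written back. Launch with
//   tasepTileKernel<<<blocks, threads, threads * SITES_PER_THREAD * sizeof(bool)>>>(...);
__global__ void tasepTileKernel(bool *lattice, curandState *states,
                                int l, int movesPerLaunch)
{
    extern __shared__ bool tile[];   // blockDim.x * SITES_PER_THREAD sites

    int tid      = blockIdx.x * blockDim.x + threadIdx.x;
    int tileSize = blockDim.x * SITES_PER_THREAD;
    int tileBase = blockIdx.x * tileSize;
    int chunk    = threadIdx.x * SITES_PER_THREAD; // first site owned by this thread
    curandState rng = states[tid];

    // cooperative load: global -> shared
    for (int i = threadIdx.x; i < tileSize; i += blockDim.x)
        tile[i] = (tileBase + i < l) ? lattice[tileBase + i] : false;
    __syncthreads();

    // reuse the shared data many times; only the random position changes
    for (int m = 0; m < movesPerLaunch; m++) {
        // random site inside this thread's chunk; the last site is excluded
        // so that pos + 1 also stays inside the chunk
        int pos = chunk + curand(&rng) % (SITES_PER_THREAD - 1);
        if (tile[pos] && !tile[pos + 1]) {   // occupied site, empty neighbour
            tile[pos]     = false;
            tile[pos + 1] = true;            // hop to the right
        }
    }
    __syncthreads();

    // write back: shared -> global, and save the advanced RNG state
    for (int i = threadIdx.x; i < tileSize; i += blockDim.x)
        if (tileBase + i < l)
            lattice[tileBase + i] = tile[i];
    states[tid] = rng;
}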

Question 03

For this specific project I recommend avoiding thread cooperation and making the threads independent. But if you do need it, you can use shuffle instructions inside the same warp at no high cost.
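As a small example of warp-level cooperation without shared memory or atomics, here is a lane-0 sum via __shfl_down_sync (the helper name warpSum is mine; it could be used, say, to count the particles handled by one warp):

// Sums a per-thread value across the 32 lanes of a warp; after the loop,
// lane 0 holds the total. No shared memory and no atomics are involved.
__device__ __forceinline__ int warpSum(int val)
{
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}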

Question 04

It is hard to generate truly random data; what you should worry about is how long the period is (every pseudo-random generator has a period and then repeats). I suggest using a single generator that can work in parallel with your kernel, and using it to feed the kernels (you can use dynamic parallelism). In your case, since you just want something random-looking, you should not worry a lot about consistency. I gave an example of pseudo-random data use in the previous question, which may assist. Keep in mind that there is no real random number generator, only alternatives, e.g. bits gathered from the internet.
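One way to read the "single generator feeding the kernels" advice is cuRAND's host API: fill a large device buffer with random numbers once per launch and let the simulation kernel simply index into it. A minimal sketch (the function name and d_buf are mine; in real code you would create the generator once and keep it alive rather than per call):

#include <curand.h>

// Fills a device buffer with n pseudo-random 32-bit integers.
void refillRandomBuffer(unsigned int *d_buf, size_t n, unsigned long long seed)
{
    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_XORWOW);
    curandSetPseudoRandomGeneratorSeed(gen, seed);
    curandGenerate(gen, d_buf, n);   // writes directly into device memory
    curandDestroyGenerator(gen);
}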

Question 05

This was already touched on in Question 03, but keep in mind that you do not need the full sequence of values; a part of it, used in multiple forms, is enough. That gives the kernel enough time to process while you refresh the sequence, and as long as you guarantee you do not feed a block the same sequence, it will be hard to fall into patterns.

I hope this has helped. I have been working with CUDA for a bit more than a year; I started just like you, and I still improve my code every week, because it is never enough. See how it fits your statistical challenge: clustering random things.

Good luck!

