Algorithm - Time complexity: single loop vs. multiple sequential loops
Today, my colleague and I had a small argument about a particular code snippet. The code looks like this (at least, this is how I imagined it):
for (int i = 0; i < n; i++) {
    // operations here
}
for (int i = 0; i < n; i++) {
    // more operations here
}
He wanted me to remove the second loop, since it would cause performance issues.
However, I was sure that since we don't have nested loops here, the complexity will be O(n), no matter how many sequential loops we put (we had only 2).
His argument was that if n is 1,000,000 and one loop takes 5 seconds, my code would take 10 seconds, since it has 2 loops. I was confused after that statement.
What I remember from my DSA lessons is that we ignore such constants while calculating Big Oh.
What am I missing here?
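To make the point about constants concrete, here is a minimal sketch (the counting helpers are hypothetical, added only for illustration) showing why two sequential loops stay linear while a nested loop does not:

```cpp
#include <cstddef>

// Two sequential loops over n: n + n = 2n basic steps, i.e. O(n).
// The factor of 2 is exactly the constant Big-O discards.
std::size_t sequentialLoopSteps(std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i) ++count; // operations here
    for (std::size_t i = 0; i < n; ++i) ++count; // more operations here
    return count;
}

// A nested loop, by contrast, does n * n steps: O(n^2).
std::size_t nestedLoopSteps(std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            ++count;
    return count;
}
```

Doubling n doubles `sequentialLoopSteps` but quadruples `nestedLoopSteps`; that growth rate, not the constant 2, is what the complexity class captures.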
Sidenote: In the actual code, the second loop runs m times, where m is much smaller than n. Even for a big value of n, m stays small.
My colleague did not notice this, and hence pointed out a performance issue.
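That sidenote situation can be sketched the same way (hypothetical helper, assuming m <= n): the total work is n + m, which is bounded by 2n and therefore still O(n):

```cpp
#include <cstddef>

// Total iterations for a loop of n followed by a loop of m.
// With m <= n, we have n + m <= 2n, so the whole thing is still O(n).
std::size_t twoLoopWork(std::size_t n, std::size_t m) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i) ++count; // first loop: n times
    for (std::size_t j = 0; j < m; ++j) ++count; // second loop: m times
    return count;
}
```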
Yes, complexity theory may compare two distinct methods of calculation in [TIME] and [SPACE], but do not use a [TIME]-complexity argument as evidence of poor efficiency.
Fact #1: O( f(n) ) is relevant for comparing complexities only in regions near n ~ infty, where the principal limits of the processes can be compared "there".
Fact #2: Given n ~ { 10k | 10M | 10G }, none of these cases meets the above-cited condition.
Fact #3: If the process (algorithm) allows the loops to be merged without side-effects (on resources, blocking, etc.) into a single pass, the single-loop processing may benefit from reduced looping overheads.
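A minimal sketch of such a merge (the arrays and bodies are hypothetical, assuming equal sizes and no cross-dependency between the two loop bodies):

```cpp
#include <cstddef>
#include <vector>

// Two sequential passes over the same index range.
void twoPasses(std::vector<int>& a, std::vector<int>& b) {
    for (std::size_t i = 0; i < a.size(); ++i) a[i] += 1; // operations here
    for (std::size_t i = 0; i < b.size(); ++i) b[i] *= 2; // more operations here
}

// The same work fused into a single pass (assumes a.size() == b.size()):
// fewer counter updates, comparisons, and branches per element.
void fusedPass(std::vector<int>& a, std::vector<int>& b) {
    for (std::size_t i = 0; i < a.size(); ++i) {
        a[i] += 1;
        b[i] *= 2;
    }
}
```

Whether the fusion is actually faster depends on the bodies and the memory-access pattern, which is precisely why a measurement, not the complexity class, should decide.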
A micro-benchmark will decide, not O( f(n) ) at n ~ infty, as many additional effects have a stronger influence: better or worse cache-line alignment, the number of possible L1/L2/L3-cache re-uses, and smart harnessing of more or fewer CPU registers, all of them driven by possible compiler optimisations, may further increase code-execution speeds for small n-s, beyond the expectations above.
So, perform several scaling-dependent micro-benchmarks before resorting to arguments about the limits of O( f(n) ). Always do so.
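A minimal scaling micro-benchmark sketch (the workload is hypothetical; a real harness such as Google Benchmark additionally handles warm-up, repetition, and statistics):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// The workload under test: two sequential passes summing a vector.
long long sumTwice(const std::vector<int>& v) {
    long long sum = 0;
    for (std::size_t i = 0; i < v.size(); ++i) sum += v[i];
    for (std::size_t i = 0; i < v.size(); ++i) sum += v[i];
    return sum;
}

// Time one call of the workload, in nanoseconds, on a monotonic clock.
long long timeOnceNs(const std::vector<int>& v, long long& sumOut) {
    auto t0 = std::chrono::steady_clock::now();
    sumOut = sumTwice(v);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
}

// Measure at several scales: small-n behaviour (caches, registers,
// compiler optimisations) need not match the asymptotic picture.
void runScalingBenchmark() {
    for (std::size_t n : {1000, 100000, 10000000}) {
        std::vector<int> v(n, 1);
        long long sum = 0;
        long long ns = timeOnceNs(v, sum);
        std::printf("n=%9zu  sum=%lld  elapsed=%lld ns\n", n, sum, ns);
    }
}
```

Compile with optimisations on (e.g. `-O2`), since the unoptimised timings tell you little about the code that will actually run in production.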