opencv - Key-point Detection and Image Stitching


Output of the code:

As shown in the image below, I have the key points detected on the images, but the output image after warpPerspective neglects the first image on the left side. I cannot figure out why!

    import numpy as np
    import imutils
    import cv2

    class Stitcher:
        def __init__(self):
            # determine if we are using OpenCV v3.x
            self.isv3 = imutils.is_cv3()

        def stitch(self, imageA, imageB, ratio=0.75, reprojThresh=10.0,
            showMatches=False):
            # detect keypoints and extract local invariant descriptors
            # from the two images
            (kpsA, featuresA) = self.detectAndDescribe(imageA)
            (kpsB, featuresB) = self.detectAndDescribe(imageB)

            # match features between the two images
            M = self.matchKeypoints(kpsA, kpsB,
                featuresA, featuresB, ratio, reprojThresh)

            # if the match is None, then there aren't enough matched
            # keypoints to create a panorama
            if M is None:
                return None

            # otherwise, apply a perspective warp to stitch the images together
            (matches, H, status) = M
            result = cv2.warpPerspective(imageA, H,
                (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
            result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB
            # cv2.imshow('intermediate', result)

            # check to see if the keypoint matches should be visualized
            if showMatches:
                vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches,
                    status)

                # return a tuple of the stitched image and the visualization
                return (result, vis)

            # return the stitched image
            return result

        def detectAndDescribe(self, image):
            # convert the image to grayscale
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

            # check to see if we are using OpenCV 3.x
            if self.isv3:
                # detect and extract features from the image (SIFT)
                descriptor = cv2.xfeatures2d.SIFT_create()
                # SURF alternative: 400 Hessian threshold, optimum values are around 300-500
                # descriptor = cv2.xfeatures2d.SURF_create()
                # upright SURF is faster and can be used for panorama stitching, i.e. our case
                # descriptor.upright = True
                print(descriptor.descriptorSize())
                (kps, features) = descriptor.detectAndCompute(image, None)
                print(len(kps), features.shape)

            # otherwise, we are using OpenCV 2.4.x
            else:
                # detect keypoints in the image
                detector = cv2.FeatureDetector_create("SIFT")
                kps = detector.detect(gray)

                # extract features from the image
                extractor = cv2.DescriptorExtractor_create("SIFT")
                (kps, features) = extractor.compute(gray, kps)

            # convert the keypoints from KeyPoint objects to numpy arrays
            kps = np.float32([kp.pt for kp in kps])

            # return a tuple of keypoints and features
            return (kps, features)

        def matchKeypoints(self, kpsA, kpsB, featuresA, featuresB,
            ratio, reprojThresh):
            # compute the raw matches and initialize the list of actual matches
            matcher = cv2.DescriptorMatcher_create("BruteForce")
            rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
            matches = []

            # loop over the raw matches
            for m in rawMatches:
                # ensure the distance is within a certain ratio of each
                # other (i.e. Lowe's ratio test)
                if len(m) == 2 and m[0].distance < m[1].distance * ratio:
                    matches.append((m[0].trainIdx, m[0].queryIdx))
            print(len(matches))

            # computing a homography requires at least 4 matches
            if len(matches) > 4:
                # construct the two sets of points
                ptsA = np.float32([kpsA[i] for (_, i) in matches])
                ptsB = np.float32([kpsB[i] for (i, _) in matches])

                # compute the homography between the two sets of points
                (H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC,
                    reprojThresh)

                # return the matches along with the homography matrix
                # and status of each matched point
                return (matches, H, status)

            # otherwise, no homography could be computed
            return None

        def drawMatches(self, imageA, imageB, kpsA, kpsB, matches, status):
            # initialize the output visualization image
            (hA, wA) = imageA.shape[:2]
            (hB, wB) = imageB.shape[:2]
            vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
            vis[0:hA, 0:wA] = imageA
            vis[0:hB, wA:] = imageB

            # loop over the matches
            for ((trainIdx, queryIdx), s) in zip(matches, status):
                # only process the match if the keypoint was successfully matched
                if s == 1:
                    # draw the match
                    ptA = (int(kpsA[queryIdx][0]), int(kpsA[queryIdx][1]))
                    ptB = (int(kpsB[trainIdx][0]) + wA, int(kpsB[trainIdx][1]))
                    cv2.line(vis, ptA, ptB, (0, 255, 0), 1)

            # return the visualization
            return vis

The above code is what I used for key point detection and stitching.
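For completeness, a minimal driver for the class above might look like this (my own sketch; the file names are placeholders, not from the original post). Note that in this Stitcher the first argument is warped with the homography and the second is pasted at the origin, so the right-hand image should be passed first:

    # hypothetical driver for the Stitcher class above; file names are placeholders
    import cv2

    rightImage = cv2.imread("right.jpg")
    leftImage = cv2.imread("left.jpg")

    # imageA (first argument) is warped with H, imageB (second) is pasted at the
    # top-left of the canvas, so the image on the right side of the scene goes first
    stitcher = Stitcher()
    ret = stitcher.stitch(rightImage, leftImage, showMatches=True)

    if ret is None:
        print("not enough matched keypoints to build a panorama")
    else:
        (result, vis) = ret
        cv2.imshow("Keypoint Matches", vis)
        cv2.imshow("Result", result)
        cv2.waitKey(0)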

One more question: can anyone help me with vertical image stitching, other than rotating the images and performing horizontal stitching?
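If the rotation workaround is acceptable, it can be sketched roughly as follows (my assumption, not code from the post; topImage and bottomImage are hypothetical names): rotate both inputs 90° clockwise so the vertical pair becomes a horizontal one, stitch, then rotate the panorama back.

    # sketch of the rotate -> stitch -> rotate-back workaround (assumed, not from the post)
    # after a 90-degree clockwise rotation the top image ends up on the right,
    # so it is passed first (it is the one that gets warped)
    rotTop = cv2.rotate(topImage, cv2.ROTATE_90_CLOCKWISE)
    rotBottom = cv2.rotate(bottomImage, cv2.ROTATE_90_CLOCKWISE)

    panorama = Stitcher().stitch(rotTop, rotBottom)
    if panorama is not None:
        # undo the rotation to get the vertical panorama back
        panorama = cv2.rotate(panorama, cv2.ROTATE_90_COUNTERCLOCKWISE)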

Thanks a lot!


I changed the code and used @Alexander's padtransf.warpPerspectivePadded function to perform the warping and blending. Can anyone help me with getting the lighting uniform in the output image?

I had this issue myself. If I am not mistaken, you are using this blog as a reference.

The issue with warpPerspective is in regards to this line:

    result = cv2.warpPerspective(imageA, H,
        (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
    result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

The problem is that this method is not omnidirectional: you are stitching imageA onto imageB by replacing pixel values based only on the width and height given by .shape[0] and .shape[1], so anything warped to the left of (or above) imageB is simply cut off. I solved this in C++ and therefore don't have Python code to show, but I can give you a run-down of what must be done (a rough Python sketch of the whole procedure follows the steps below).
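To see this concretely, you can push imageA's corners through the homography: if the first image lies to the left of imageB, the warped corners get negative x coordinates, and warpPerspective simply clips everything outside the canvas. A small diagnostic sketch, assuming the H and imageA from the code above:

    # diagnostic sketch: where do imageA's corners land after warping with H?
    import numpy as np
    import cv2

    (h, w) = imageA.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H)

    (min_x, min_y) = warped.reshape(-1, 2).min(axis=0)
    print(min_x, min_y)  # negative values are exactly the pixels that get cropped away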

  1. Get the 4 corners of each of the images you are using.
  2. Get the min and max corners for each image found in step 1.
  3. Create a Mat "Htr" to be used to map image one so that it ends up in line with the warped image two. Htr.at<double>(0,2) represents a location in the Mat's 3x3 matrix; in Python you will need NumPy for this.
    Mat Htr = Mat::eye(3, 3, CV_64F);
    if (min_x < 0) {
        max_x = image2.size().width - min_x;
        Htr.at<double>(0, 2) = -min_x;
    }
    if (min_y < 0) {
        max_y = image2.size().height - min_y;
        Htr.at<double>(1, 2) = -min_y;
    }
  4. Perform a perspective transform on the 4 corners of each image to see where they will end up in space.
    // image1dst/image2dst receive the transformed corners found in step 1
    vector<Point2f> image1dst, image2dst;
    perspectiveTransform(fourPointImage1, image1dst, Htr * homography);
    perspectiveTransform(fourPointImage2, image2dst, Htr);
  5. Get the min and max values from the image1dst 4 corners and image2dst 4 corners.
  6. Use the min and max of image1dst and image2dst to create a new blank image of the correct size to hold the final stitched images.
  7. Repeat the step 3 process, this time to determine the translation needed to adjust the 4 corners of each image so they are moved into the virtual space of the blank image.
  8. Finally, throw in the actual images with the homographies you have found/made.
    warpPerspective(image1, blankImage, (translation * homography), result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
    warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));

The end goal and result is to determine where the images will be warped, so you can make a blank image that holds the entirety of the stitched images and nothing is cropped out. Only once you have done all this pre-processing do you actually stitch the images together. I hope this helps, and if you have questions, just holler.
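Since the question is in Python, here is my rough NumPy/OpenCV translation of the steps above (a sketch under assumptions, not the answerer's code). H is the homography that maps imageA into imageB's frame, as returned by matchKeypoints in the question's code:

    # rough Python sketch of the padded-warp procedure described above (my translation)
    import numpy as np
    import cv2

    def warpBothPadded(imageA, imageB, H):
        (hA, wA) = imageA.shape[:2]
        (hB, wB) = imageB.shape[:2]

        # step 1: the four corners of each image
        cornersA = np.float32([[0, 0], [wA, 0], [wA, hA], [0, hA]]).reshape(-1, 1, 2)
        cornersB = np.float32([[0, 0], [wB, 0], [wB, hB], [0, hB]]).reshape(-1, 1, 2)

        # steps 2-5: see where the corners end up and take the overall min/max
        warpedA = cv2.perspectiveTransform(cornersA, H)
        allCorners = np.concatenate((warpedA, cornersB), axis=0).reshape(-1, 2)
        (min_x, min_y) = np.floor(allCorners.min(axis=0)).astype(int)
        (max_x, max_y) = np.ceil(allCorners.max(axis=0)).astype(int)

        # steps 3 and 7: translation that shifts everything into positive coordinates
        translation = np.array([[1, 0, -min_x],
                                [0, 1, -min_y],
                                [0, 0, 1]], dtype=np.float64)

        # step 6: a blank canvas large enough to hold both warped images
        size = (int(max_x - min_x), int(max_y - min_y))

        # step 8: warp both images into the canvas with the adjusted homographies
        warpedImageA = cv2.warpPerspective(imageA, translation.dot(H), size)
        warpedImageB = cv2.warpPerspective(imageB, translation, size)

        # naive composite: paste imageB wherever it has pixels (no blending yet)
        result = warpedImageA.copy()
        mask = warpedImageB.sum(axis=2) > 0
        result[mask] = warpedImageB[mask]
        return result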

