This article gives a detailed look at "Python cv2 ORB detectAndCompute returns 'Invalid number of channels in input image'", along with the related topic of getting the number of channels of an image in Python. It also collects practical information on "--> 8 _,ctrs,_=cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE) ValueError: not enough values to unpack (expected 3, got 2)", "AttributeError: module cv2.cv2 has no attribute createThinPlateSplineShapeTransformer", "AttributeError: module 'cv2.cv2' has no attribute 'createLBPHFaceRecognizer'", and "AttributeError: module 'cv2.cv2' has no attribute 'xfeatures2d' [OpenCV 3.4.3]".
Contents of this article:
- Python cv2 ORB detectAndCompute returns "Invalid number of channels in input image" (getting the number of channels of an image in Python)
- --> 8 _,ctrs,_=cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE) ValueError: not enough values to unpack (expected 3, got 2)
- AttributeError: module cv2.cv2 has no attribute createThinPlateSplineShapeTransformer
- AttributeError: module 'cv2.cv2' has no attribute 'createLBPHFaceRecognizer'
- AttributeError: module 'cv2.cv2' has no attribute 'xfeatures2d' [OpenCV 3.4.3]
Python cv2 ORB detectAndCompute returns "Invalid number of channels in input image" (getting the number of channels of an image in Python)
How to solve Python cv2 ORB detectAndCompute returning "Invalid number of channels in input image"
I am trying to extract and match features from two different images, but for some reason the detectAndCompute method does not work on my orb object:
orb = cv2.ORB_create()
kp, corners = orb.detectAndCompute(image, None)
I am passing in a single grayscale image (the return value of np.float32(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))). For some reason the program returns the following error:
Traceback (most recent call last):
  File "C:\Users\levxr\Desktop\Visual-positioning-bot-main\alloverlay.py", line 37, in <module>
    cv2.imshow("camera "+str(i), corn1.updateanddisplay())
  File "C:\Users\levxr\Desktop\Visual-positioning-bot-main\features.py", line 33, in updateanddisplay
    dst = self.update(image=self.image)
  File "C:\Users\levxr\Desktop\Visual-positioning-bot-main\features.py", line 23, in update
    kp, self.corners = orb.detectAndCompute(image, None)
cv2.error: OpenCV(4.4.0) c:\users\appveyor\appdata\local\temp\1\pip-req-build-95hbg2jt\opencv\modules\imgproc\src\color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function '__cdecl cv::impl::anonymous-namespace::CvtHelper<struct cv::impl::anonymous namespace::Set,struct cv::impl::A0x2980c61a::Set,2>::CvtHelper(const class cv::_InputArray &,const class cv::_OutputArray &,int)'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 1
The program is split across 3 files. alloverlay.py (the main file):
```
import sys
import cv2
import numpy as np
import features as corn
import camera as cali

cv2.ocl.setUseOpenCL(False)

#videoname = input("enter input")
videoname = "camera10001-0200.mkv"
try:
    videoname = int(videoname)
    cap = cv2.VideoCapture(videoname)
except:
    cap = cv2.VideoCapture(videoname)
videoname2 = "camera 20000-0200.mkv"
try:
    videoname = int(videoname)
    cap2 = cv2.VideoCapture(videoname)
except:
    cap2 = cv2.VideoCapture(videoname)

if cap.isOpened() and cap2.isOpened():
    ret1, image1 = cap.read()
    ret2, image2 = cap2.read()
    ret = [ret1, ret2]
    image = [np.float32(cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)), np.float32(cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY))]
    cali1 = cali.Calibrator()
    corn1 = corn.Corner_detector(image)
    while cap.isOpened() and cap2.isOpened():
        ret[0], image[0] = cap.read()
        ret[1], image[1] = cap2.read()
        if ret:
            backupimg = image
            for i, img in enumerate(image):
                if cali1.calibrated:
                    backupimg[i] = corn1.image = cali1.undistort(np.float32(cv2.cvtColor(image[i], cv2.COLOR_BGR2GRAY)), cali1.mtx, cali1.dist)
                else:
                    backupimg[i] = corn1.image = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
                cv2.imshow("camera "+str(i), corn1.updateanddisplay())
            image = backupimg
            print(ret, image)
            #cv2.imshow("test",image)
            key = cv2.waitKey(1)
            if key == ord("c"):
                cali1.calibrate(cali1.image)
            if cv2.waitKey(25) & 0xFF == ord("q"):
                break
        else:
            print("capture not reading")
            break
cap.release()
```
camera.py (a module for calibrating and undistorting the cameras and triangulating the relative position of points; a different part of the project, not relevant to this question):
```
import sys
import cv2
#import glob
import numpy as np

cv2.ocl.setUseOpenCL(False)

class Missing_calibration_data_error(Exception):
    def __init__():
        pass

class Calibrator():
    def __init__(self, image=None, mtx=None, dist=None, camera_data={"pixelsize": None, "matrixsize": None, "baseline": None, "lens_distance": None}, criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001), calibrated=False):
        self.criteria = criteria
        self.objpoints = []
        self.imgpoints = []
        self.objp = np.zeros((6*7, 3), np.float32)
        self.objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)
        self.image = image
        self.mtx = mtx
        self.dist = dist
        self.calibrated = calibrated
        self.pixelsize = camera_data["pixelsize"]
        self.matrixsize = camera_data["matrixsize"]
        self.baseline = camera_data["baseline"]
        self.lens_distance = camera_data["lens_distance"]

    def calibrate(self, image):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, (7, 6), None)
        if ret == True:
            self.objpoints.append(self.objp)
            corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), self.criteria)
            self.imgpoints.append(corners2)
            h, w = image.shape[:2]
            ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(self.objpoints, self.imgpoints, gray.shape[::-1], None, None)
            self.mtx = mtx
            self.dist = dist
            self.calibrated = True
            return mtx, dist

    def undistort(self, image, dist):
        if dist == None or mtx == None or image == None:
            raise Missing_calibration_data_error
        h, w = image.shape[:2]
        newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
        dst = cv2.undistort(image, mtx, dist, None, newcameramtx)
        x, y, w, h = roi
        dst = dst[y:y+h, x:x+w]
        return image

    def calculate_point_relative_position(self, point_location2d):
        angle = self.baseline/(point_location2d[left][x]-point_location2d[right][x])
        x = angle * (point_location2d[left][x]-self.matrixsize[0]/2)
        y = angle * (point_location2d[left][y]-self.matrixsize[1]/2)
        z = self.lens_distance * (1-angle/self.pixelsize)
        return [x, z]
```
and features.py (the module that detects and matches the features, apparently where the issue happens):
```
import sys
import cv2
import numpy as np

cv2.ocl.setUseOpenCL(False)

class UnkNown_algorythm_error(Exception):
    def __init__(self):
        pass

class No_image_passed_error(Exception):
    def __init__(self):
        pass

class Corner_detector():
    def __init__(self, image=None, detectortype="ORB", corners=[]):
        self.corners = corners
        self.image = image
        self.detectortype = detectortype

    def update(self, image=None):
        if self.detectortype == "Harris":
            self.corners = cv2.cornerHarris(image, 3, 1)
        elif self.detectortype == "Shi-Tomasi":
            self.corners = cv2.goodFeaturesToTrack(image, 1)
        elif self.detectortype == "ORB":
            orb = cv2.ORB_create()
            kp, self.corners = orb.detectAndCompute(image, None)
        elif self.detectortype == "SURF":
            minHessian = 400
            detector = cv2.features2d_SURF(hessianThreshold=minHessian)
            keypoints1, descriptors1 = detector.detectAndCompute(img1, None)
            keypoints2, descriptors2 = detector.detectAndCompute(img2, None)
        else:
            raise UnkNown_algorythm_error
        return self.corners

    def updateanddisplay(self):
        dst = self.update(image=self.image)
        self.image[dst > 0.01*dst.max()] = 0
        return self.image

class Feature_matcher():
    def __init__(self, matcher=cv2.DescriptorMatcher_create(cv2.DescriptorMatcher_FLANNBASED)):
        self.matcher = matcher
```
Does anyone know how to fix this? I've been looking for the answer for quite a while, but I only find answers for when you're converting the image to grayscale, and that doesn't work for me.
Solution
It is hard to follow, but I think I found the problem:

You are passing orb.detectAndCompute an image of type np.float32.

orb.detectAndCompute does not support images of type np.float32.
Reproducing the problem:

The following "simple test" reproduces the problem. The code sample passes a black (all-zeros) image to orb.detectAndCompute:
The following code passes with no exception (the image type is np.uint8):

# image type is uint8:
image = np.zeros((100, 100), np.uint8)
orb = cv2.ORB_create()
kp, corners = orb.detectAndCompute(image, None)
The following code raises the exception, because the image type is np.float32:

# image type is float32:
image = np.float32(np.zeros((100, 100), np.uint8))
orb = cv2.ORB_create()
kp, corners = orb.detectAndCompute(image, None)
The exception raised:

Invalid number of channels in input image:
Solution:

Try to avoid the np.float32 conversion if you can. You can also convert image to uint8 as follows:

kp, corners = orb.detectAndCompute(image.astype(np.uint8), None)
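Applied to the asker's pipeline, a minimal sketch of the fix might look like the following (assuming frames are read with cv2.VideoCapture as in alloverlay.py; the video filename is taken from the question). The grayscale frame stays uint8 for ORB, and a float32 copy is only made where a detector such as cornerHarris actually wants one:

```
import cv2
import numpy as np

cap = cv2.VideoCapture("camera10001-0200.mkv")
ret, frame = cap.read()
if ret:
    # keep the grayscale frame as uint8 for ORB
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create()
    kp, descriptors = orb.detectAndCompute(gray, None)  # uint8 input: no channel error

    # only cast to float32 for algorithms that expect it, e.g. cornerHarris
    harris = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)

cap.release()
```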
--> 8 _,ctrs,_=cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE) ValueError: not enough values to unpack (expected 3, got 2)
How to solve --> 8 _,ctrs,_=cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE) ValueError: not enough values to unpack (expected 3, got 2)
I am getting an error like this:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-38-c01841a65106> in <module>
1 #assign "-"=10
----> 2 data=load_images_from_folder("D:/Handwritten-Equation-Solver-master (1)/Handwritten-Equation-Solver-master/extracted_images/-")
3 len(data)
4 for i in range(0,len(data)):
5 data[i]=np.append(data[i],["10"])
<ipython-input-36-de2a2236b032> in load_images_from_folder(folder)
6 if img is not None:
7 _,thresh=cv2.threshold(img,127,255,cv2.THRESH_BINARY)
----> 8 _,ctrs,_=cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
9 cnt=sorted(ctrs,key=lambda ctr:cv2.boundingRect(ctr)[0])
10 w=int(28)
ValueError: not enough values to unpack (expected 3,got 2)
Please help me solve this problem.
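No answer is recorded in the source for this one, but the error usually comes from the installed OpenCV version: cv2.findContours returns three values (image, contours, hierarchy) in OpenCV 3.x, and only two (contours, hierarchy) in OpenCV 2.x and 4.x. A minimal, version-agnostic sketch of the thresholding step from the traceback (the function name and image path below are only placeholders, not the original code) could be:

```
import cv2

def contours_from_image(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    # OpenCV 3.x returns (image, contours, hierarchy); 2.x/4.x return (contours, hierarchy).
    # Taking the last two elements of the result works with either layout.
    ctrs, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2:]

    # sort contours left to right, as in the original load_images_from_folder
    return sorted(ctrs, key=lambda ctr: cv2.boundingRect(ctr)[0])
```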
AttributeError: module cv2.cv2 has no attribute createThinPlateSplineShapeTransformer error
Using opencv-python raises the error:

AttributeError: module 'cv2.cv2' has no attribute 'createThinPlateSplineShapeTransformer'
Solution:
pip install opencv-contrib-python
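A quick sanity check after installing the contrib build (a minimal sketch, not from the original report) is simply to create the transformer and confirm the attribute now resolves:

```
import cv2

print(cv2.__version__)
tps = cv2.createThinPlateSplineShapeTransformer()  # should no longer raise AttributeError
print(tps)
```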
AttributeError: module 'cv2.cv2' has no attribute 'createLBPHFaceRecognizer'
I am running into an attribute error when running my face recognition code. My face detection code works perfectly, but when I try to run the face recognition code it shows an attribute error. I googled it and tried to follow all the steps, but it still shows the same error. Here is my code:

Face recognition

I get the following error:
C:\Users\MAN\AppData\Local\Programs\Python\python36\python.exe C:/Users/MAN/PycharmProjects/facerecognition/Recognise/recognize1.py
Traceback (most recent call last):
  File "C:/Users/MAN/PycharmProjects/facerecognition/Recognise/recognize1.py", line 4, in <module>
    recognizer = cv2.createLBPHFaceRecognizer()
AttributeError: module 'cv2.cv2' has no attribute 'createLBPHFaceRecognizer'

Process finished with exit code 1
I am using the Windows platform, Python version 3.6. Thanks in advance.
You need to install opencv-contrib:

pip install opencv-contrib-python

It should work after that.
OpenCV has changed some functions and moved them to its opencv_contrib repo, so you have to call the method like this:

recognizer = cv2.face.createLBPHFaceRecognizer()

Note: you can see this question about the missing documentation. Try the help function help(cv2.face.createLBPHFaceRecognizer) for more details.
Use the following:

recognizer = cv2.face.LBPHFaceRecognizer_create()
after installing it with:

pip install opencv-contrib-python

If you are using Anaconda, then in the Anaconda prompt run:

conda install pip

and then:

pip install opencv-contrib-python
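Once the contrib package is installed, a minimal sketch of the newer API (the image files and labels below are hypothetical placeholders, not taken from the original post) would be:

```
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()

# hypothetical training data: grayscale face crops plus integer labels
faces = [cv2.imread("face_0.png", cv2.IMREAD_GRAYSCALE),
         cv2.imread("face_1.png", cv2.IMREAD_GRAYSCALE)]
labels = np.array([0, 1])

recognizer.train(faces, labels)

test = cv2.imread("unknown_face.png", cv2.IMREAD_GRAYSCALE)
label, confidence = recognizer.predict(test)  # lower confidence means a closer match
print(label, confidence)
```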
AttributeError: module 'cv2.cv2' has no attribute 'xfeatures2d' [OpenCV 3.4.3]
I have installed OpenCV 3.4.3 (using pip3 install opencv-python and pip3 install opencv-python-contrib).
When I run code containing this line: sift = cv2.xfeatures2d.SIFT_create()

I get this error:

AttributeError: module 'cv2.cv2' has no attribute 'xfeatures2d'

Is the xfeatures2d functionality no longer supported in OpenCV 3.4.3?
Answer 1

The error message you are getting is about the module xfeatures2d not existing. It is not directly related to the SIFT algorithm, nor to any particular algorithm in xfeatures2d (they would all raise that error). I suggest you either reinstall opencv-contrib-python (pip install opencv-contrib-python), or reinstall both opencv packages from another source repository using anaconda or an equivalent. The last option, if you want, is to compile the full OpenCV ("regular" + contrib) yourself.
Hope that helps.
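As a rough check after reinstalling (a sketch under the assumption that SIFT is the detector you want), you can confirm which module exposes it in your build; from OpenCV 4.4.0 onward SIFT also lives in the main module as cv2.SIFT_create, while 3.4.x contrib builds expose it through cv2.xfeatures2d:

```
import cv2

print(cv2.__version__)

if hasattr(cv2, "xfeatures2d"):
    sift = cv2.xfeatures2d.SIFT_create()  # contrib builds, e.g. OpenCV 3.4.x
else:
    sift = cv2.SIFT_create()              # OpenCV >= 4.4, SIFT is in the main module

img = cv2.imread("some_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
kp, des = sift.detectAndCompute(img, None)
print(len(kp))
```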
This concludes the introduction to "Python cv2 ORB detectAndCompute returns 'Invalid number of channels in input image'" and getting the number of channels of an image in Python; thank you for your patience in reading. If you would like to learn more about "--> 8 _,ctrs,_=cv2.findContours(thresh,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE) ValueError: not enough values to unpack (expected 3, got 2)", "AttributeError: module cv2.cv2 has no attribute createThinPlateSplineShapeTransformer", "AttributeError: module 'cv2.cv2' has no attribute 'createLBPHFaceRecognizer'", or "AttributeError: module 'cv2.cv2' has no attribute 'xfeatures2d' [OpenCV 3.4.3]", please search this site.