python Intel Realsense D435 multithreading resource-allocation problem (hangs, freezes)
When using Python threads to drive multiple Intel RealSense D435 cameras, I found that pyrealsense2 calls such as pipeline.start() and context.query_devices() consume a lot of resources within a single thread. Since all of Python's threads run inside one process (and contend for the GIL), a thread that hogs resources can starve the others and leave them stuck.
Solutions
- Use multiple processes instead of multiple threads
- Call the resource-hungry functions less often, e.g. by adding sleep time between calls
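A minimal sketch of the first option: one OS process per camera, so a blocking pyrealsense2 call (pipeline.start(), ctx.query_devices()) can only stall its own process. The worker body below is a placeholder; in the real script it would hold the rs.pipeline()/rs.config() capture loop.

```python
# One process per camera serial: heavy SDK calls block only their own process.
import multiprocessing as mp

# prefer fork where available so the worker needs no module re-import;
# fall back to spawn on platforms without fork
_ctx = mp.get_context('fork' if 'fork' in mp.get_all_start_methods() else 'spawn')


def cam_worker(serial, result_queue):
    # real script would do: ctx = rs.context(); pipeline = rs.pipeline(ctx);
    # config = rs.config(); config.enable_device(serial); pipeline.start(config); ...
    result_queue.put(serial)


def run_cameras(serials):
    q = _ctx.Queue()
    procs = [_ctx.Process(target=cam_worker, args=(s, q)) for s in serials]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sorted(q.get() for _ in serials)


if __name__ == '__main__':
    print(run_cameras(['838212073806', '836612070984']))
```

Since the article notes the cameras share no data, each process can do capture, filtering, and alarm sending entirely on its own; only the serial number needs to be passed in.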
Related problem: a strange Tensorflow_yolov3 + Intel RealSense D435 phenomenon where, with multiple cameras connected, the program freezes as soon as depth can be detected.
Related code
```python
# -*- coding: utf-8 -*-
"""
@File    : 20200408_避障代碼落地優化_防卡頓.py
@Time    : 2020/4/8 13:54
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
# todo With Python multithreading, once one camera drops offline the flood of
#  errors seems to consume a lot of resources and slows the other thread down.
#  Consider multiprocessing instead; no data sharing is needed, so each process
#  can simply do everything itself.
# Features: obstacle detection, sending an alarm signal
# Detail: only alarm after an obstacle is seen for more than x frames,
# not immediately, to avoid false detections

import socket
import struct
import sys
import threading
import time
import traceback

import cv2
import numpy as np
import pyrealsense2 as rs
from numba import jit

# Configuration
cam_serials = ['838212073806', '836612070984']
cam_num = len(cam_serials)
ctx = rs.context()
cam_width, cam_height = 848, 480  # [stream resolution]
threshold_dangerous_distance = 3000  # [dangerous distance, in mm]
distance_cam_vertical_to_cotton_top = 260  # [vertical distance from camera to cotton plane, in mm]
factor_compensation_dangerous_distance = 1.5  # [compensation factor] pushes the bottom rows away from the critical value to avoid false detections
threshold_dangerous_scale = 0.05  # [dangerous-pixel ratio threshold]
FOV_width, FOV_height = 69.4, 42.5  # [camera field of view, in degrees]

# [actual vertical FOV at the chosen aspect ratio]
if cam_height / cam_width < FOV_height / FOV_width:
    FOV_height_actual = FOV_width * cam_height / cam_width
    # print(FOV_height_actual)  # 39.283018867924525
else:
    FOV_height_actual = FOV_height

# [compute the filter alpha value (distance_min is the depth at the very bottom
# of the image, i.e. the nearest visible cotton)]
# With camera-to-cotton distance 800 and distance_min 2256:
#   dangerous distance 2000 -> alpha 0.88; dangerous distance 3000 -> alpha 1.32
# So the filter must check whether filter_alpha exceeds 1
# (already handled inside alpha_map())
distance_min = distance_cam_vertical_to_cotton_top / (np.tan(FOV_height_actual / 2 * np.pi / 180))
filter_alpha = threshold_dangerous_distance / distance_min * factor_compensation_dangerous_distance

# [UDP signalling module]
# remote host ip and port
ip_port = ('192.168.1.49', 9000)
udp_server_client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# bytes_udp_pack = 1024
bytes_udp_pack = 65507


# Send the alarm image to the host
def udp_send_image(img, pack_size, socket, ip_port):
    _, img_encode = cv2.imencode('.jpg', img)
    data = img_encode.tobytes()
    # print(len(data))  # many exceed 65535
    # [build the header] ('i' is a four-byte integer holding the payload length)
    fhead = struct.pack('i', len(data))
    # [send header, then data]
    socket.sendto(fhead, ip_port)
    # send pack_size bytes at a time; compute how many sends are needed
    send_times = len(data) // pack_size + 1
    for count in range(send_times):
        # time.sleep(0.01)
        if count < send_times - 1:
            socket.sendto(data[pack_size * count:pack_size * (count + 1)], ip_port)
        else:
            socket.sendto(data[pack_size * count:], ip_port)


# alpha mapping: push the background rows out of the obstacle-detection range
# @jit  # jit does not seem to work here, and showed no obvious speedup anyway
def alpha_map(depth_image, filter_alpha):
    if filter_alpha > 1:
        # image height and width
        h, w = depth_image.shape[0], depth_image.shape[1]  # e.g. 360, 640
        # build the upper (all ones) and lower (ramp up to filter_alpha) halves
        filter_upper = np.full(h // 2, 1)
        filter_lower = np.linspace(1, filter_alpha, h // 2)
        # concatenate the two halves into one per-row filter
        filter = np.r_[filter_upper, filter_lower]
        # print(filter.shape)  # (360,)
        return (depth_image.T * filter).T
    else:
        return depth_image


# To keep nearby cotton at the bottom from being falsely detected, a two-level
# for loop could apply a gradient filter; but then the image would still have
# to be split in half, dropping the bottom and judging only from the top
@jit
def traversing_pixels(depth_image, threshold_dangerous_distance):
    num_dangerous = 0
    num_all_pixels = 0
    depth_image_ravel = depth_image.ravel()
    # depth_image_segmentation is the segmented (red/blue) image
    depth_image_segmentation_ravel = []
    for pixel in depth_image_ravel:
        num_all_pixels += 1
        # the first test works better (pixel == 0 means a depth hole)
        if pixel < threshold_dangerous_distance and pixel != 0:
            # if pixel < threshold_dangerous_distance:
            num_dangerous += 1
            # 6000 -> blue, 0 -> red
            depth_image_segmentation_ravel.append(6000)
        else:
            depth_image_segmentation_ravel.append(0)
    depth_image_segmentation = np.array(depth_image_segmentation_ravel).reshape(depth_image.shape)
    return num_all_pixels, num_dangerous, depth_image_segmentation


# [class] per-camera frame-capture thread
class CamThread(threading.Thread):
    def __init__(self, cam_serial):
        threading.Thread.__init__(self)
        self.cam_serial = cam_serial

    def run(self):
        while True:
            try:
                print('camera {} thread started'.format(self.cam_serial))
                # configure the camera and start the stream
                # self.cam_cfg(self.cam_serial)  # does not work from a helper
                # (those would be local variables, visible only inside the helper)
                locals()['pipeline' + self.cam_serial] = rs.pipeline(ctx)
                locals()['config' + self.cam_serial] = rs.config()
                locals()['config' + self.cam_serial].enable_device(self.cam_serial)
                locals()['config' + self.cam_serial].enable_stream(
                    rs.stream.depth, cam_width, cam_height, rs.format.z16, 30)
                locals()['config' + self.cam_serial].enable_stream(
                    rs.stream.color, cam_width, cam_height, rs.format.bgr8, 30)
                locals()['pipeline' + self.cam_serial].start(locals()['config' + self.cam_serial])
                locals()['align' + self.cam_serial] = rs.align(rs.stream.color)
                # read streamed frames in a loop
                while True:
                    locals()['frames' + self.cam_serial] = \
                        locals()['pipeline' + self.cam_serial].wait_for_frames()
                    locals()['aligned_frames' + self.cam_serial] = \
                        locals()['align' + self.cam_serial].process(locals()['frames' + self.cam_serial])
                    locals()['aligned_depth_frame' + self.cam_serial] = \
                        locals()['aligned_frames' + self.cam_serial].get_depth_frame()
                    # locals()['color_frame' + self.cam_serial] = \
                    #     locals()['aligned_frames' + self.cam_serial].get_color_frame()
                    globals()['depth_image_raw' + self.cam_serial] = np.asanyarray(
                        locals()['aligned_depth_frame' + self.cam_serial].get_data())
                    # locals()['color_image' + self.cam_serial] = np.asanyarray(
                    #     locals()['color_frame' + self.cam_serial].get_data())
            except Exception:
                traceback.print_exc()
                # Dontla 20200326: after a camera drops, if pipeline.start() has
                # not yet succeeded, stop() raises "pipeline cannot be stopped
                # before it is started", hence the extra try
                try:
                    locals()['pipeline' + self.cam_serial].stop()
                except Exception:
                    traceback.print_exc()
                print('camera {} thread {} reconnecting'.format(self.cam_serial, self.name))
                # 20200408 Dontla: check that the camera is back before calling
                # pipeline.start(), otherwise it hangs
                count_try_times = 0
                break1 = False
                while True:
                    time.sleep(0.5)
                    try:
                        count_try_times += 1
                        print(count_try_times)
                        for dev in ctx.query_devices():
                            if self.cam_serial == dev.get_info(rs.camera_info.serial_number):
                                break1 = True
                                break
                    except Exception:
                        pass
                    finally:
                        if break1:
                            break


# [class] frame processing and display
class ImgProcess(threading.Thread):
    def __init__(self, cam_serial):
        threading.Thread.__init__(self)
        self.cam_serial = cam_serial

    # todo Dontla: find out exactly where the hang / resource drain happens
    def run(self):
        while True:
            try:
                if 'depth_image_raw{}'.format(self.cam_serial) not in globals():
                    continue
                # [alpha filtering]
                locals()['depth_image_alpha_filter{}'.format(self.cam_serial)] = alpha_map(
                    globals()['depth_image_raw{}'.format(self.cam_serial)], filter_alpha)
                # [walk the depth pixels; alarm if the share below the dangerous
                # distance exceeds the threshold]
                locals()['num_all_pixels{}'.format(self.cam_serial)], \
                locals()['num_dangerous{}'.format(self.cam_serial)], \
                locals()['depth_image_segmentation{}'.format(self.cam_serial)] = traversing_pixels(
                    locals()['depth_image_alpha_filter{}'.format(self.cam_serial)],
                    threshold_dangerous_distance)
                print('num_all_pixels:{}'.format(locals()['num_all_pixels{}'.format(self.cam_serial)]))
                print('num_dangerous:{}'.format(locals()['num_dangerous{}'.format(self.cam_serial)]))
                locals()['dangerous_scale{}'.format(self.cam_serial)] = \
                    locals()['num_dangerous{}'.format(self.cam_serial)] / \
                    locals()['num_all_pixels{}'.format(self.cam_serial)]
                print('dangerous scale: {}'.format(locals()['dangerous_scale{}'.format(self.cam_serial)]))
                locals()['depth_colormap{}'.format(self.cam_serial)] = cv2.applyColorMap(
                    cv2.convertScaleAbs(
                        locals()['depth_image_segmentation{}'.format(self.cam_serial)], alpha=0.0425),
                    cv2.COLORMAP_JET)
                # note: avoid Chinese characters in window names (they get garbled)
                cv2.imshow('win{}'.format(self.cam_serial),
                           locals()['depth_colormap{}'.format(self.cam_serial)])
                cv2.waitKey(1)
            except Exception:
                traceback.print_exc()


# [function] repeated camera-count verification
def cam_conti_veri(cam_num, ctx):
    # D·C 1911202: max_veri_times caps the attempts; continuous_stable_value is
    # how many consecutive matches count as "stable" after a device reset
    max_veri_times = 100
    continuous_stable_value = 5
    print('\nstarting verification, stable value: {}, max attempts: {}'.format(
        continuous_stable_value, max_veri_times))
    continuous_value = 0
    veri_times = 0
    while True:
        devices = ctx.query_devices()
        connected_cam_num = len(devices)
        print('cameras connected: {}'.format(connected_cam_num))
        if connected_cam_num == cam_num:
            continuous_value += 1
            if continuous_value == continuous_stable_value:
                break
        else:
            continuous_value = 0
        veri_times += 1
        if veri_times == max_veri_times:
            print('detection timed out, check the camera connections!')
            sys.exit()


# [function] hardware-reset every camera in turn
def cam_hardware_reset(ctx, cam_serials):
    # hardware_reset() probably needs a delay afterwards; without one it errors
    print('\nstarting camera initialisation:')
    for dev in ctx.query_devices():
        # cache the serial number so the inner loop does not keep querying the
        # device (not sure whether it re-queries every time, but to be safe)
        dev_serial = dev.get_info(rs.camera_info.serial_number)
        # match the serial and reset only the cameras we need (the nesting order
        # of the two loops matters: the wrong order re-queries a freshly reset
        # camera and raises an error)
        for serial in cam_serials:
            if serial == dev_serial:
                dev.hardware_reset()
                # oddly the print below does not raise even though dev was just
                # reset; perhaps right after a reset query_devices() still lists
                # the device but has not yet stored its address, which would
                # explain why len(ctx.query_devices()) works while reading the
                # serial number fails
                print('camera {} initialised'.format(dev.get_info(rs.camera_info.serial_number)))
    # with a single camera, sleep the full 5 seconds to be safe
    time.sleep(5 / len(cam_serials))


if __name__ == '__main__':
    # verify the camera count
    cam_conti_veri(cam_num, ctx)
    # hardware-reset the cameras
    cam_hardware_reset(ctx, cam_serials)
    # verify again
    cam_conti_veri(cam_num, ctx)
    # create the threads
    for serial in cam_serials:
        locals()['CamThread_{}'.format(serial)] = CamThread(serial)
        locals()['ImgProcess_{}'.format(serial)] = ImgProcess(serial)
    # start the threads
    for serial in cam_serials:
        locals()['CamThread_{}'.format(serial)].start()
        locals()['ImgProcess_{}'.format(serial)].start()
    # block the main thread
    for serial in cam_serials:
        locals()['CamThread_{}'.format(serial)].join()
    print('main thread exited')
```
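As an aside on the per-pixel Python loop in traversing_pixels(): an equivalent NumPy-vectorised version (a sketch with the same semantics, where pixels closer than the threshold but not depth holes count as dangerous) avoids both the per-pixel loop and the numba dependency:

```python
import numpy as np


def traversing_pixels_np(depth_image, threshold_dangerous_distance):
    # dangerous = closer than the threshold but not a depth hole (pixel == 0)
    dangerous = (depth_image < threshold_dangerous_distance) & (depth_image != 0)
    # 6000 -> blue, 0 -> red after the JET colormap, as in the original
    depth_image_segmentation = np.where(dangerous, 6000, 0)
    return depth_image.size, int(dangerous.sum()), depth_image_segmentation
```

The return values match the original (total pixel count, dangerous pixel count, segmented image), so it can be dropped in without touching ImgProcess.run().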