Intel Realsense D435 python (Python Wrapper) example00: NumPy Integration: converting depth frame data to a NumPy array for processing
NumPy Integration:
Librealsense frames support the buffer protocol. A numpy array can be constructed using this protocol with no data marshalling overhead:
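As a quick sketch of what buffer-protocol support means in practice, any buffer consumer, not just NumPy, can wrap the frame's memory without a copy. This snippet is illustrative only and assumes a depth frame obtained as in the full example further below:

import numpy as np

# Sketch only: assumes `depth` is a pyrealsense2 depth frame obtained from
# pipeline.wait_for_frames(), as in the full example below.
buf = depth.get_data()            # pyrealsense2.BufData, which exposes the buffer protocol
view = memoryview(buf)            # zero-copy Python view over the frame's memory
array = np.asanyarray(buf)        # zero-copy NumPy array over the same memory
print(view.nbytes, array.nbytes)  # equal: 640 * 480 * 2 bytes for a 16-bit VGA depth stream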
Converting depth frame data to a NumPy array for processing:
import numpy as np

depth_data = depth.as_frame().get_data()
"""
as_frame(self: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.frame
"""
# Does .as_frame() make any difference here? (its signature maps frame to frame)
"""
get_data(self: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.BufData
Retrieve data from the frame handle.
"""
print('type of depth_data:', type(depth_data))
# type of depth_data: <class 'pyrealsense2.pyrealsense2.BufData'>
print(depth_data)
# <pyrealsense2.pyrealsense2.BufData object at 0x0000024F5D07BA40>

np_image = np.asanyarray(depth_data)
print('type of np_image:', type(np_image))
# print('np_image:', np_image)
print('shape of np_image:', np_image.shape)
# type of np_image: <class 'numpy.ndarray'>
# (480, 640)

Applied to Intel Realsense D435 python (Python Wrapper) example00: streaming using rs.pipeline, the full program becomes:
# First import the library
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
"""
# Create a context object. This object owns the handles to all connected realsense devices
# Create the pipeline object
# The caller can provide a context created by the application, usually for playback or testing purposes.
"""
pipeline.start()
"""
start(*args, **kwargs)
Overloaded function.

1. start(self: pyrealsense2.pyrealsense2.pipeline, config: rs2::config) -> rs2::pipeline_profile
Start the pipeline streaming according to the configuration. The pipeline streaming loop captures samples
from the device and delivers them to the attached computer vision modules and processing blocks, according
to each module's requirements and threading model. During the loop execution, the application can access
the camera streams by calling wait_for_frames() or poll_for_frames(). The streaming loop runs until the
pipeline is stopped. Starting the pipeline is possible only when it is not started; if the pipeline was
already started, an exception is raised. The pipeline selects and activates the device upon start,
according to the configuration or a default configuration. When an rs2::config is provided to the method,
the pipeline tries to activate the config's resolve() result. If the application's requests conflict with
the pipeline's computer vision modules, or no matching device is available on the platform, the method
fails. Available configurations and devices may change between the config resolve() call and pipeline
start, in case devices are connected or disconnected, or another application acquires ownership of a device.

2. start(self: pyrealsense2.pyrealsense2.pipeline) -> rs2::pipeline_profile
Start the pipeline streaming with its default configuration. The streaming loop behaves as described in
overload 1, and an exception is likewise raised if the pipeline was already started.

3. start(self: pyrealsense2.pyrealsense2.pipeline, callback: Callable[[pyrealsense2.pyrealsense2.frame], None]) -> rs2::pipeline_profile
Start the pipeline streaming with its default configuration. The pipeline captures samples from the device
and delivers them through the provided frame callback. Starting the pipeline is possible only when it is
not started; if the pipeline was already started, an exception is raised. When starting the pipeline with a
callback, both wait_for_frames() and poll_for_frames() will throw an exception.

4. start(self: pyrealsense2.pyrealsense2.pipeline, config: rs2::config, callback: Callable[[pyrealsense2.pyrealsense2.frame], None]) -> rs2::pipeline_profile
Start the pipeline streaming according to the configuration. The pipeline captures samples from the device
and delivers them through the provided frame callback, with the same device-selection and config resolve()
behavior as overload 1, and the same exception behavior as overload 3.
"""

try:
    while True:
        # Block until a new, time-synchronized set of frames is available
        frames = pipeline.wait_for_frames()
        """
        wait_for_frames(self: pyrealsense2.pyrealsense2.pipeline, timeout_ms: int=5000) -> pyrealsense2.pyrealsense2.composite_frame
        Wait until a new set of frames becomes available. The frames set includes time-synchronized frames
        of each enabled stream in the pipeline. In case of different frame rates of the streams, the frames
        set includes a matching frame of the slow stream, which may have been included in a previous frames
        set. The method blocks the calling thread and fetches the latest unread frames set. Device frames
        which were produced while the function wasn't called are dropped. To avoid frame drops, this method
        should be called as fast as the device frame rate. The application can keep the frame handles to
        defer processing. However, if the application keeps too long a history, the device may lack memory
        resources to produce new frames, and the following call to this method shall fail to retrieve new
        frames until resources become available.
        """
        depth = frames.get_depth_frame()
        """
        get_depth_frame(self: pyrealsense2.pyrealsense2.composite_frame) -> rs2::depth_frame
        Retrieve the first depth frame; if no frame is found, return an empty frame instance.
        """
        print(type(frames))
        # <class 'pyrealsense2.pyrealsense2.composite_frame'>
        print(type(depth))
        # <class 'pyrealsense2.pyrealsense2.depth_frame'>
        print(frames)
        # <pyrealsense2.pyrealsense2.composite_frame object at 0x000001E4D0AAB7D8>
        print(depth)
        # <pyrealsense2.pyrealsense2.depth_frame object at 0x000001E4D0C4B228>

        # If no depth frame was received, skip to the next iteration
        # (if depth is empty, `not depth` is True; if depth holds a frame, `not depth` is False)
        if not depth:
            continue
        print('not depth:', not depth)
        # not depth: False

        depth_data = depth.as_frame().get_data()
        print('type of depth_data:', type(depth_data))
        # type of depth_data: <class 'pyrealsense2.pyrealsense2.BufData'>
        print(depth_data)
        # <pyrealsense2.pyrealsense2.BufData object at 0x0000024F5D07BA40>
        np_image = np.asanyarray(depth_data)
        print('type of np_image:', type(np_image))
        # type of np_image: <class 'numpy.ndarray'>
        # print('np_image:', np_image)
        print('shape of np_image:', np_image.shape)
        # (480, 640)

        # Print a simple text-based representation of the image, by breaking it into 10x20 pixel
        # regions and approximating the coverage of pixels within one meter
        coverage = [0] * 64
        print(type(coverage))
        # <class 'list'>
        print(coverage)
        # [0, 0, 0, ..., 0]  (a list of 64 zeros)
        for y in range(480):
            for x in range(640):
                # Depth in meters at pixel (x, y) of the current depth image
                dist = depth.get_distance(x, y)
                """
                get_distance(self: pyrealsense2.pyrealsense2.depth_frame, x: int, y: int) -> float
                Provide the depth in meters at the given pixel
                """
                # If the depth at (x, y) is within 1 m, increment the list element responsible for
                # this column of pixels (e.g. x in 0..9 maps to coverage[0]); every 10 pixels in the
                # x direction merge into one region, 640 / 10 = 64 regions in total
                if 0 < dist < 1:
                    coverage[x // 10] += 1

            # Every 20 rows in the y direction merge into one region, 480 / 20 = 24 regions in total
            if y % 20 == 19:
                line = ""
                # A coverage element is at most 200 (all 10x20 pixels of a region within range),
                # so c // 25 is at most 8; characters of increasing ink coverage approximate the
                # depth image as text
                for c in coverage:
                    line += " .:nhBXWW"[c // 25]
                # Reset the coverage list
                coverage = [0] * 64
                print(line)
finally:
    pipeline.stop()
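Since the depth frame converts to a (480, 640) uint16 array, the per-pixel get_distance() loop above can also be expressed as NumPy array operations, which is the whole point of the buffer-protocol integration. The sketch below is not part of the original example: it assumes the default depth stream is 640x480 as in the printout above, and it converts raw depth units to meters with the sensor's depth scale (first_depth_sensor().get_depth_scale()):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
# Raw uint16 depth values are in device-specific units; the depth scale converts them to meters
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        depth_m = np.asanyarray(depth.get_data()) * depth_scale  # (480, 640) depths in meters
        in_range = (depth_m > 0) & (depth_m < 1)                 # pixels within one meter
        # Count in-range pixels per 20x10 block: (480, 640) -> (24, 20, 64, 10) -> (24, 64)
        counts = in_range.reshape(24, 20, 64, 10).sum(axis=(1, 3))
        for row in counts:
            # Each block holds at most 200 pixels, so c // 25 is at most 8
            print("".join(" .:nhBXWW"[c // 25] for c in row))
finally:
    pipeline.stop()

Per frame this replaces 640 * 480 = 307,200 Python-level get_distance() calls with a handful of vectorized operations.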
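The start() overloads quoted above also accept an rs2::config (overloads 1 and 4). Below is a minimal sketch of overload 1; the stream parameters (640x480, z16, 30 fps) are chosen for illustration and must be supported by the connected device:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Request only the depth stream: 640x480, 16-bit depth (z16), 30 fps
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Overload 1: the pipeline tries to activate the resolve() result of this config
profile = pipeline.start(config)
print(profile.get_stream(rs.stream.depth))  # the depth stream profile that was selected
pipeline.stop()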