CoreML & ARKit 3
Outline
Recognizing Objects in Live Capture
Official demo: classifying static images
ARKit3
Introducing ARKit 3
ARKit is the groundbreaking augmented reality (AR) platform for iOS that can transform how people connect with the world around them. Explore the state-of-the-art capabilities of ARKit 3 and discover the innovative foundation it provides for RealityKit. Learn how ARKit makes AR even more immersive through understanding of body position and movement for motion capture and people occlusion. Check out additions for multiple face tracking, collaborative session building, a coaching UI for on-boarding, and much more.
https://developer.apple.com/videos/play/wwdc2019/604/
The ARKit pipeline: tracking, scene understanding, rendering.
SceneKit: 3D scenes
SpriteKit: 2D games
Metal: low-level rendering, comparable to OpenGL
New this year: RealityKit
Occlusion: real-world objects or people hiding virtual content
With People Occlusion you can place a 3D model between two people.
Machine learning separates the people from the rest of the scene (depth estimation).
Look closely and there are still artifacts: a person standing behind an object should be hidden by it, not drawn on top of it.
ARKit 3 can separate people by distance, so with that extra depth layer a virtual object can be placed between them.
On Apple's processors this all runs in real time, processing each frame as it plays.
The API adds frameSemantics options for detecting people.
.personSegmentation cuts people out of the scene; it suits cases where the person always stands in front of the 3D model.
The second option, .personSegmentationWithDepth, also estimates how far away each person is, so virtual content can pass in front of or behind them.
The session shows a demo of a vase being occluded.
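The two frame semantics above are enabled on a world-tracking configuration. A minimal sketch (the helper name is my own; the ARKit APIs are real):

```swift
import ARKit

// Hypothetical helper: build a configuration with people occlusion enabled.
func makeOcclusionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    // Prefer depth-aware segmentation so virtual content can sit between
    // people; fall back to plain segmentation on devices that lack it.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    } else if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
        configuration.frameSemantics.insert(.personSegmentation)
    }
    return configuration
}

// Usage: arView.session.run(makeOcclusionConfiguration())
```

Gating on supportsFrameSemantics matters because segmentation requires an A12 chip or newer; running an unsupported configuration raises an error.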
Motion Capture
Body tracking is driven by the .bodyDetection frame semantic.
A new ARBodyTrackingConfiguration was added for it.
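A sketch of running body tracking and receiving body anchors (the class name is hypothetical; the configuration and anchor types are from ARKit 3):

```swift
import ARKit

// Hypothetical wrapper around an ARSession doing body tracking.
final class BodyTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Requires an A12 chip or newer; always gate on isSupported.
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    // Tracked bodies arrive through the normal anchor callbacks.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let body as ARBodyAnchor in anchors {
            // transform is the root (hip) joint; skeleton holds the rest.
            print(body.transform.columns.3,
                  body.skeleton.jointModelTransforms.count)
        }
    }
}
```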
Using the front and back cameras at the same time
Facial expressions captured by the front camera can be applied to content in the scene filmed by the back camera, and both are tracked simultaneously.
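A sketch of opting into simultaneous cameras: world tracking on the back camera with the front (TrueDepth) camera tracking the user's face at the same time.

```swift
import ARKit

// Returns nil on devices that cannot run both cameras together.
func makeSimultaneousCameraConfiguration() -> ARWorldTrackingConfiguration? {
    guard ARWorldTrackingConfiguration.supportsUserFaceTracking else { return nil }
    let configuration = ARWorldTrackingConfiguration()
    // Face data from the front camera is delivered into the
    // world-tracking session as ARFaceAnchor updates.
    configuration.userFaceTrackingEnabled = true
    return configuration
}
```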
Two different users can share the scene they observe; their sessions merge once both devices see the same scene feature points.
Each of those points is an ARAnchor.
ARKit 3 face tracking keeps a per-person ID within a session: if someone leaves the frame and later re-enters, they get the same ID back. Re-initializing the session discards that saved information.
Scene understanding has also been strengthened, with improvements to machine learning, object detection, and related techniques.
Recording a scene for replay
Testing ARKit normally means going to a specific physical location, which is inconvenient. This release adds the ability to record a scene and replay it, so tests can be repeated.
An extra option for this appears in Settings.
Building Collaborative AR Experiences
With iOS 13, ARKit and RealityKit enable apps to establish shared AR experiences faster and easier than ever. Understand how collaborative sessions allow multiple devices to build a combined world map and share AR anchors and updates in real-time. Learn how to incorporate collaborative sessions into ARKit-based apps, then roll into SwiftStrike, an engaging and immersive multiplayer AR game built using RealityKit and Swift.
https://developer.apple.com/videos/play/wwdc2019/610/
Two users can share the scene and the objects placed in it, with synchronized display on both devices.
Sharing takes two delegate methods: one outputs collaboration data to send to peers, the other updates the session with data received from a peer.
An anchor's sessionIdentifier tells you whether it was added locally or by another user.
If user A adds a cube to the scene, user B can see that cube from another viewpoint, e.g. from the side.
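The two delegate hooks described above might look like this sketch (the transport for the payload, e.g. MultipeerConnectivity, is up to the app; the class name is hypothetical):

```swift
import ARKit

final class CollaborationController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true  // opt in to collaboration
        session.delegate = self
        session.run(configuration)
    }

    // 1) ARKit hands us collaboration data to send to peers.
    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        if let payload = try? NSKeyedArchiver.archivedData(
            withRootObject: data, requiringSecureCoding: true) {
            // send `payload` over your own networking transport
            _ = payload
        }
    }

    // 2) When a peer's payload arrives, feed it back into the session.
    func receive(_ payload: Data) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARSession.CollaborationData.self, from: payload) {
            session.update(with: data)
        }
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            // sessionIdentifier distinguishes local anchors from a peer's.
            let isMine = anchor.sessionIdentifier == session.identifier
            print("anchor added, mine: \(isMine)")
        }
    }
}
```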
Bringing People into AR
ARKit 3 enables a revolutionary capability for robust integration of real people into AR scenes. Learn how apps can use live motion capture to animate virtual characters or be applied to 2D and 3D simulation. See how People Occlusion enables even more immersive AR experiences by enabling virtual content to pass behind people in the real world.
https://developer.apple.com/videos/play/wwdc2019/607/
Machine learning extracts skeleton joints from the image.
If you have a character model with the same joint structure, the captured motion can be reused to drive it.
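A sketch of reading one joint from a tracked body, assuming a bodyAnchor delivered by an ARBodyTrackingConfiguration session:

```swift
import ARKit

// The skeleton exposes named joints; .head is one of the built-in names.
func headTransform(of bodyAnchor: ARBodyAnchor) -> simd_float4x4? {
    let skeleton = bodyAnchor.skeleton
    // modelTransform(for:) is relative to the body anchor (the hip joint);
    // it returns nil if that joint is not tracked.
    return skeleton.modelTransform(for: .head)
}
```

A rigged character whose skeleton matches these joint names can be driven by copying such transforms onto its bones every frame.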
CoreML&ARKit
Creating Great Apps Using Core ML and ARKit
Take a journey through the creation of an educational game that brings together Core ML, ARKit, and other app frameworks. Discover opportunities for magical interactions in your app through the power of machine learning. Gain a deeper understanding of approaches to solving challenging computer vision problems. See it all come to life in an interactive coding session.
https://developer.apple.com/videos/play/wwdc2019/228/
Find how many objects are in view and draw a bounding box around each one.
After detecting the dice, we also want to read the pip count on each top face.
Machine-learning digit recognition is then combined with the game logic.
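A sketch of how such detection results could be consumed with Vision (the dice-detection model itself is assumed, not shown):

```swift
import Vision

// Build a request that counts detected dice and reads their top label.
func makeDiceRequest(model: VNCoreMLModel) -> VNCoreMLRequest {
    VNCoreMLRequest(model: model) { request, _ in
        let dice = request.results as? [VNRecognizedObjectObservation] ?? []
        for die in dice {
            // boundingBox is in normalized image coordinates;
            // labels are ranked by confidence (e.g. a pip count).
            print(die.boundingBox, die.labels.first?.identifier ?? "?")
        }
        print("dice on the table: \(dice.count)")
    }
}
```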
Interaction also works through speech recognition, even without a network connection.
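The offline voice interaction relies on the Speech framework's on-device mode, new in iOS 13. A sketch:

```swift
import Speech

// Start a recognition task, keeping audio on the device when supported.
func startListening(recognizer: SFSpeechRecognizer,
                    request: SFSpeechAudioBufferRecognitionRequest) {
    if recognizer.supportsOnDeviceRecognition {
        // Works without a network connection; audio never leaves the device.
        request.requiresOnDeviceRecognition = true
    }
    let task = recognizer.recognitionTask(with: request) { result, _ in
        if let result = result {
            print(result.bestTranscription.formattedString)
        }
    }
    _ = task
}
```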
The session then walks through a dice game built on the recognized values: the first player to reach 9 wins.
Recognizing Objects in Live Capture
Official demo: classifying static images
https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml
Official demo: recognizing objects in live camera capture
https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
Other pretrained models provided by Apple can be downloaded here:
https://developer.apple.com/machine-learning/models/
With the Vision framework, you can recognize objects in live capture. Starting in iOS 12, macOS 10.14, and tvOS 12, Vision requests made with a Core ML model return results as VNRecognizedObjectObservation objects, which identify objects found in the captured scene.
Check the model parameters in Xcode to find out if your app requires a resolution smaller than 640 x 480 pixels.
Set the camera resolution to the nearest resolution that is greater than or equal to the resolution of images used in the model.
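For example, with a 640 × 480 model input the matching capture preset can be chosen like this (a sketch, not taken from the official sample):

```swift
import AVFoundation

// Pick the smallest preset that still covers the model's input size.
func configureCapture(_ session: AVCaptureSession) {
    session.beginConfiguration()
    if session.canSetSessionPreset(.vga640x480) {
        session.sessionPreset = .vga640x480
    }
    session.commitConfiguration()
}
```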
The Vision request handler is created from a pixel buffer:
- (instancetype)initWithCVPixelBuffer:(CVPixelBufferRef)pixelBuffer options:(NSDictionary<VNImageOption, id> *)options;
This means the camera frame, or a photo the user picks, has to be converted to a CVPixelBuffer before being passed in.
One annoyance is that the image also has to be resized to the dimensions the model declares.
Creating a VNCoreMLRequest removes that manual cropping step: Vision scales the image for us, and the results arrive in the completionHandler callback.
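A sketch of that flow, assuming `model` wraps whatever .mlmodel the app bundles:

```swift
import Vision

// Classify one camera frame; Vision crops/scales the buffer
// to the model's input size before running inference.
func classify(pixelBuffer: CVPixelBuffer, model: VNCoreMLModel) throws {
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }
    request.imageCropAndScaleOption = .centerCrop
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}
```

centerCrop keeps the aspect ratio by trimming the long edge, which suits live camera frames; .scaleFill is an alternative when the whole frame must be seen.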
Demo of Core ML object detection inside an ARKit 3D scene
https://github.com/hanleyweng/CoreML-in-ARKit
Testing and modifications of the demo
https://github.com/gwh111/testARKitWithAction
Summary