Advanced AR Effects and Precise Localization
Getting the site position in AR Space
Once VPS2 is tracking the site, NSDK delivers localization updates through the session.anchorUpdated subject.
By subscribing to this subject, we receive the latest AR poses of the individual tracked anchors. Since our application navigates to only a single site,
we can expose the transform matrix directly from the anchor update.
session.anchorUpdated
    .receive(on: DispatchQueue.main)
    .sink { [weak self] id, update in
        guard let self else { return }
        if let poiAnchorId = self.poiAnchorId, poiAnchorId != id {
            return
        }
        self.anchorWorldTransform = update.trackingData?.targetAnchorTransform
    }
    .store(in: &cancellables)
The anchorWorldTransform is the 4x4 transform matrix that corresponds to the best estimated pose of the target site. In order to render its position in ARKit, we need to add an AnchorEntity to RealityKit's Scene. Since the position can change given localization updates, we create the VPS2SceneContent class to manage creation and updating of this AnchorEntity.
// Creating the Anchor
// Note: in VPS2SceneContent, we use a dictionary to store the AnchorEntity for each anchorId,
// even though we only track a single anchor. This keeps the logic simple if we later track
// more than one anchor, or switch the anchor being tracked.
private var anchors: [NSDKVpsAnchorId: AnchorEntity] = [:]

func getOrCreateAnchor(anchorId: NSDKVpsAnchorId) -> AnchorEntity? {
    guard let scene else { return nil }
    if let existing = anchors[anchorId] { return existing }
    let anchor = AnchorEntity(world: .zero)
    anchor.name = String(describing: anchorId)
    scene.addAnchor(anchor)
    anchors[anchorId] = anchor
    return anchor
}
// Updating the Anchor
func setAnchorTransform(anchorId: NSDKVpsAnchorId, matrix: simd_float4x4) {
    guard let anchor = anchors[anchorId] else { return }
    anchor.transform = Transform(matrix: matrix)
}
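A minimal sketch of how these two methods might be driven from the localization subscription. This glue code is an assumption (the sample may wire it differently); it only uses names that already exist in the snippets above:

```swift
// Hypothetical wiring: forward each localization update into VPS2SceneContent.
// Creates the AnchorEntity lazily on the first update, then moves it on each one.
session.anchorUpdated
    .receive(on: DispatchQueue.main)
    .sink { [weak self] id, update in
        guard let self,
              let matrix = update.trackingData?.targetAnchorTransform else { return }
        // getOrCreateAnchor is a no-op after the first call for this id.
        guard self.sceneContent?.getOrCreateAnchor(anchorId: id) != nil else { return }
        self.sceneContent?.setAnchorTransform(anchorId: id, matrix: matrix)
    }
    .store(in: &cancellables)
```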
Drawing the AR Navigation
After we obtain the site transform, we can render the effects needed for AR Navigation. The sample implements two effects: (1) rendering a small marker at the destination and (2) rendering a series of chevron arrows on the ground to indicate the direction to the destination.
Destination Marker
The destination marker is straightforward to render: it is a 2D billboard that shows the location pin icon. For the full implementation, see DestinationMarkerEntity.swift.
final class DestinationMarkerEntity: Entity {
    init(view: NSDKView) {
        super.init()
        build(view: view)
    }

    required init() {
        super.init()
    }

    private func build(view: NSDKView) {
        // Configure texture, material, geometry and billboarding.
        // See DestinationMarkerEntity.swift for the full implementation.
    }
}
We parent this marker to the anchor entity we created earlier to position it at the site's location.
func createDestinationMarker(view: NSDKView) {
    guard let scene else { return }
    let marker = DestinationMarkerEntity(view: view)
    let anchor = AnchorEntity(world: .zero)
    anchor.isEnabled = false
    anchor.addChild(marker)
    scene.addAnchor(anchor)
    destinationMarkerAnchor = anchor
}
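Each frame, the marker anchor needs to follow the tracked site. A possible sketch of VPS2SceneContent's per-frame update is below; the 0.5 m height offset and the distance-based scaling are assumptions, not the sample's exact implementation:

```swift
// Hypothetical sketch: keep the destination marker at the tracked site's
// position. The entity's own billboarding (configured in
// DestinationMarkerEntity) handles facing the camera.
func updateDestinationMarker(cameraTransform: simd_float4x4) {
    guard let anchor = destinationMarkerAnchor,
          let id = poiAnchorId,
          let siteAnchor = anchors[id] else { return }
    var position = siteAnchor.position(relativeTo: nil)
    position.y += 0.5   // assumed offset: lift the pin above the floor
    anchor.position = position
    // Assumed nicety: scale with distance so the pin stays legible far away.
    let cameraPosition = SIMD3<Float>(cameraTransform.columns.3.x,
                                      cameraTransform.columns.3.y,
                                      cameraTransform.columns.3.z)
    let distance = simd_length(cameraPosition - position)
    anchor.scale = SIMD3<Float>(repeating: max(1, distance / 5))
}
```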
Chevron Arrows
The chevron arrows are slightly more complex. They are a sequence of chevron shapes that lie on the detected floor and point towards the destination.
The arrows are constructed in ChevronGroupEntity.swift: each arrow is a pair of ModelEntities, one for the left arm and one for the right. For the full implementation, see ChevronGroupEntity.swift.
final class ChevronGroupEntity: Entity {
    init() {
        super.init()
        build()
    }

    required init() {
        super.init()
    }

    private func build() {
        // Configure the chevrons.
        // See ChevronGroupEntity.swift for the full implementation.
    }
}
In order to render this entity at the right position, we need to use plane detection to find the height of the ground plane. The horizontal planes are published in the ARManager; we receive them in the view controller and forward them to the VPS2SceneContent. (We also need the camera transform to position the arrows directly below the camera.)
arManager.frameState.$camera
    .combineLatest(arManager.frameState.$horizontalPlanes)
    .receive(on: DispatchQueue.main)
    .sink { [weak self] camera, planes in
        guard let self, let cam = camera else { return }
        let t = cam.transform
        self.sceneContent?.updatePoiArrow(cameraTransform: t, horizontalPlanes: planes)
        self.sceneContent?.updateDestinationMarker(cameraTransform: t)
    }
    .store(in: &cancellables)
Using the detected planes, we filter for the ground and compute the best estimate of its height. We then use this ground offset to place the ChevronGroupEntity below the camera and point it towards the destination.
func updatePoiArrow(cameraTransform: simd_float4x4, horizontalPlanes: [UUID: ARPlaneAnchor]) {
    // Availability checks omitted. `arrow` (the ChevronGroupEntity) and
    // `poiAnchor` (the tracked site's AnchorEntity) are stored properties.
    let cameraPosition = SIMD3<Float>(cameraTransform.columns.3.x,
                                      cameraTransform.columns.3.y,
                                      cameraTransform.columns.3.z)

    // Finding the best ground plane: prefer planes classified as floor,
    // falling back to the highest plane well below the camera.
    let planesBelow = horizontalPlanes.values.filter {
        $0.transform.columns.3.y < cameraPosition.y - 0.3
    }
    let classifiedFloor = planesBelow
        .filter { $0.classification == .floor }
        .max(by: { $0.transform.columns.3.y < $1.transform.columns.3.y })
    let fallbackFloor = planesBelow
        .max(by: { $0.transform.columns.3.y < $1.transform.columns.3.y })
    let groundY: Float
    if let best = classifiedFloor ?? fallbackFloor {
        groundY = best.transform.columns.3.y + 0.01
    } else {
        groundY = cameraPosition.y - 1.0
    }

    // Setting the position of the chevron arrow entity
    let arrowPosition = SIMD3<Float>(cameraPosition.x, groundY, cameraPosition.z)
    arrow.position = arrowPosition

    // Pointing the chevron arrow towards the destination
    let poiWorldPosition = poiAnchor.position(relativeTo: nil)
    let toPoiFlat = simd_normalize(SIMD3<Float>(
        poiWorldPosition.x - arrowPosition.x,
        0,
        poiWorldPosition.z - arrowPosition.z
    ))
    arrow.orientation = simd_quatf(from: [0, 0, -1], to: toPoiFlat)
}
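The yaw-only rotation at the end can be sanity-checked in isolation. This standalone snippet (names are illustrative; only the simd module is used) confirms that rotating the model's forward axis [0, 0, -1] by the computed quaternion reproduces the flattened direction toward the destination:

```swift
import simd

// Standalone check of the chevron orientation math from updatePoiArrow.
let arrowPosition = SIMD3<Float>(0, 0, 0)
let poiWorldPosition = SIMD3<Float>(3, 5, -4)   // destination above and ahead

// Project the direction onto the horizontal plane, as in the sample.
let toPoiFlat = simd_normalize(SIMD3<Float>(
    poiWorldPosition.x - arrowPosition.x,
    0,
    poiWorldPosition.z - arrowPosition.z
))
let orientation = simd_quatf(from: SIMD3<Float>(0, 0, -1), to: toPoiFlat)

// Rotating the forward axis by the quaternion should land on toPoiFlat.
let rotated = orientation.act(SIMD3<Float>(0, 0, -1))
assert(simd_length(rotated - toPoiFlat) < 1e-5)
```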
Detecting precise localization
The trackingState of VPS2 reports the accuracy of the localization. This is exposed as part of the anchorUpdated event from the VPS2Session. By exposing this in the view model, we can use it to transition the view between coarse guidance mode and precise localization alignment mode.
// class VPS2ViewModel
@Published private(set) var anchorTrackingState: VpsAnchorUpdate.AnchorTrackingState?

init(session: NSDKVps2Session, frameState: ARFrameState) {
    session.anchorUpdated
        .receive(on: DispatchQueue.main)
        .sink { [weak self] id, update in
            // Other processing code
            self?.anchorTrackingState = update.trackingState
        }
        .store(in: &cancellables)
}
// class VPS2ARViewController: the tracking state is exposed to the view controller and used to update the view.
private func bindAnchorTrackingState() {
    // ... Other binding code ...
    viewModel.$anchorTrackingState
        .receive(on: DispatchQueue.main)
        .sink { [weak self] trackingState in
            guard let self, let trackingState else { return }
            self.sceneContent?.updatePoiVisibility(trackingState: trackingState)
        }
        .store(in: &cancellables)
}
// class VPS2SceneContent: the visibility of the markers is updated based on the tracking state.
func updatePoiVisibility(trackingState: VpsAnchorUpdate.AnchorTrackingState) {
    lastPoiUpdateType = trackingState
    guard let id = poiAnchorId else { return }
    guard anchors[id] != nil else { return }
    switch trackingState {
    case .notTracked:
        coarseMarker.isEnabled = false
        preciseMarker.isEnabled = false
        destinationMarkerAnchor?.isEnabled = false
    case .limited:
        coarseMarker.isEnabled = true
        preciseMarker.isEnabled = false
        destinationMarkerAnchor?.isEnabled = true
    case .tracked:
        coarseMarker.isEnabled = false
        preciseMarker.isEnabled = true
        destinationMarkerAnchor?.isEnabled = true
    }
}
Downloading mesh
A VPS2 site may include a 3D mesh. To render the mesh at the site, aligned with real-world positions during precise localization, we first need to download the mesh asset. This is done ahead of time to minimize latency. The acquireMeshDownloader method provides a downloader for the mesh asset of a given anchor payload.
class MeshLoader {
    private var meshDownloadTask: Task<Void, Never>?

    func start(
        anchorPayload: String,
        session: NSDKSession,
        onLoaded: @MainActor @escaping (MeshDownloaderResults) -> Void
    ) {
        cancel()
        let downloader = session.acquireMeshDownloader()
        meshDownloadTask = Task { [anchorPayload, downloader] in
            defer { session.destroy(downloader) }
            do {
                let results = try await downloader.requestLocationMesh(
                    payload: anchorPayload,
                    getTexture: false
                )
                try Task.checkCancellation()
                await onLoaded(results)
            } catch is CancellationError {
                // expected
            } catch {
                print("Failed to download mesh: \(error)")
            }
        }
    }

    func cancel() {
        meshDownloadTask?.cancel()
        meshDownloadTask = nil
    }
}
We expose the mesh loader in the view model for easy access from the view controller:
func startMeshDownload(anchorPayload: String, session: NSDKSession,
                       onLoaded: @MainActor @escaping (MeshDownloaderResults) -> Void) {
    meshLoader.start(anchorPayload: anchorPayload, session: session, onLoaded: onLoaded)
}

func cancelMeshDownload() {
    meshLoader.cancel()
}
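A hedged sketch of the call site in the view controller. The exact lifecycle hook and property names here are assumptions; the sketch only shows that the download starts early and the results are forwarded to the scene content:

```swift
// Hypothetical call site: start the download as soon as the anchor payload
// is known, well before precise localization is achieved.
viewModel.startMeshDownload(anchorPayload: anchorPayload, session: session) { [weak self] results in
    // onLoaded runs on the main actor, so it is safe to touch the scene here.
    self?.sceneContent?.loadMeshIntoPreciseMarker(results)
}

// If the screen is dismissed before the download completes:
// viewModel.cancelMeshDownload()
```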
Placing the downloaded mesh at the aligned position
It is straightforward to place the downloaded mesh at the real world position of the site. The mesh is already aligned with the localized transform of the anchor, so we simply need to create a ModelEntity and parent it to the anchor entity we created earlier.
func loadMeshIntoPreciseMarker(_ meshResults: MeshDownloaderResults) {
    preciseMarker.children.removeAll()
    loadMesh(meshResults, to: preciseMarker)
}

private func loadMesh(_ meshResults: MeshDownloaderResults, to parent: Entity) {
    let material = UnlitMaterial(color: .white)
    for result in meshResults.results {
        guard let meshResource = result.meshData.toWireframeMeshResource() else { continue }
        let modelEntity = ModelEntity(mesh: meshResource, materials: [material])
        modelEntity.transform = Transform(matrix: result.transform)
        parent.addChild(modelEntity)
    }
}
Drawing the mesh with wireframe effect
To showcase alignment, we want to draw the mesh with a wireframe effect. The mesh is a standard 3D textured mesh that must be processed to generate a wireframe representation. See MeshDataExtension.swift for the full implementation; our approach is to replace each edge of every triangle with a thin, flat quad.
extension MeshData {
    public func toWireframeMeshResource() -> MeshResource? {
        // Get the positions of the vertices
        let numSourceVertices = verticesPtr.count / 3
        var sourcePositions = [SIMD3<Float>]()
        sourcePositions.reserveCapacity(numSourceVertices)
        for i in 0..<numSourceVertices {
            let idx = i * 3
            sourcePositions.append(SIMD3<Float>(
                verticesPtr[idx],
                -verticesPtr[idx + 1],
                -verticesPtr[idx + 2]
            ))
        }

        // Create the wireframe mesh
        var outPositions = [SIMD3<Float>]()
        var outIndices = [UInt32]()
        let triangleCount = indicesPtr.count / 3
        for t in 0..<triangleCount {
            // Identify the edges of triangle t,
            // create a quad for each edge, and
            // append the new positions and indices to the output.
            // For the full implementation, see MeshDataExtension.swift.
        }

        // Create the mesh resource
        var descriptor = MeshDescriptor(name: "WireframeMeshDescriptor")
        descriptor.positions = MeshBuffers.Positions(outPositions)
        descriptor.primitives = .triangles(outIndices)
        do {
            return try MeshResource.generate(from: [descriptor])
        } catch {
            return nil
        }
    }
}
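The elided per-edge step admits many implementations. One possible sketch, an assumption rather than the sample's actual code, expands a single edge into a thin quad lying in its triangle's plane:

```swift
import simd

// Hypothetical helper: expand the edge (a, b) into a thin quad of the given
// width. The quad is offset along a direction perpendicular to the edge
// within the triangle's plane (whose normal is n). Returns 4 positions and
// 6 indices (two triangles), offset by baseIndex into the output buffers.
func edgeQuad(a: SIMD3<Float>, b: SIMD3<Float>, normal n: SIMD3<Float>,
              width: Float, baseIndex: UInt32) -> ([SIMD3<Float>], [UInt32]) {
    let edge = simd_normalize(b - a)
    // Perpendicular to the edge, in the plane of the triangle.
    let side = simd_normalize(simd_cross(n, edge)) * (width / 2)
    let positions = [a - side, a + side, b + side, b - side]
    let indices: [UInt32] = [0, 1, 2, 0, 2, 3].map { $0 + baseIndex }
    return (positions, indices)
}
```

The caller would invoke this once per edge of each triangle, appending the returned positions and indices to `outPositions` and `outIndices` and advancing `baseIndex` by 4 each time.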
Recall that in Detecting precise localization, we toggle the visibility of the coarse marker and the precise marker based on the tracking state. This will present the wireframe mesh when the user achieves precise localization, and show the chevron guidance when the user is further away. With both of these features implemented, the AR navigation experience is complete.