How to enable scan visualization in AR
Scan visualization gives users real-time feedback while they scan. Areas that have been sufficiently captured appear in full color, while incomplete areas remain striped. This makes it easier to understand scan coverage while recording.
This guide is for developers who are building custom scan capture flows in Unity, Swift, or Kotlin and want to add that feedback to their app. It shows how to enable raycast visualization, connect it to the scanning session, and render the striped overlay during capture so users can spot gaps before saving a scan.
Raycast visualization is the scan visualization mode that draws this striped overlay from the scanning session's raycast data. It is useful during development because it helps you confirm that scan coverage is complete, spot gaps before saving a scan, and verify that your capture UI is showing the visualization at the right time.

Figure: Raycast visualization overlays stripes on incomplete areas of a scan.
In Unity, scan visualization is enabled by turning on raycast visualization in AR Scanning Manager and rendering the result with the Unlit/NsdkScanningStripes shader. At a high level, a Unity implementation needs to:
- enable raycast visualization on AR Scanning Manager
- provide depth data for scanning
- render the raycast visualization textures with a material
- show that overlay only while scan visualization is active
The rest of this Unity walkthrough shows one runnable implementation of that flow using the Recording scene from nsdk-samples-csharp. In that sample-based implementation:
- the Recording scene already provides the capture lifecycle and UI
- RecordingDemo.StartScanning() and RecordingDemo.StopScanning() continue to control when scanning starts and stops
- the only new Unity-specific logic added in this guide is the visualization renderer
To enable scan visualization in Unity:
- Prepare your Unity project
- Enable raycast visualization in AR Scanning Manager
- Add a visualization renderer to your AR screen
- Start and stop the visualization with capture
- Update the visualization while AR frames arrive
Prepare your Unity project
This section shows how to start from the existing Unity Recording sample so you can build a runnable scan-visualization example on top of a working AR scene rather than setting up the full flow from scratch.
- Clone nsdk-samples-csharp:
git clone https://github.com/nianticspatial/nsdk-samples-csharp.git
- Open the Unity samples project from that repo.
- Open Assets/Samples/Scanning/Scenes/Recording.unity.
- Build and run the sample on your device.
This walkthrough assumes:
- the Recording sample builds and runs on your device
- the scene already includes a basic AR setup
- you are working in the current URP-configured sample project
- RecordingDemo continues to own scan start and stop in the following runnable example
For general setup steps, see Setting up the Niantic SDK for Unity and Setting up a Basic AR Scene.
This runnable example uses a URP-compatible render pass, so you do not need to change the sample project's render pipeline settings.
Enable raycast visualization in AR Scanning Manager
This section shows how to configure the scanning component so it produces the raycast visualization data needed for the striped overlay. In your own Unity app, enable raycast visualization on AR Scanning Manager and make sure scanning has access to depth data.
In a Unity app that uses NSDK scanning:
- Select a GameObject that has AR Scanning Manager, or add AR Scanning Manager to the GameObject that owns your scan flow. In the following runnable code, see step ⅰ for one implementation.
- Turn on Enable Raycast Visualization so the scanning system generates the raycast visualization textures. In the following runnable code, see step ⅱ for one implementation.
- Enable Record Estimated Depth if your app needs NSDK-generated depth. In the following runnable code, see step ⅲ for one implementation.
- If your device has LiDAR, you can leave Record Estimated Depth disabled and instead provide depth through an AR Occlusion Manager on your camera setup. In the following runnable code, see step ⅲ for one implementation.
- Keep the component's enabled state aligned with your app's scan lifecycle so visualization is only active while scanning is active. In the following runnable code, see step ⅳ for one implementation.
If your device has LiDAR, you can leave Record Estimated Depth disabled. Ensure that Prefer LiDAR if Available is enabled in Niantic SDK settings under XR Plug-in Management, and add an AR Occlusion Manager to the camera object used by your XR Origin so LiDAR depth is available to scanning.
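If you prefer to configure these settings from code instead of the Inspector, the following minimal sketch shows one way to do it at startup. EnableRaycastVisualization is the same property the renderer checks later in this guide; the RecordEstimatedDepth property name is an assumption based on the Inspector label, so verify it against your NSDK version before relying on it.
using NianticSpatial.NSDK.AR.Scanning;
using UnityEngine;
public class ScanVisualizationConfig : MonoBehaviour
{
    // The scanning manager this sketch configures at startup.
    [SerializeField] private ARScanningManager _arScanningManager;
    private void Awake()
    {
        // Produce the raycast visualization textures while scanning.
        _arScanningManager.EnableRaycastVisualization = true;
        // Assumption: the property name mirrors the Record Estimated Depth label.
        // Enable it only when LiDAR depth is not available on the device.
        // _arScanningManager.RecordEstimatedDepth = true;
        // Keep the manager disabled so the capture flow controls when scanning starts.
        _arScanningManager.enabled = false;
    }
}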
Expand to reveal a minimal Unity example for scan-visualization setup
This example keeps the Unity sample changes to the minimum required for this step. It gives you a runnable Recording-scene baseline that is configured to produce raycast visualization data before any rendering code is added.
Use Assets/Samples/Scanning/Scenes/Recording.unity from nsdk-samples-csharp as the runnable baseline for this example.
- ⅰ. In the Recording scene, select the top-level AR Session GameObject, which already has AR Scanning Manager attached.
- ⅱ. Turn on Enable Raycast Visualization.
- ⅲ. Enable Record Estimated Depth, or leave it disabled if your device has LiDAR and you have added an AR Occlusion Manager to the camera object used by your XR Origin.
- ⅳ. Leave AR Scanning Manager disabled in the scene so RecordingDemo.StartScanning() still controls when scanning begins.
At this stage, the sample is configured to produce raycast visualization data, but nothing is drawing that data on screen yet. The next section adds the material and camera component used to render it.
Add a visualization renderer to your AR screen
This section shows how to add the material and camera-side component that render scan visualization over the live AR view.
Follow this workflow to add the visualization renderer:
- Create a new Unity Material asset for the scan overlay, then select that new asset in the Inspector and set its shader to Unlit > NsdkScanningStripes. This shader is provided by the NSDK package and composites the striped overlay from the textures produced by AR Scanning Manager. In the following runnable code, see step ⅰ for one implementation.
- Add scan-visualization renderer logic to a camera-side component. You can create a new script for scan visualization, or extend an existing camera or overlay script that already owns your AR rendering flow. The component should store the visualization material and AR Scanning Manager references, get ARCameraManager from the same GameObject, and prepare the camera-side state that later sections use to render the overlay over the live AR view. In the following runnable example, see steps ⅱ and ⅲ for one implementation. The following code example shows the camera-side component shape used for this setup:
using NianticSpatial.NSDK.AR.Scanning;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
public class ScanVisualization : MonoBehaviour
{
// The material that composites the scan visualization over the camera image.
[SerializeField] private Material _raycastVisualizationMaterial;
// The scanning manager that provides the raycast visualization textures.
[SerializeField] private ARScanningManager _arScanningManager;
// The camera manager on this same camera object.
private ARCameraManager _arCameraManager;
private void Start()
{
_arCameraManager = GetComponent<ARCameraManager>();
if (_arCameraManager == null || _arScanningManager == null || _raycastVisualizationMaterial == null)
{
Debug.LogError("Assign all required components and serialized fields.");
return;
}
}
}
Expand the previous example to include the script file, camera attachment, material creation, and Inspector assignments if you want a runnable Unity version of this setup. In the following runnable example, steps ⅰ through ⅵ show one implementation.
- Prepare the camera-side component for rendering. Confirm that its required references are assigned, get ARCameraManager from the same GameObject, and initialize any state needed for the later rendering steps. In the following runnable example, see step ⅲ for one implementation.
- Add the camera-side component to the camera object that already owns ARCameraManager. In the Recording sample, that camera object is XR Origin > Camera Offset > Main Camera. In the following runnable example, see step ⅳ for one implementation.
- In the Inspector, assign the material you created to Raycast Visualization Material, then assign the ARScanningManager reference from the object that owns scanning. In the Recording sample, drag the top-level AR Session GameObject into Ar Scanning Manager, or use the object picker and choose AR Session (ARScanningManager). In the following runnable example, see steps ⅴ and ⅵ for one implementation.
Keep the sample's existing scan lifecycle unchanged. At this stage, the material and camera-side component are wired together, but the overlay does not appear until the later sections connect rendering to scan state and AR frame updates.
Expand to reveal a minimal Unity example for renderer setup
This example builds on the previous step and adds only the material plus a minimal ScanVisualization component. The script validates its references and attaches to XR Origin > Camera Offset > Main Camera, but it does not render the overlay yet.
- ⅰ. In the Project window, create a new Material asset named ScanningStripesMaterial, select it, and set its shader to Unlit > NsdkScanningStripes.
- ⅱ. In the Project window, create a new C# script named ScanVisualization.
- ⅲ. Replace the contents of ScanVisualization.cs with the following code:
using NianticSpatial.NSDK.AR.Scanning;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
public class ScanVisualization : MonoBehaviour
{
// The material that composites the scan visualization over the camera image.
[SerializeField] private Material _raycastVisualizationMaterial;
// The scanning manager that provides the raycast visualization textures.
[SerializeField] private ARScanningManager _arScanningManager;
// The camera manager on this same camera object.
private ARCameraManager _arCameraManager;
private void Start()
{
_arCameraManager = GetComponent<ARCameraManager>();
if (_arCameraManager == null || _arScanningManager == null || _raycastVisualizationMaterial == null)
{
Debug.LogError("Assign all required components and serialized fields.");
return;
}
}
}
- ⅳ. Add ScanVisualization to XR Origin > Camera Offset > Main Camera.
- ⅴ. In the Inspector, assign ScanningStripesMaterial to Raycast Visualization Material.
- ⅵ. In the Inspector, drag the top-level AR Session GameObject into Ar Scanning Manager, or use the object picker and choose AR Session (ARScanningManager).
When you test this step:
- Enter Play mode in Unity or run the sample on your device.
- Confirm that the scene still opens normally and that the existing scan UI is still visible.
- Confirm that no striped overlay appears yet, because the later rendering steps have not been added.
- Optionally, verify that "Assign all required components and serialized fields." does not appear in the Unity Console or device logs.
Start and stop the visualization with capture
This section shows how scan visualization fits into the existing capture flow so the overlay appears when scanning starts and disappears when scanning stops. In the Recording sample, RecordingDemo already owns that lifecycle, so the visualization component added in this guide does not define new start or stop handlers.
Keep your existing capture flow responsible for turning scanning on and off. In nsdk-samples-csharp, that code lives in Assets/Samples/Scanning/Scripts/RecordingDemo.cs. The following code example shows the portion of the sample that starts and stops scanning:
private void HandleCameraPermissionGranted()
{
_arScanningManager.ScanRecordingFramerate = (int)_framerateSlider.value;
_arScanningManager.enabled = true;
}
public void StartScanning()
{
_sharePlaybackButton.gameObject.SetActive(false);
_maxTimePerChunkSlider.interactable = false;
_framerateSlider.interactable = false;
_saveScanPanel.SetActive(false);
CheckCameraPermission();
}
public async void StopScanning()
{
await _arScanningManager.SaveScan();
_arScanningManager.enabled = false;
_maxTimePerChunkSlider.interactable = true;
_framerateSlider.interactable = true;
_saveScanPanel.SetActive(true);
}
In the previous code example:
- HandleCameraPermissionGranted() sets the scan framerate, then enables ARScanningManager, which is the state the visualization renderer later checks before drawing.
- StartScanning() leaves the sample's existing UI flow in place and does not need any visualization-specific branching.
- StopScanning() saves the scan, disables ARScanningManager, and returns control to the sample's save UI.
- The Unity renderer added in this guide follows that existing lifecycle by rendering only while ARScanningManager.enabled and EnableRaycastVisualization are both true.
This means the next section only needs to observe scan state and update the overlay while capture is active.
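In code terms, that observation is just the conjunction of the two flags the sample already controls. A minimal sketch of the condition, written as a helper property inside the ScanVisualization component (the name IsVisualizationActive is illustrative):
// True only while the capture flow has scanning turned on and
// raycast visualization is configured to produce textures.
private bool IsVisualizationActive =>
    _arScanningManager != null
    && _arScanningManager.enabled
    && _arScanningManager.EnableRaycastVisualization;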
Validate this step with the following workflow:
- Run the Recording scene on your device or in the Unity Editor with Playback enabled.
- Tap Start Scan to begin scanning.
- Confirm that the sample's existing scan UI responds normally and that scanning begins without any new visualization-specific buttons or errors.
- Tap Stop Scan to end scanning.
- Confirm that the sample returns to its existing save or export UI flow.
- Optionally, verify in the Unity Console or in device logs that no scan-save error appears while RecordingDemo.StopScanning() runs.
Update the visualization while AR frames arrive
This section shows how the visualization updates continuously as camera frames arrive so the overlay reflects the latest scan coverage in real time. It also explains how the ScanVisualization component keeps the raycast data aligned with the camera image before compositing it in a URP render pass.
This section builds on the camera-side renderer setup from the previous steps and assumes:
- the project uses the current URP-configured sample project
- the visualization material uses the Unlit/NsdkScanningStripes shader
- Update the camera-side renderer so it subscribes to ARCameraManager.frameReceived, copies camera images into a short queue, and enqueues a URP render pass that composites the latest raycast texture. The following code example shows one implementation; see step ⅰ of the runnable example at the end of this section:
private void Start()
{
// Reuse the camera and ARCameraManager already attached to this object.
_camera = GetComponent<Camera>();
_arCameraManager = GetComponent<ARCameraManager>();
if (_camera == null || _arCameraManager == null || _arScanningManager == null || _raycastVisualizationMaterial == null)
{
Debug.LogError("Assign all required components and serialized fields.");
return;
}
// Prepare the render-pass state, then listen for new camera frames.
_cameraTexturesQueue = new Queue<Texture2D>();
_renderPass = new ScanVisualizationRenderPass();
_fullScreenMesh = CreateFullScreenMesh();
_arCameraManager.frameReceived += OnARCameraFrameReceived;
RenderPipelineManager.beginCameraRendering += EnqueueUniversalRenderPass;
}
private void OnARCameraFrameReceived(ARCameraFrameEventArgs args)
{
// Only queue frames while scan visualization is active.
if (!(_arScanningManager.enabled && _arScanningManager.EnableRaycastVisualization))
{
return;
}
// Copy the latest camera image into a Texture2D, then enqueue it.
var newTexture = CopyLatestCameraFrame(args);
if (newTexture == null)
{
return;
}
EnqueueCameraTexture(newTexture);
}
private void EnqueueUniversalRenderPass(ScriptableRenderContext context, Camera currentCamera)
{
// Enqueue the URP pass only for this camera while visualization is active.
if (currentCamera != _camera ||
!(_arScanningManager.enabled && _arScanningManager.EnableRaycastVisualization) ||
_cameraTexturesQueue.Count <= Delay)
{
return;
}
UpdateVisualizationMaterial();
_renderPass.Material = _raycastVisualizationMaterial;
_renderPass.Mesh = _fullScreenMesh;
currentCamera.GetUniversalAdditionalCameraData().scriptableRenderer.EnqueuePass(_renderPass);
}
private void OnDestroy()
{
if (_arCameraManager != null)
{
_arCameraManager.frameReceived -= OnARCameraFrameReceived;
}
RenderPipelineManager.beginCameraRendering -= EnqueueUniversalRenderPass;
while (_cameraTexturesQueue != null && _cameraTexturesQueue.Count > 0)
{
Destroy(_cameraTexturesQueue.Dequeue());
}
}
In the previous code example:
- OnARCameraFrameReceived() captures camera images into a short queue only while scanning is active and raycast visualization is enabled
- GetRaycastColorTexture() provides the visualization texture used by the shader
- EnqueueUniversalRenderPass() updates the material and schedules the URP pass that draws the striped overlay for the active camera
- CopyLatestCameraFrame(), EnqueueCameraTexture(), UpdateVisualizationMaterial(), CreateFullScreenMesh(), and ScanVisualizationRenderPass are helper members that the runnable example defines so the main workflow can stay focused on the rendering flow; a sketch of CopyLatestCameraFrame() follows this list
- the two-frame delay helps keep the raycast data aligned with the camera image
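The following sketch shows one possible shape for the CopyLatestCameraFrame() helper named above. It mirrors the device path of the runnable example at the end of this section; in the Editor, that example copies the simulated camera texture with Graphics.CopyTexture instead.
private Texture2D CopyLatestCameraFrame(ARCameraFrameEventArgs args)
{
    // Convert the latest CPU camera image into an RGBA texture the material can sample.
    if (!_arCameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
    {
        return null;
    }
    var newTexture = new Texture2D(image.width, image.height, TextureFormat.RGBA32, false);
    var conversionParams = new XRCpuImage.ConversionParams(
        image, TextureFormat.RGBA32, XRCpuImage.Transformation.None);
    var rawTextureData = newTexture.GetRawTextureData<byte>();
    try
    {
        image.Convert(conversionParams, new NativeSlice<byte>(rawTextureData));
        newTexture.Apply();
    }
    finally
    {
        // Always release the CPU image, even if conversion fails.
        image.Dispose();
    }
    return newTexture;
}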
- Validate this step with the following workflow. In the following runnable example, see step ⅱ for one implementation:
- Run the scene on your device or in the Unity Editor with Playback enabled.
- Tap Start Scan to begin scanning.
- Confirm that diagonal stripes appear over the camera feed.
- Scan more of the scene, then confirm that covered areas transition from striped to full color.
- Tap Stop Scan, then confirm that the visualization disappears when scanning ends.
If you also want to save the recorded scan data, call SaveScan() on AR Scanning Manager before disabling it. For more information, see How to create playback datasets.
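If you do save, keep the ordering the sample uses: await the save before disabling the manager, so the recorded data is persisted before scanning, and with it the visualization, shuts down. A minimal sketch of that ordering, based on RecordingDemo.StopScanning() (the method name StopScanningAndSave is illustrative):
public async void StopScanningAndSave()
{
    // Persist the recorded scan data first, then stop scanning,
    // which also stops the visualization renderer from drawing.
    await _arScanningManager.SaveScan();
    _arScanningManager.enabled = false;
}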
Runnable example: Update Unity scan visualization each frame
This example builds on the previous sections and replaces the placeholder ScanVisualization.cs file with a runnable version that renders the striped overlay while the Recording sample is scanning.
Use Assets/Samples/Scanning/Scenes/Recording.unity as the runnable baseline for this example.
- ⅰ. Replace the contents of ScanVisualization.cs with the following code:
using System;
using System.Collections.Generic;
using NianticSpatial.NSDK.AR.Scanning;
using NianticSpatial.NSDK.AR.Utilities;
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.RenderGraphModule;
using UnityEngine.Rendering.Universal;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;
public class ScanVisualization : MonoBehaviour
{
// The material that composites the striped visualization overlay.
[SerializeField] private Material _raycastVisualizationMaterial;
// The scanning manager that provides the raycast visualization texture.
[SerializeField] private ARScanningManager _arScanningManager;
// The camera this component renders for.
private Camera _camera;
// The AR camera manager already attached to the same camera object.
private ARCameraManager _arCameraManager;
// The delayed camera frame currently bound to the visualization material.
private Texture2D _currentCameraTexture;
// A short queue used to keep the camera image aligned with the raycast texture.
private Queue<Texture2D> _cameraTexturesQueue;
// A full-screen mesh used by the URP render pass.
private Mesh _fullScreenMesh;
// The URP render pass that draws the visualization overlay.
private ScanVisualizationRenderPass _renderPass;
private const int Delay = 2;
private void Start()
{
// Reuse the camera and ARCameraManager already attached to this object.
_camera = GetComponent<Camera>();
_arCameraManager = GetComponent<ARCameraManager>();
if (_camera == null || _arCameraManager == null || _arScanningManager == null || _raycastVisualizationMaterial == null)
{
Debug.LogError("Assign all required components and serialized fields.");
return;
}
// Prepare the render-pass state, then listen for new camera frames.
_cameraTexturesQueue = new Queue<Texture2D>();
_fullScreenMesh = CreateFullScreenMesh();
_renderPass = new ScanVisualizationRenderPass();
_arCameraManager.frameReceived += OnARCameraFrameReceived;
RenderPipelineManager.beginCameraRendering += EnqueueUniversalRenderPass;
Debug.Log("ScanVisualization: renderer setup validated.");
}
private void OnARCameraFrameReceived(ARCameraFrameEventArgs args)
{
// Only queue frames while scan visualization is active.
if (!(_arScanningManager.enabled && _arScanningManager.EnableRaycastVisualization))
{
return;
}
#if UNITY_EDITOR
// In the Editor, copy the simulated camera texture directly.
if (args.textures.Count == 0)
{
return;
}
var sourceTexture = args.textures[0];
var newTexture = new Texture2D(
sourceTexture.width,
sourceTexture.height,
sourceTexture.format,
sourceTexture.mipmapCount > 1);
Graphics.CopyTexture(sourceTexture, 0, 0, newTexture, 0, 0);
#else
// On device, convert the latest CPU camera image into an RGBA texture.
if (!_arCameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
{
return;
}
var newTexture = new Texture2D(image.width, image.height, TextureFormat.RGBA32, false);
var conversionParams = new XRCpuImage.ConversionParams(
image,
TextureFormat.RGBA32,
XRCpuImage.Transformation.None);
var rawTextureData = newTexture.GetRawTextureData<byte>();
try
{
image.Convert(conversionParams, new NativeSlice<byte>(rawTextureData));
newTexture.Apply();
}
finally
{
image.Dispose();
}
#endif
// Hand the new frame to the queue helper, which also trims old textures.
EnqueueCameraTexture(newTexture);
}
private void EnqueueUniversalRenderPass(ScriptableRenderContext context, Camera currentCamera)
{
// Enqueue the URP pass only for this camera while visualization is active.
if (currentCamera != _camera ||
!(_arScanningManager.enabled && _arScanningManager.EnableRaycastVisualization) ||
_cameraTexturesQueue.Count <= Delay)
{
return;
}
UpdateVisualizationMaterial();
_renderPass.Material = _raycastVisualizationMaterial;
_renderPass.Mesh = _fullScreenMesh;
currentCamera.GetUniversalAdditionalCameraData().scriptableRenderer.EnqueuePass(_renderPass);
}
private void UpdateVisualizationMaterial()
{
// Advance to the delayed camera frame that should match the latest raycast texture.
if (_currentCameraTexture != null)
{
Destroy(_currentCameraTexture);
}
_currentCameraTexture = _cameraTexturesQueue.Dequeue();
_raycastVisualizationMaterial.SetTexture("_MainTex", _currentCameraTexture);
_raycastVisualizationMaterial.SetTexture("_ColorTex", _arScanningManager.GetRaycastColorTexture());
_raycastVisualizationMaterial.SetInt("_ScreenOrientation", (int)XRDisplayContext.GetScreenOrientation());
_raycastVisualizationMaterial.SetTexture("_ArCameraTex", _currentCameraTexture);
}
private void EnqueueCameraTexture(Texture2D newTexture)
{
// Keep the queue short so old camera textures do not accumulate.
while (_cameraTexturesQueue.Count > Delay + 1)
{
Destroy(_cameraTexturesQueue.Dequeue());
}
_cameraTexturesQueue.Enqueue(newTexture);
}
private static Mesh CreateFullScreenMesh()
{
// Create a full-screen quad for the URP render pass to draw.
var mesh = new Mesh
{
vertices = new[]
{
new Vector3(0f, 0f, -1f),
new Vector3(0f, 1f, -1f),
new Vector3(1f, 1f, -1f),
new Vector3(1f, 0f, -1f),
},
uv = new[]
{
new Vector2(0f, 0f),
new Vector2(0f, 1f),
new Vector2(1f, 1f),
new Vector2(1f, 0f),
},
triangles = new[] {0, 1, 2, 0, 2, 3}
};
mesh.UploadMeshData(false);
return mesh;
}
private void OnDestroy()
{
if (_arCameraManager != null)
{
_arCameraManager.frameReceived -= OnARCameraFrameReceived;
}
RenderPipelineManager.beginCameraRendering -= EnqueueUniversalRenderPass;
if (_currentCameraTexture != null)
{
Destroy(_currentCameraTexture);
}
if (_fullScreenMesh != null)
{
Destroy(_fullScreenMesh);
}
while (_cameraTexturesQueue != null && _cameraTexturesQueue.Count > 0)
{
Destroy(_cameraTexturesQueue.Dequeue());
}
}
private sealed class ScanVisualizationRenderPass : ScriptableRenderPass
{
public Material Material { get; set; }
public Mesh Mesh { get; set; }
private static readonly Matrix4x4 s_projection = Matrix4x4.Ortho(0f, 1f, 0f, 1f, 0f, 1f);
private class PassData
{
public UniversalCameraData CameraData;
public UniversalResourceData ResourceData;
public Material Material;
public Mesh Mesh;
}
public ScanVisualizationRenderPass()
{
// Draw after the camera background and scene geometry are already visible.
profilingSampler = new ProfilingSampler("Scan Visualization");
renderPassEvent = RenderPassEvent.AfterRenderingTransparents;
}
public override void RecordRenderGraph(RenderGraph renderGraph, ContextContainer frameData)
{
// Unity 6 uses Render Graph by default, so record the full-screen overlay here.
using var builder =
renderGraph.AddRasterRenderPass<PassData>("Scan Visualization", out var passData, profilingSampler);
passData.CameraData = frameData.Get<UniversalCameraData>();
passData.ResourceData = frameData.Get<UniversalResourceData>();
passData.Material = Material;
passData.Mesh = Mesh;
builder.SetRenderAttachment(passData.ResourceData.activeColorTexture, 0);
builder.SetRenderFunc((PassData data, RasterGraphContext renderContext) =>
{
var cmd = renderContext.cmd;
cmd.SetViewProjectionMatrices(Matrix4x4.identity, s_projection);
cmd.DrawMesh(data.Mesh, Matrix4x4.identity, data.Material);
cmd.SetViewProjectionMatrices(
data.CameraData.camera.worldToCameraMatrix,
data.CameraData.camera.projectionMatrix);
});
}
[Obsolete("This rendering path is for compatibility mode only (when Render Graph is disabled). Use Render Graph API instead.", false)]
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
// Keep a compatibility path for projects where Render Graph is disabled.
var cmd = CommandBufferPool.Get("Scan Visualization");
using (new ProfilingScope(cmd, profilingSampler))
{
cmd.SetViewProjectionMatrices(Matrix4x4.identity, s_projection);
cmd.DrawMesh(Mesh, Matrix4x4.identity, Material);
cmd.SetViewProjectionMatrices(
renderingData.cameraData.camera.worldToCameraMatrix,
renderingData.cameraData.camera.projectionMatrix);
}
context.ExecuteCommandBuffer(cmd);
CommandBufferPool.Release(cmd);
}
}
}
- ⅱ. Validate the runnable example:
- Run the Recording scene on your device or in the Unity Editor with Playback enabled.
- Confirm that "ScanVisualization: renderer setup validated." appears in the Unity Console or in adb logcat -s Unity | grep "ScanVisualization".
- Wait a few seconds for the visualization to begin updating.
- Confirm that diagonal stripes appear over the camera feed.
- Move the device to scan more of the scene, then confirm that covered areas transition from striped to full color.
- Tap Stop Scan, then confirm that the visualization disappears when scanning ends.
Troubleshooting
Visualization doesn't appear
- Confirm that scanning is active in the Recording sample.
- Ensure that Enable Raycast Visualization is enabled on the AR Scanning Manager component on AR Session.
- Verify that the ScanVisualization script is attached to the camera object used by your XR Origin and that all serialized fields are assigned.
- Check that ScanningStripesMaterial uses the Unlit > NsdkScanningStripes shader.
- Confirm that "ScanVisualization: renderer setup validated." appears before you tap Start Scan.
- If your project uses a different Unity or URP version than the sample, verify that the render-pass APIs used in ScanVisualization.cs are available and compatible.
- Verify that depth is available. If LiDAR is unavailable, enable Record Estimated Depth. If LiDAR is available, add AR Occlusion Manager to the camera object used by your XR Origin.
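If none of these checks reveal the problem, a temporary diagnostic like the following can confirm the state the renderer requires before it draws (the method name LogVisualizationState is illustrative):
private void LogVisualizationState()
{
    // Log the three conditions the renderer needs before drawing: scanning
    // enabled, raycast visualization on, and a material assigned.
    Debug.Log($"ScanVisualization: manager enabled={_arScanningManager.enabled}, " +
        $"raycast viz={_arScanningManager.EnableRaycastVisualization}, " +
        $"material assigned={_raycastVisualizationMaterial != null}");
}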
Overlay doesn't update
- If the sample's Start Scan button responds but the overlay stays blank, verify that RecordingDemo.StartScanning() is enabling the AR Scanning Manager referenced from AR Session and that Enable Raycast Visualization is still turned on in that component.
- Check that OnARCameraFrameReceived() is receiving frames while scanning is active and that _cameraTexturesQueue grows beyond the two-frame delay before EnqueueUniversalRenderPass() tries to schedule the URP pass; a diagnostic sketch follows this list.
- If the overlay appears misaligned, confirm that UpdateVisualizationMaterial() is still assigning the queued camera texture to both _MainTex and _ArCameraTex before the render pass is enqueued.
- If the overlay flashes briefly and disappears, make sure RecordingDemo.StopScanning() is not being called from another UI path and that the queue is not being emptied while capture is still active.
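To inspect the frame queue directly, you can temporarily log its size at the end of OnARCameraFrameReceived(). The URP pass is only enqueued once the queue grows past the two-frame delay, so a count that never exceeds Delay means frames are not arriving or are being discarded:
// Temporary diagnostic at the end of OnARCameraFrameReceived(),
// after the new frame has been enqueued.
Debug.Log($"ScanVisualization: queued frames = {_cameraTexturesQueue.Count} (delay = {Delay})");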
Performance issues
- Scan visualization processes each recorded camera frame, so lower-end devices may show reduced performance while scanning.
- Lower the recording framerate on AR Scanning Manager if needed, as shown in the sketch after this list.
- Using a shallower depth range reduces compute cost.
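For the framerate adjustment, ScanRecordingFramerate is the same property RecordingDemo sets from its slider before enabling the manager. A minimal sketch, with an illustrative value of 15 frames per second:
// Lower the scan recording framerate before enabling ARScanningManager.
// The value 15 is illustrative; tune it for your target devices.
_arScanningManager.ScanRecordingFramerate = 15;
_arScanningManager.enabled = true;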