Prelude: I haven't found a Unity event I can listen to for screen size / orientation changes. I'm reluctantly polling the AR camera screen size in Update(), but I'd much rather have a lightweight, only-as-needed mechanism. Any recommendations? [As in "I do this and it works well", not "try <random thing>".] I'm not currently using Unity UI components, so adding one and listening for RectTransform changes on a redundant UI element is not what I'd call a lightweight approach. (I've already seen that forum thread.)

Problem (ARFoundation devs?): In an app I'm currently working on, users tap anywhere on the screen to confirm they want to place an object where a placeholder is displayed at the centre of the viewport. The placeholder is only shown if an ARFoundation raycast through the centre of the viewport hits a tracked plane. The planes themselves are not displayed, for usability / aesthetic reasons.

Mobile devices get rotated and may support window resizing (e.g. Android split-screen), so getting the screen size and midpoint once at startup and always reusing that midpoint is not robust. Such changes are relatively rare, but they do occur.

I was going to switch to the new ARRaycastManager.AddRaycast() method, but it only supports screen coordinates (pixels), not viewport coordinates ([0.0, 1.0]). The viewport midpoint I raycast through is always (0.5, 0.5), regardless of the screen configuration, yet I still have to poll the screen size because AddRaycast() needs the midpoint in screen coordinates. A viewport version of AddRaycast() would eliminate that pesky polling.

Alternatively, I could poll less often (say every 10-60 ms), but that's still a constant, unnecessary drain in my graphics-heavy app, and if I don't respond at close to the Update() rate it's probably going to feel laggy. Or I could call Raycast(Ray, List<ARRaycastHit>, TrackableType) in Update(), but then I'd still be doing the work of initialising a Ray every frame and passing it to the ARRaycastManager.
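For reference, the workaround I'm describing boils down to something like the sketch below: poll the screen size with cheap comparisons, and only tear down / re-add the persistent raycast when the size actually changes. This assumes AddRaycast(Vector2, float) and RemoveRaycast(ARRaycast) on ARRaycastManager; the class name and the estimatedDistance default are just illustrative.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: cache the screen size and rebuild the persistent centre raycast
// only when the size changes, rather than raycasting every frame.
public class CenterRaycastKeeper : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;
    [SerializeField] float estimatedDistance = 1f; // guess at the "second parameter"

    int lastWidth, lastHeight;
    ARRaycast centerRaycast;

    void Update()
    {
        // Cheap int comparisons every frame; AddRaycast runs only on change.
        if (Screen.width == lastWidth && Screen.height == lastHeight)
            return;

        lastWidth = Screen.width;
        lastHeight = Screen.height;

        if (centerRaycast != null)
            raycastManager.RemoveRaycast(centerRaycast);

        // Viewport (0.5, 0.5) converted to the screen coordinates AddRaycast needs.
        var midpoint = new Vector2(lastWidth * 0.5f, lastHeight * 0.5f);
        centerRaycast = raycastManager.AddRaycast(midpoint, estimatedDistance);
    }
}
```

Even this "least bad" version still burns an Update() call per frame just to notice a change that happens maybe once a session.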
It would be much more efficient for the Ray to be passed in only when the screen size / rotation changes (rarely), not every frame or polling interval. Efficiency is clearly desirable on mobile, particularly for battery-depleting, CPU/GPU-throttling AR apps.

The docs for AddRaycast() say: "it is similar to repeating the same raycast query each frame, but platforms with direct support for this feature can provide better results." Adding a viewport-based AddRaycast() could bring that efficiency to other platforms too. I'm guessing AddRaycast() may be tailored for HoloLens and/or Magic Leap, where screen orientation / size presumably don't change and fixed-position reticles may have OS / hardware support (tip-off: the second AddRaycast() parameter?). Maybe overload, or drop, the second parameter for the viewport version of AddRaycast()?
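Concretely, the overload I'm wishing for would look something like this. To be clear, this is a hypothetical API, not anything in ARFoundation today; the method name and internals are mine:

```csharp
// Hypothetical addition to ARRaycastManager -- NOT a real ARFoundation API.
// A persistent raycast keyed on viewport coordinates would let the manager
// (or subsystem) recompute the screen-space point itself on the resize /
// orientation events the platform already receives, so user code never polls.
public ARRaycast AddViewportRaycast(Vector2 viewportPoint, // [0,1] x [0,1]
                                    float estimatedDistance)
{
    // Internally: convert once now, and re-convert whenever the screen
    // configuration changes, e.g.
    //   var screenPoint = new Vector2(viewportPoint.x * Screen.width,
    //                                 viewportPoint.y * Screen.height);
    // then forward to the existing screen-space AddRaycast path.
    throw new System.NotImplementedException();
}
```

My app's call site would then be a one-liner at startup, with no per-frame work: `raycastManager.AddViewportRaycast(new Vector2(0.5f, 0.5f), 1f);`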