Edge AI in Action: Mastering On-Device Inference
Image credit: CVPR 2026

Edge AI deploys artificial intelligence models directly on devices such as smartphones, cameras, sensors, drones, and wearables, allowing them to perform inference locally without relying on the cloud. This approach delivers key advantages, including lower latency, improved privacy, faster responsiveness, and greater energy efficiency.
However, running AI models on edge devices requires specialized tools for optimizing model performance, efficiency, and latency. While general-purpose frameworks offer broad compatibility, unlocking the full potential of hardware accelerators, especially those from Qualcomm and NVIDIA, requires a deeper understanding of platform-specific SDKs and engines.
In this CVPR 2026 tutorial, we will present a hands-on, practice-oriented guide to designing, optimizing, and deploying deep learning models on two of the most prominent edge AI platforms: Qualcomm Snapdragon and NVIDIA Jetson. With a focus on computer vision, we will explore real-world applications such as object detection and large language models.
We will showcase leading tools and frameworks, including ONNX, TensorRT, Qualcomm SNPE, the Qualcomm AI Runtime SDK, and NVIDIA's AI stack, across diverse hardware platforms such as Jabra PanaCast cameras, Qualcomm development boards, Android smartphones, and the NVIDIA Jetson AGX Thor. Participants will gain practical insights into the full edge AI pipeline, from model design to real-time deployment.
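As a taste of the deployment workflow covered in the tutorial, the commands below sketch one common path: starting from an exported ONNX model, building a TensorRT engine for a Jetson device and converting the same model to a DLC file for Snapdragon via SNPE. The filenames are placeholders, and exact flags can vary across SDK versions; treat this as an illustrative fragment, not a definitive recipe.

```shell
# Build an optimized TensorRT engine from an ONNX model on an NVIDIA Jetson.
# model.onnx is a placeholder filename; --fp16 enables half-precision inference.
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16

# Convert the same ONNX model to a DLC file for Qualcomm SNPE.
# Requires the SNPE SDK to be installed and on PATH; flags may differ by version.
snpe-onnx-to-dlc --input_network model.onnx --output_path model.dlc
```

Both tools are hardware- and SDK-specific, so the commands are meant to be run on (or for) the corresponding target platform rather than a generic workstation.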
Colorado Convention Center
700 14th St, Denver, CO 80202
