Fri, Apr 17, 2026 · 09:00 AM – 06:00 PM IST
Sat, Apr 18, 2026 · 08:45 AM – 05:45 PM IST
Priyanshu Verma
@priyanshuv
Submitted Mar 13, 2026
Deploying ML models to edge devices often means integrating multiple platform-specific runtimes such as CoreML, OpenVINO, DirectML, and lightweight mobile engines. This talk explores how Rust can serve as a portable systems layer for building and shipping cross-platform ML inference while keeping performance overhead low.
Talk Overview
Rust as the portability layer
Rust’s FFI story in practice
Minimizing overhead
The realities of raw FFI
Drivers, runtimes, and platform quirks
This talk shares practical lessons from building and shipping a real cross-platform inference system in Rust, focusing on the systems-engineering challenges behind portable ML deployment.
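As a small flavor of the raw-FFI territory the overview mentions: platform inference runtimes such as CoreML and OpenVINO ultimately expose C-callable entry points, and binding them from Rust uses the same `extern "C"` mechanism shown below. This sketch binds a C standard-library function (`strlen`) rather than a real inference runtime, purely to keep the example self-contained; the ownership and safety considerations are the same ones the talk covers.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declare a foreign C function. Real inference runtimes expose entry
// points the same way (create session, run, free), just with more
// complex ownership rules; `strlen` stands in here so the example links
// against libc and runs anywhere.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    // CString guarantees a NUL-terminated buffer with no interior NULs,
    // which is the contract the C side expects.
    let input = CString::new("edge inference").expect("no interior NUL");

    // SAFETY: `input` stays alive for the duration of the call, and the
    // pointer is valid and NUL-terminated.
    let len = unsafe { strlen(input.as_ptr()) };

    println!("byte length reported by C: {}", len);
}
```

The `unsafe` block is the boundary where Rust's guarantees stop and manual reasoning begins; in practice, shipping code wraps each such call in a safe Rust API so the rest of the codebase never touches raw pointers.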