Optimize for Amazing

OpenVINO™ DevCon is now available for streaming, so there’s still time to learn about all the new features and capabilities in OpenVINO™ 2022.1. Watch all the sessions online.

See what’s new in OpenVINO™ 2022.1

OpenVINO™ 2022.1 delivers new speech capabilities, automatic hardware optimizations, and a revamped API. Get hands-on experience with this major release at OpenVINO™ DevCon. Attendance is free and open to developers of all levels.


Top five upgrades: OpenVINO 2022.1

  • Optimize during or post-training for maximum performance and accuracy
  • Boost NLP/BERT performance with dynamic shapes
  • Automate inference optimization and parallelism
  • Import models from frameworks more easily
  • Automatically detect and balance inference workloads across CPUs, GPUs, and accelerators

Learn more

Register now

OpenVINO™ DevCon
Watch all the OpenVINO DevCon sessions online

On-Demand Sessions

Welcome to OpenVINO DevCon!

Presented by Intel
See what’s new in OpenVINO 2022.1. We explore a bit of OpenVINO history and share what it took to create the new and improved OpenVINO.

OpenVINO 2022.1: New Feature Overview

Presented by Intel
OpenVINO 2022.1 is our biggest update yet, and it’s jam-packed with new features. It includes a brand-new API and new workflows for TensorFlow and PyTorch models that make it easier to get maximum inference performance from Intel CPUs, GPUs, and accelerators.

Walk with us through the newly streamlined OpenVINO installation procedure for Python and C++, then dive into the new capabilities. By the session’s end, you’ll know how to put the new features in OpenVINO 2022.1 to work in your AI inference implementations.
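
For orientation, here’s a minimal sketch of the revamped 2022.1 Python API (openvino.runtime); the model path and input shape are placeholders rather than anything from the session.

    # Minimal OpenVINO 2022.1 inference sketch; "model.xml" and the 1x3x224x224
    # input are hypothetical stand-ins for a real model and preprocessed data.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")
    compiled_model = core.compile_model(model, device_name="CPU")

    # Run one synchronous inference and fetch the first output.
    input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
    results = compiled_model([input_tensor])
    output = results[compiled_model.output(0)]
    print(output.shape)

The same Core object can also read ONNX models directly, which is part of what makes importing models from other frameworks easier in 2022.1.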

Workshop #1: Natural Language Processing and Dynamic Shapes

Presented by Intel
Natural Language Processing (NLP) and audio processing get major support in OpenVINO 2022.1. In this demo and workshop, we walk through a sample app from our open-source library of Jupyter Notebooks and show you how to use OpenVINO for chat, text-to-speech, and other NLP scenarios.
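
As a rough illustration of the dynamic-shape support the workshop covers, here is a hedged sketch of reshaping a BERT-style model so the sequence length can vary at runtime; the model file, input names, and token IDs are assumptions, not the workshop’s actual sample.

    # Hedged sketch: dynamic sequence length for a BERT-style model in OpenVINO 2022.1.
    # "bert.xml", the input names, and the token IDs below are hypothetical.
    import numpy as np
    from openvino.runtime import Core, Dimension, PartialShape

    core = Core()
    model = core.read_model("bert.xml")

    # Keep batch size fixed at 1; let the sequence length range from 1 to 384.
    seq_len = Dimension(1, 384)
    model.reshape({
        "input_ids": PartialShape([1, seq_len]),
        "attention_mask": PartialShape([1, seq_len]),
    })

    compiled_model = core.compile_model(model, device_name="CPU")

    # Inputs of different lengths can now be fed without padding or re-reshaping.
    tokens = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int64)  # assumes int64 token IDs
    mask = np.ones_like(tokens)
    results = compiled_model({"input_ids": tokens, "attention_mask": mask})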

Workshop #2: Auto Device Plugin and Performance Hints

Presented by Intel
In previous versions of OpenVINO, targeting and optimizing inference workloads for multiple processor types in a system—CPUs, integrated GPUs, accelerators—required specific plugins and additional application logic to configure each possible device separately.

OpenVINO 2022.1 introduces two new configuration techniques that simplify coding, improve performance, and increase application portability: the Auto Device Plugin and performance hints.

The Auto Device Plugin can automatically determine what compute resources are available for inferencing and balance inference workloads across them. With the Auto Device Plugin, applications no longer need to know their compute environment in advance.

Performance hints reverse the direction of configuration by expressing a target scenario with a single config key and letting the device configure itself in response.

In this demo, we show you how to use the Auto Device Plugin plus THROUGHPUT and LATENCY performance hints to maximize performance on mixed hardware. Then we show you how to use the new benchmark_app to measure hint performance.
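
In code, the configuration the demo describes is small; this is a hedged sketch (model path assumed) rather than the demo’s exact source.

    # Hedged sketch of the AUTO device plugin with performance hints in OpenVINO 2022.1.
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")  # hypothetical model

    # Let AUTO choose among the available devices and tune for throughput...
    throughput_model = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})

    # ...or flip the single config key to target latency-sensitive scenarios instead.
    latency_model = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})

A quick way to compare the two hints from the command line is benchmark_app, for example benchmark_app -m model.xml -d AUTO -hint throughput versus -hint latency (flags per the 2022.1 tool; check benchmark_app -h for your install).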

AI Challenge

Presented by Intel
Would you like to be a part of our future events and challenges? Join this session with Intel evangelists, product managers, and engineers from the OpenVINO team to learn about our latest projects, opportunities, and upcoming events. You can try out some of our coolest real-time demos. We tell you how developers are participating in the Google Summer of Code, preview our presentations for Computer Vision and Pattern Recognition (CVPR), and show you two OpenVINO developer activities you can participate in: the Kaggle Dev Team Adventures and the Intel® 30-Day AI Dev Challenge.

Follow the OpenVINO™ evangelists in our Kaggle Dev Team Adventures Series and practice with hands-on labs. And if you’re ready to upskill with the opportunity to win prizes, participate in the Intel® 30-Day AI Dev Challenge.

How to maximize your CPU compute power for CNN inference

Presented by Deci
CPUs are everywhere, and they can serve as more cost-effective options than GPUs for running AI-based solutions. However, with the increasing size of over-parameterized DNNs, finding models that can run efficiently on CPUs and deliver accurate results can be a challenge.

Deci.ai has generated a new set of industry-leading image classification models, dubbed DeciNets, using Intel’s OpenVINO and Deci’s AutoNAC (Auto Neural Architecture Construction) technology. DeciNets cut the gap between a model’s inference performance on a GPU versus a CPU in half, without sacrificing the model’s accuracy.

In this talk you will learn how to use multiple tools, including OpenVINO and AutoNAC, to improve deep learning model performance and fully utilize CPU compute power.
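
Not Deci’s tooling, but as a general illustration of squeezing more out of a CPU with OpenVINO 2022.1, here is a hedged sketch that combines the THROUGHPUT hint with asynchronous inference requests; the model and input shape are placeholders.

    # Hedged sketch: saturate CPU inference with the THROUGHPUT hint and AsyncInferQueue.
    # "classifier.xml" and the 1x3x224x224 input are hypothetical placeholders.
    import numpy as np
    from openvino.runtime import AsyncInferQueue, Core

    core = Core()
    model = core.read_model("classifier.xml")
    compiled_model = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

    # With no explicit size, the queue uses the plugin's optimal number of requests.
    infer_queue = AsyncInferQueue(compiled_model)
    results = []
    infer_queue.set_callback(
        lambda request, userdata: results.append(request.get_output_tensor(0).data.copy())
    )

    # Submit work without waiting; requests run in parallel across CPU streams.
    for _ in range(32):
        image = np.random.rand(1, 3, 224, 224).astype(np.float32)
        infer_queue.start_async({0: image})

    infer_queue.wait_all()
    print(len(results), "inferences completed")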

Deploying Spatial Intelligence Across Thousands of Locations with OpenVINO

Presented by Pathr.ai
In the age of AI, retailers and other industries with physical footprints are turning to spatial intelligence to understand how people utilize space and to apply spatial insights to their business objectives.

In this session, Pathr.ai Founder/CEO George Shaw will discuss spatial intelligence and detail how the OpenVINO-powered turnkey Pathr.ai solution is helping companies achieve financial gains.

Shaw unpacks how Pathr.ai developers are using new features in OpenVINO 2022.1, like the AUTO plugin, which automatically load-balances and implements dynamic inference parallelization across CPU, GPU, and VPU targets. With the AUTO plugin, Pathr.ai can deploy more efficiently and scale AI across thousands of locations at the edge.

Optimize Speech Recognition with OpenVINO on Red Hat OpenShift Data Science

Presented by Red Hat
Data science and machine learning are helping create insights, drive business decisions, and generate income across a spectrum of industries. However, developing and deploying ML workflows can be challenging, especially when it comes to performance.

Learn how we are addressing these challenges with Red Hat OpenShift Data Science and OpenVINO. Red Hat OpenShift Data Science is a hosted platform for data science that makes it easier to take advantage of OpenVINO's performance optimizations.

In this talk you will learn how to optimize an Automatic Speech Recognition (ASR) model using OpenVINO developer tools and deploy it in OpenShift using OpenVINO Model Server.
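
To give a feel for what the deployed endpoint looks like from the application side, here is a hedged sketch of a client call to OpenVINO Model Server’s TensorFlow-Serving-compatible REST API; the host, port, model name, and audio shape are all assumptions for illustration.

    # Hedged sketch: REST request to a hypothetical "asr" model served by
    # OpenVINO Model Server. Endpoint, model name, and input layout are assumed.
    import numpy as np
    import requests

    # One second of 16 kHz dummy audio standing in for a real utterance.
    audio = np.random.rand(1, 16000).astype(np.float32)

    response = requests.post(
        "http://ovms.example.local:8080/v1/models/asr:predict",
        json={"inputs": audio.tolist()},
    )
    response.raise_for_status()

    # The raw model outputs come back as JSON; decoding them to text depends on the ASR model.
    print(response.json()["outputs"])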

Real-time video analytics with Irisity for transportation

Presented by Irisity
This demo will illustrate how OpenVINO, Irisity, and Dell Technologies’ server platforms deliver complex real-time video analytics from multiple camera inputs. We will also explore how to link additional workflows—such as video management and access control—to create a highly scalable “ingest once, allow insights from many” infrastructure.

Powering machine learning in Game Development with Intel OpenVINO

Presented by Procedural Worlds
Until now, machine learning in game development has been reserved only for the biggest AAA studios. The Intel Game Dev AI Toolkit (powered by Intel OpenVINO) brings Intel AI to game developers at every level. Join this session to see the Game Dev AI Toolkit in action and learn how Unity tools like Procedural Worlds’ Gaia can be supercharged with machine learning.

Winning in the Marketplace

Presented by Intel
Independent software vendors (ISVs) in edge AI are winning with Intel! We show you the benefits of joining the Intel® Partner Alliance and becoming part of the Intel ecosystem, including matchmaking and go-to-market support.

AMA: Ask us anything about OpenVINO tech and roadmap

Presented by Intel
Listen as OpenVINO experts, engineers, and evangelists talk about OpenVINO, the features in 2022.1, and the product directions we’re investing in for the future.

Closing Remarks

Presented by Intel
OpenVINO DevCon closing remarks recap the latest features you can take advantage of to accelerate your deep learning models with Intel® architecture.

Do something wonderful.