Linaro Connect San Diego 2019 has ended
Linaro Connect resources will be available here during and after Connect!

Booking Private Meetings
Private meetings are booked through san19.skedda.com and your personal calendar (e.g. Google Calendar). View detailed instructions here.

For Speakers
Please add your presentation to your session by attaching a PDF file (under Manage Session > + Add Presentation). We will export these presentations daily and feature them on the connect.linaro.org website here. Videos will be uploaded as we receive them (if the video of your session cannot be published, please let us know immediately by emailing connect@linaro.org).

Dave’s Puzzle - linaro.co/san19puzzle


AI/Machine Learning
Tuesday, September 24


SAN19-208 Arm NN - New features in 19.08 release
This presentation will provide details of the new features that have been added to Arm NN in the 19.08 release.

These features include:
- Dynamic Backend Loading
- Android Q operators
- External Profiling support (Phase 1)


Sadik Armagan

Software Engineer, Arm
Sadik Armagan is a Staff Software Engineer at Arm, working on the Arm NN software team in the Machine Learning group, responsible for developing, maintaining, and testing new and existing features in the Arm NN SDK. The Arm NN SDK is a set of open-source Linux software tools... Read More →

Tuesday September 24, 2019 11:00am - 11:25am
Sunset 3 (Session 3)


SAN19-211 ONNX & ONNX Runtime
Microsoft and a community of partners created ONNX as an open standard for representing machine learning models. Models from many frameworks including TensorFlow, PyTorch, SciKit-Learn, Keras, Chainer, MXNet, and MATLAB can be exported or converted to the standard ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices.

ONNX Runtime is a high-performance inference engine for deploying ONNX models to production. It's optimized for both cloud and edge and works on Linux, Windows, and Mac. Written in C++, it also has C, Python, and C# APIs. ONNX Runtime supports the full ONNX-ML specification and also integrates with accelerators on different hardware, such as TensorRT on NVIDIA GPUs.

ONNX Runtime is used in high-scale Microsoft services such as Bing, Office, and Cognitive Services. Performance gains depend on a number of factors, but these Microsoft services have seen an average 2x performance gain on CPU. ONNX Runtime is also used as part of Windows ML on hundreds of millions of devices. You can use the runtime with Azure Machine Learning services. By using ONNX Runtime, you benefit from extensive production-grade optimizations, testing, and ongoing improvements.


Weixing Zhang

Senior Software Engineer, Microsoft
Weixing Zhang is a Senior Software Engineer working in AI Framework Architecture team at Microsoft. His focus is optimization of AI framework, code generation and training in ONNX Runtime.

Tuesday September 24, 2019 11:30am - 11:55am
Sunset 3 (Session 3)


SAN19-215 AI Benchmarks and IoT
There are several mobile and server AI benchmarks in use today and some new ones on the horizon. Which of these or others are applicable to IoT use cases? How do you meaningfully compare AI performance across the wide range of IoT HW with widely varying cost, memory, power and thermal constraints, and accuracy tradeoffs for quantized models vs non-quantized models? This talk will discuss these topics and some of the possible ways to address the issues.
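As a toy illustration (not from the talk) of the quantized-vs-non-quantized accuracy tradeoff the abstract raises, the snippet below applies a simple affine uint8 quantization to a float32 tensor and measures the reconstruction error; the scheme, function names, and data are invented for the example:

```python
import numpy as np

def quantize_uint8(x):
    # Affine quantization: map [min, max] onto the 256 uint8 levels.
    scale = (x.max() - x.min()) / 255.0
    zero_point = np.round(-x.min() / scale).astype(np.int32)
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original float values.
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-1.0, 1.0, 11).astype(np.float32)
q, scale, zero_point = quantize_uint8(x)
x_hat = dequantize(q, scale, zero_point)

# The rounding error is bounded by roughly half a quantization step;
# benchmarks must weigh this accuracy loss against the 4x memory saving
# (uint8 vs float32) and the speedup on integer-friendly IoT hardware.
max_err = np.abs(x - x_hat).max()
print(max_err)  # small, on the order of scale/2
```

A benchmark that reports only throughput would miss this error term, which is why the talk's question of meaningful cross-device comparison matters.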


Mark Charlebois

Director Engineering, Qualcomm Technologies Inc
Mark is presently in QCT at Qualcomm Technologies Inc (QTI), working on a Deep Learning framework for Qualcomm SoCs and as an open source software strategist. He has represented QTI on the Linux Foundation board, and served on the Dronecode board and Core Infrastructure Initiative steering... Read More →

Tuesday September 24, 2019 12:00pm - 12:25pm
Sunset 3 (Session 3)


SAN19-218 Inference Engine Deployment on MCUs or Application Processors
This session will describe how to apply Arm NN, CMSIS-NN, and GLOW to translate neural networks to inference engines running on MCUs or Application Processors.


Markus Levy

Director of ML Technologies, NXP Semiconductor
Markus Levy joined NXP in 2017 as the Director of AI and Machine Learning Technologies. In this position, he is primarily focused on the strategy and marketing of AI and machine learning capabilities for NXP's microcontroller and i.MX product lines. Previously, Markus was chairman... Read More →

Tuesday September 24, 2019 12:30pm - 12:55pm
Sunset 3 (Session 3)
Wednesday, September 25


SAN19-304 Creating Deep Learning Infrastructure for the ARM-Based Flagship Supercomputer
We will share our experience of creating a deep learning ecosystem for Fugaku, the exascale supercomputer to be deployed at the RIKEN Center for Computational Science, Japan.


Aleksandr Drozd

Research Scientist, RIKEN
Dr. Aleksandr Drozd is a Research Scientist at the RIKEN Center for Computational Science. His research interests lie at the intersection of artificial intelligence and high performance computing.

Wednesday September 25, 2019 11:00am - 11:50am
Sunset 3 (Session 3)


SAN19-311 TVM – An End to End Deep Learning Compiler Stack
AWS is a leading cloud-service provider with the goal of providing the best customer experience. Arm has a unique place in the ecosystem, spanning both server and edge devices. In this talk, I will explain how AWS SageMaker Neo accelerates deep learning on EC2 Arm-based A1 instances and Arm-based edge devices to improve customer experience. AWS SageMaker Neo uses TVM, an open-source end-to-end deep learning compiler stack.


Animesh Jain

Applied Scientist, AWS
Animesh Jain is an Applied Scientist II at Amazon Web Services with a strong research background in computer architecture and compilers. He has a doctorate from University of Michigan, Ann Arbor in Computer Science and Engineering. Animesh has published many research papers in top-tier... Read More →

Wednesday September 25, 2019 12:30pm - 12:55pm
Pacific Room (Keynote)


SAN19-313 Using Python Overlays to Experiment with Neural Networks
Python Productivity for Zynq, or PYNQ, has the ability to present programmable logic circuits as hardware libraries called overlays. These overlays are analogous to software libraries: a software engineer can select the overlay that best matches their application and access it through an application programming interface (API). Using existing community overlays, this session will examine how to experiment with neural networks using PYNQ on Ultra96.


Tom Curran

Sr. Technical Marketing Engineer, Avnet
Tom Curran works on hardware and software for a wide variety of SoC FPGA architecture projects and currently spends most of his time with the Avnet Ultra96 board creating reference designs and training materials for customers as a Sr. Technical Marketing Engineer in the Products... Read More →

Wednesday September 25, 2019 3:00pm - 3:25pm
Sunset V (Session 1)