Linaro Connect San Diego 2019 has ended
Linaro Connect resources will be available here during and after Connect!

Booking Private Meetings
Private meetings are booked through san19.skedda.com and your personal calendar (e.g. Google Calendar). View detailed instructions here.

For Speakers
Please add your presentation to your session by attaching a PDF file (under Manage Session > + Add Presentation). We will export these presentations daily and feature them on the connect.linaro.org website. Videos will be uploaded as we receive them (if the video of your session cannot be published, please let us know immediately by emailing connect@linaro.org).

Dave’s Puzzle - linaro.co/san19puzzle


Validation and CI
Monday, September 23

2:00pm PDT

SAN19-103 Linux Kernel Functional Testing (LKFT) 2.0
The LKFT project exists to serve kernel developers and the kernel development community.

In the roughly two years that LKFT has existed, we have run over 60 million tests and sent hundreds of bug reports and patches upstream.

Based on what has worked well for us, and what hasn't, we are in the middle of a large refactor of our build and boot processes. The result will allow us to build and test more kernels, more quickly, with much more variety and granularity.

Looking forward, there will be a focus on custom reporting and analytics that will allow us to get the precise data that kernel developers and communities need to make their jobs easier.


Dan Rue

Principal Tech Lead, Linaro
Dan delights developers and users by focusing on good tools and great automation. You can usually find him writing documentation, tests, and yaml. So much yaml.

Monday September 23, 2019 2:00pm - 2:25pm PDT
Pacific Room (Keynote)

3:00pm PDT

SAN19-112 Intelligent Linux test suite
Every Linux release is a collaboration of many developers, maintainers and subsystems, containing a large number of patches, and the community tries its best to ensure stability.
But because changes can impact many areas, subsystems, use cases and architectures, guaranteeing a stable release is difficult, if not impossible. Even detecting regressions is not straightforward.

Any organization considering an up-rev of its Linux kernel takes on risk. Despite the best work of the community of testers, maintainers and developers, how many bugs will be introduced in the rebased kernel, and how severe they will be, is not an easy question to answer.
This risk can be reduced with a good number of test cases: tests specific to the organization, covering the architectures it uses and the use cases it supports, plus various test cases inherited from open source. To reach a very low risk level, hundreds or thousands of test cases would need to be run, and executing them can take anywhere from hours to days.
Another problem is that these test cases are static and never evolve with past learning and experience.

We are proposing an AI-based tool that provides a set of test cases (a subset of the hundreds or thousands available) intelligently picked based on past learning per driver, subsystem or area. This past learning is built from the results of test cases run in previous releases. The subset can then be run to check the stability of a Linux release and the risk level of an up-rev. This is a huge time saving, and at the same time it identifies problem areas more efficiently.
The tool would also publish the list of test cases run and their pass/fail results.

Any organization can then look through the test report, check the failed test cases, assess the severity of the failures, and decide whether to fix them or wait for a new release.
The tool can be run on every Linux release to report a stability level. Beyond stability, it can also identify which areas or subsystems are stable and which are very dynamic in nature, helping maintainers focus.

The aim is to host this tool on an open web portal that is easily accessible to the community. The community can also contribute more test cases to improve the tool's learning, and hence the quality of the selected subsets.
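The abstract above does not include code; as a rough illustration of the selection idea, the following sketch picks the tests most likely to catch a regression, ranked by their historical failure rate in the subsystems a release touches. All names, data and the ranking heuristic are hypothetical, not the proposed tool's actual design:

```python
from collections import defaultdict

# Hypothetical historical results: (test_name, subsystem, passed) per release.
HISTORY = [
    ("usb_enumerate", "usb", False),
    ("usb_enumerate", "usb", True),
    ("sata_smoke", "sata", True),
    ("sata_smoke", "sata", True),
    ("eth_ping", "networking", False),
    ("eth_ping", "networking", False),
]

def failure_rates(history):
    """Fraction of past runs in which each test failed."""
    runs, fails = defaultdict(int), defaultdict(int)
    for name, _subsystem, passed in history:
        runs[name] += 1
        if not passed:
            fails[name] += 1
    return {name: fails[name] / runs[name] for name in runs}

def select_tests(history, changed_subsystems, budget=2):
    """Pick up to `budget` tests covering the changed subsystems,
    most failure-prone first."""
    rates = failure_rates(history)
    subsystem_of = {name: sub for name, sub, _ in history}
    candidates = [n for n in rates if subsystem_of[n] in changed_subsystems]
    return sorted(candidates, key=lambda n: rates[n], reverse=True)[:budget]

print(select_tests(HISTORY, {"usb", "networking"}))
# → ['eth_ping', 'usb_enumerate']
```

A real tool would replace the failure-rate heuristic with a model trained on per-release results, but the interface — changed areas in, a small ranked test subset out — is the same.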


Poonam Aggrwal

Technical Software manager, NXP Semiconductor Noida
I am a Computer Science Engineering graduate with almost 18 years of continuous experience in embedded systems, Linux BSP, Unix, operating system internals, device drivers, boot loaders, Flash, DDR, Ethernet, SATA, USB, wireless, networking, etc., and open source software. Very good...

Prabhakar Kushwaha

Platform Software Architect, NXP Semiconductor Ltd
I am a computer science and engineering graduate with ~13 years of continuous experience in Linux/RTOS-based embedded software/firmware on multi-core technologies, with very good exposure to Linux, FreeRTOS, U-Boot, device drivers, boot loaders, flash technologies, etc. I have...

Monday September 23, 2019 3:00pm - 3:25pm PDT
Pacific Room (Keynote)
Tuesday, September 24

4:00pm PDT

Kernel Validation Office Hours in Developer Rooms (Garden Room)
Open Hours @Developer Rooms is a time slot during which the tech lead and the team are scheduled to be present in the room.

Teams Participating:

Kernel Validation in Royal Room II

Tuesday September 24, 2019 4:00pm - 5:00pm PDT
Developers Rooms

4:00pm PDT

Lab & System Software (LSS) Office Hours in Developer Rooms (Garden Room)
Open Hours @Developer Rooms is a time slot during which the tech lead and the team are scheduled to be present in the room.

Teams Participating:

Lab & System Software (LSS)

Tuesday September 24, 2019 4:00pm - 5:00pm PDT
Developers Rooms
Thursday, September 26

2:00pm PDT

SAN19-422 Advanced testing in python
Testing a large Python application, like LAVA, can sometimes be tricky.

The first part of the talk will focus on classical Python testing features like pytest and mocking.
The second part of the talk will concentrate on some specific tools that were developed to test LAVA itself (meta-lava, DummySYS, ...). These tools and the ideas behind them could also be used to test other systems.
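As a flavour of the "classical" pytest-and-mocking part, here is a minimal sketch in the style the talk describes. The `submit_job` function and its connection object are invented for illustration, not LAVA's actual API:

```python
from unittest import mock

def submit_job(connection, definition):
    """Toy stand-in for a scheduler call: submits a job definition
    over some connection object and returns the new job id."""
    return connection.scheduler.submit_job(definition)

def test_submit_job_returns_id():
    # Mock the remote connection so the test needs no live server.
    connection = mock.Mock()
    connection.scheduler.submit_job.return_value = 1234

    assert submit_job(connection, "device_type: qemu") == 1234
    connection.scheduler.submit_job.assert_called_once_with("device_type: qemu")
```

Run with `pytest`; the mock both supplies a canned return value and records that the call happened exactly once with the expected argument.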


Remi Duraffort

Senior Software Engineer, Linaro
I'm a senior software engineer, working for Linaro. I've been contributing to OSS since 2007, when I started working on the VLC media player at university. I'm now a core developer and maintainer of LAVA, a widely adopted framework to test software (bootloader, kernel, user space) on real...

Thursday September 26, 2019 2:00pm - 2:25pm PDT
Sunset IV (Session 2)
