Linaro Connect San Diego 2019 has ended
Linaro Connect resources will be available here during and after Connect!

Booking Private Meetings
Private meetings are booked through san19.skedda.com and your personal calendar (e.g. Google Calendar). View detailed instructions here.

For Speakers
Please add your presentation to your session by attaching a PDF file (under Manage Session > + Add Presentation). We will export these presentations daily and feature them on the connect.linaro.org website here. Videos will be uploaded as we receive them (if the video of your session cannot be published, please let us know immediately by emailing connect@linaro.org).

Dave’s Puzzle - linaro.co/san19puzzle


Validation and CI
Monday, September 23
 

3:00pm

SAN19-112 Intelligent Linux test suite
Every Linux release is a collaboration among many developers, maintainers and sub-systems, containing a large number of patches, and the community tries its best to ensure stability.
But because the changes can affect many areas, subsystems, use-cases and architectures, it is very difficult, and in practice impossible, to guarantee a stable release. Even detecting regressions is not straightforward.

Any organization considering up-revving its Linux is always exposed to risk. Despite the best work of the community of testers, maintainers and developers, it is hard to predict how many bugs, or how severe, will be introduced in the re-based Linux.
This risk can be reduced with a good number of test cases: tests specific to the organization, covering the architectures it uses and the use-cases it supports, plus various test cases inherited from open source. To drive the risk level very low, hundreds or thousands of test cases would need to run.
Executing that many test cases can take anywhere from hours to days.
Another problem is that test suites are static: they never evolve with past learning and experience.

We are proposing an AI-based tool that provides a set of test cases (a subset of those hundreds or thousands) intelligently picked based
on past learning about each driver, sub-system or area. This past learning is built from the results of test runs across previous releases. This subset of test cases can be run
to check the stability of Linux and the risk level of an up-rev. This saves a great deal of time while identifying problem areas more efficiently.
The tool would also publish the list of test cases run and their pass/fail results.
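To make the idea concrete, here is a minimal sketch of selecting a test subset from historical results. It uses a simple failure-rate heuristic in place of the actual learning model described above, and the test names, subsystems and history data are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical historical results: (test_name, subsystem, passed) per past release.
HISTORY = [
    ("test_usb_enumerate", "usb", False),
    ("test_usb_enumerate", "usb", True),
    ("test_sata_smart", "sata", True),
    ("test_eth_ping", "ethernet", False),
    ("test_eth_ping", "ethernet", False),
    ("test_ddr_stress", "ddr", True),
]

def select_tests(changed_subsystems, history, budget):
    """Pick up to `budget` tests, ranked by historical failure rate,
    restricted to subsystems touched by the new release."""
    runs = defaultdict(lambda: [0, 0])  # test name -> [failures, total runs]
    for name, subsys, passed in history:
        if subsys in changed_subsystems:
            runs[name][1] += 1
            if not passed:
                runs[name][0] += 1
    # Historically flaky tests first: they are the most informative to re-run.
    ranked = sorted(runs, key=lambda t: runs[t][0] / runs[t][1], reverse=True)
    return ranked[:budget]

# An up-rev touching USB and Ethernet: run the riskiest two tests first.
print(select_tests({"usb", "ethernet"}, HISTORY, budget=2))
# ['test_eth_ping', 'test_usb_enumerate']
```

A real tool would replace the failure-rate ranking with a model trained on many releases, but the interface (changed areas in, prioritized subset out) would look much the same.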

Any organization can then look through the test report, check the failed test cases, assess the severity of the failures, and decide whether to fix them or wait for a new release.
The tool can be run on every Linux release to provide a stability level. Beyond stability, it can also identify which areas or subsystems are stable and which are very dynamic, helping maintainers focus.

The aim is to host this tool on an open web portal easily accessible to the community. The community can also contribute more test cases to enhance the tool's learning and so improve the selected subset of test cases.

Speakers

Poonam Aggrwal

Platform Software Architect, NXP Semiconductor Ltd
I am a Computer Science Engineering graduate with almost 18 years of continuous experience in embedded systems, Linux BSP, Unix, operating system internals, device drivers, boot loaders, flash, DDR, Ethernet, SATA, USB, wireless, networking, and open source software. Very good…

Prabhakar Kushwaha

Platform Software Architect, NXP Semiconductor Ltd
I am a computer science and engineering graduate with ~13 years of continuous experience in Linux/RTOS-based embedded software/firmware on multi-core technologies, with very good exposure to Linux, FreeRTOS, U-Boot, device drivers, boot loaders, flash technologies, etc. I have…



Monday September 23, 2019 3:00pm - 3:25pm
Pacific Room (Keynote)
 
Thursday, September 26
 

2:00pm

SAN19-422 Advanced testing in python
Testing a large Python application, like LAVA, can sometimes be tricky.

The first part of the talk will focus on classical Python testing features such as pytest and mocking.
The second part will concentrate on some specific tools that were developed to test LAVA itself (meta-lava, DummySYS, ...). These tools, and the ideas behind them, could also be used to test other systems.
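As a small illustration of the "classical" part, here is a pytest-style test that uses `unittest.mock` to isolate code from the network. The `device_health` function and its endpoint URL are invented for this sketch and are not part of LAVA's actual API:

```python
import json
from unittest import mock
from urllib.request import urlopen

def device_health(host):
    # Hypothetical code under test: query a LAVA-like REST health endpoint.
    with urlopen(f"http://{host}/api/health") as resp:
        return json.loads(resp.read())["status"]

def test_device_health_without_network():
    # Build a fake response object that behaves like a context manager.
    fake_resp = mock.MagicMock()
    fake_resp.read.return_value = b'{"status": "online"}'
    fake_resp.__enter__.return_value = fake_resp
    # Patch urlopen where it is *looked up* (this module), not where it is defined.
    with mock.patch(f"{__name__}.urlopen", return_value=fake_resp) as fake:
        assert device_health("lava.example.com") == "online"
        fake.assert_called_once_with("http://lava.example.com/api/health")
```

Under pytest the `test_` function is discovered automatically; the key detail is patching the name in the module under test so no real HTTP request is made.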

Speakers

Remi Duraffort

Senior Software Engineer, Linaro
I'm a senior software engineer working for Linaro. I've been contributing to OSS since 2007, when I started working on VLC media player at university. I worked for 5 years at STMicroelectronics, where I ported the V8 JavaScript engine to sh4 processors. I also contributed to many…



Thursday September 26, 2019 2:00pm - 2:25pm
Sunset IV (Session 2)