Scenario-based Automatic Testing of a Machine Learning Solution with the Human in the Loop
Abstract
More and more applications rely on machine learning, particularly interactive online learning, to make decisions tailored to human needs and situations. As with any program, the behavior of learning programs must be verified and validated. Testing is one way to achieve this.
In this paper, we analyze the challenges of testing machine learning solutions, focusing on programs that learn online and in interaction with human users. Given the issues arising from the presence of the human in the loop, the non-determinism, and the dynamics of online learning, we propose a scenario-based approach. We apply it to the testing of OCE, a learning program that builds applications in ambient environments with the user in the loop. In addition, we present two prototype tools that implement test scenarios and automate their execution for the assessment of OCE.