05/04/2020

Artificial Intelligence: barrier breaker and irrefutable testing ally

By Guillermo Skrilec, CEO at QAlified, the Testing and Quality Assurance business unit of GeneXus Consulting.

Skrilec delivered a talk aimed at understanding what is happening with Artificial Intelligence in the Testing world, and made it clear that, even though testers have concerns about the autonomy AI provides, it must be perceived "as an ally, not as something that will replace us".

In his presentation Artificial Intelligence and Testing: What is happening? (in Spanish), Skrilec discussed the current state of the field, offered examples, and shared his view on the use of neural networks in different testing processes. He talked about automation and autonomy: the first term is friendlier to testers than the second, which promises to radically revolutionize the testing universe.

He started by explaining the complexity and breadth of the term «Artificial Intelligence». Until now, everyone has known how a system is traditionally developed, but «when we talk about AI, programmers are not going to work by defining rules, but by identifying and processing data to be used when training a supervised learning algorithm, which will make inferences in order to reach a result». In this way, he made it clear that artificial intelligence has arrived, is breaking barriers, and is achieving results in previously unimaginable places.
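The contrast between defining rules and training on data can be sketched with a toy spam-filter problem (the example and all names are illustrative, not from the talk; the "training" step is a trivial word count standing in for a real supervised learning algorithm):

```python
# Traditional development: the programmer defines the rule explicitly.
def is_spam_by_rule(text):
    return "free money" in text.lower()

# AI-style development: the rule is inferred from labeled training data.
def train_spam_model(examples):
    spam_words = set()
    for text, label in examples:
        if label == "spam":
            spam_words.update(text.lower().split())
    for text, label in examples:
        if label == "ham":
            spam_words.difference_update(text.lower().split())
    # The returned "model" flags any text sharing a word with spam examples.
    return lambda text: bool(spam_words & set(text.lower().split()))

examples = [("win free money now", "spam"), ("meeting at noon", "ham")]
is_spam = train_spam_model(examples)
```

In the first case the behavior is fixed by the programmer; in the second, changing the training data changes the behavior, which is exactly why testing such systems requires looking at the data and not only at the code.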

He highlighted two topics: on the one hand, testing applications that use Artificial Intelligence, and the challenges that entails; on the other, applying AI to carry out testing activities, and the new opportunities it offers.

During the first part of the presentation, he outlined three big challenges when testing applications with AI.

The first one consists of understanding why a neural network does what it does. For example, in the field of medicine there have been very important breakthroughs when applying AI, but for a doctor to rely on these models, they need to be able to understand the reasoning behind them, both to make sure the models are precise and to be able to justify important decisions such as changing a patient's medication. In this context, where AI reaches conclusions that even the most experienced specialists can hardly diagnose, validating these solutions is a very big challenge.

The second challenge lies in the quality of the data, which, to a great extent, determines the result of an application based on AI. We have moved from evaluating an algorithm's result to having to evaluate the quality of the data used to train and validate a solution of this type. Whenever we are dealing with data, there is some degree of bias. To detect it, it is important to evaluate whether there is any bias in reality itself (the data source) and any bias in the chosen samples.
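One simple check for sampling bias is to compare the class distribution in the chosen sample against the distribution in the reality it is meant to represent. A minimal sketch (the labels, numbers, and 10-point threshold are illustrative assumptions):

```python
from collections import Counter

def class_distribution(labels):
    """Return the share of each class in a list of labels."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# How classes occur in reality vs. in the sample chosen for training.
population = ["approved"] * 70 + ["rejected"] * 30
sample = ["approved"] * 55 + ["rejected"] * 45

pop_dist = class_distribution(population)
sample_dist = class_distribution(sample)

# Flag classes whose share in the sample drifts from reality
# by more than 10 percentage points.
for label in pop_dist:
    drift = abs(sample_dist.get(label, 0.0) - pop_dist[label])
    if drift > 0.10:
        print(f"Possible sampling bias for '{label}': drift {drift:.0%}")
```

A check like this only catches bias in the chosen samples; bias present in the data source itself requires examining how the data was produced in the first place.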

Thirdly, the challenge is in the bugs found in applications based on AI. Bugs will still exist; what matters is deciding which bugs make it impossible to operate, and, for that purpose, there is a balance between thoroughness (recall) and precision. Thoroughness is related to false negatives (real bugs that go undetected), while precision is related to false positives (false alarms).
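As a quick illustration of this balance, assuming a defect-detection setting (the counts below are made up): precision asks how many of the flagged bugs were real, so it drops with false positives; recall asks how many of the real bugs were found, so it drops with false negatives.

```python
def precision_recall(tp, fp, fn):
    """Precision: of the bugs flagged, how many were real?
    Recall (thoroughness): of the real bugs, how many were flagged?"""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# 8 real bugs flagged, 2 false alarms, 4 real bugs missed.
p, r = precision_recall(tp=8, fp=2, fn=4)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.67
```

Raising one measure typically lowers the other, which is why deciding which bugs really matter is part of the testing strategy.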

Once the most important bugs have been identified, we need to understand that, when talking about AI, it is not possible to go in and fix a specific case, because the rules are not programmed in. In these instances, it is necessary to go back, review the data being used, adjust the model, and test again, which makes this a process based on trial and error.
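That trial-and-error loop can be sketched in a few lines. Everything here is a stand-in: the "model" just predicts the majority label of its training data, and "reviewing the data" means dropping examples flagged as mislabeled; a real pipeline would retrain and re-evaluate an actual model.

```python
from collections import Counter

def train(data):
    """Stand-in for training: the 'model' predicts the majority label."""
    majority = Counter(d["label"] for d in data).most_common(1)[0][0]
    return lambda example: majority

def accuracy(model, test_set):
    hits = sum(model(ex) == ex["label"] for ex in test_set)
    return hits / len(test_set)

def review_data(data):
    """Stand-in for the 'go back and review the data' step, e.g.
    dropping examples flagged as mislabeled."""
    return [d for d in data if not d.get("suspect")]

# Two mislabeled examples dominate the training set at first.
data = [
    {"label": "bug", "suspect": True},
    {"label": "bug", "suspect": True},
    {"label": "ok"},
]
test_set = [{"label": "ok"}, {"label": "ok"}]

model = train(data)
while accuracy(model, test_set) < 1.0:
    data = review_data(data)  # go back and review the data
    model = train(data)       # adjust the model, then test again
```

The point of the sketch is the shape of the loop: the fix is never applied to one failing case directly, but to the data the model learns from.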

These challenges in AI create a major conflict with the traditional testing model: while the tester's role was previously focused on validating results, it is now focused on delivering transparency, looking at inputs and not only at results, and working in a context where the difference between success and failure lies in the ability to identify the bugs that really matter.

The second part of the presentation focused on using Artificial Intelligence as a tool when testing applications. For this purpose, Guillermo Skrilec referred to two different cases. The first had to do with automated tests. Historically, one of the biggest test automation challenges has been building scripts that can detect elements in the user interface in order to perform actions on them. Nowadays, one way to use AI is image recognition; it therefore becomes possible to identify buttons, links, and other user interface elements in the application. He explained that this approach has two advantages: first, it simplifies the maintenance of test scripts, as long as the neural network has been trained correctly; second, it opens the opportunity to reuse tests, given that their level of abstraction is higher.
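A minimal sketch of the idea, with the neural network stubbed by a hand-written shape rule so the example runs; `classify_region`, `find_element`, and the region fields are assumptions for illustration, not part of any real tool:

```python
def classify_region(region):
    """Stand-in for a trained neural network that labels a region of a
    screenshot. A real model would look at the region's pixels; here a
    simple shape rule keeps the sketch runnable."""
    if region["width"] > region["height"] and region["has_text"]:
        return "button"
    return "other"

def find_element(regions, wanted="button"):
    """Return the first screenshot region recognized as `wanted`,
    instead of locating it through a brittle selector or coordinate."""
    for region in regions:
        if classify_region(region) == wanted:
            return region
    return None

regions = [
    {"x": 0, "y": 0, "width": 40, "height": 40, "has_text": False},    # icon
    {"x": 10, "y": 80, "width": 120, "height": 32, "has_text": True},  # button
]
button = find_element(regions)
```

Because the test asks for "a button" by appearance rather than for a specific selector, the same script can survive interface refactors and even be reused across applications, which is the higher level of abstraction the talk refers to.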

The second case had to do with autonomous testing, which, as defined by the speaker, «is something that allows me to execute tests without the need for direction, and which can explore the application independently». Currently, there are initiatives that use bots to execute tests autonomously in different applications by combining different AI techniques.

When thinking about a testing strategy, it is common, on the one hand, to think about test levels (unit, integration, etc.) and, on the other hand, to think about test types. In most cases, a matrix is built, with a plan to apply functional testing at one level, performance testing at another, and so on. «I think we will now have an additional dimension, where we will have to consider the level of autonomy for each one of those test types and levels», said Skrilec. He went on to say that, in order to have autonomy, it is first necessary to implement automation. «We are very familiar with the term «automation» in relation to productivity and efficiency. But when someone says autonomy, we begin to worry; we start asking what the role of the tester will be. We need to work with AI, it needs to be an ally, we need to play for the same team and not see it as something that will replace us», he concluded.
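One way to picture that extra dimension is to extend the usual level-by-type matrix with a planned degree of autonomy per cell. The levels, types, and autonomy values below are illustrative assumptions, not a standard scale or anything prescribed in the talk:

```python
# Test-strategy matrix: (level, type) -> planned degree of autonomy.
strategy = {
    ("unit",        "functional"):  "automated",
    ("integration", "functional"):  "automated",
    ("system",      "performance"): "automated",
    ("system",      "exploratory"): "autonomous",  # bot-driven exploration
}

for (level, test_type), autonomy in sorted(strategy.items()):
    print(f"{level:<12} {test_type:<12} -> {autonomy}")
```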

Access more information and take a look at the recording of the presentation here.