🤖 Day 6: Explore and share insights on AI testing tools

We’ve now reached Day 6 of our 30 Days of AI in Testing challenge! Yesterday, we explored real-world examples of AI in action. Today, let’s zero in on AI-assisted testing tools that cater to a specific need within your testing processes.

1. Select a Testing Need: Choose one application of AI in testing that meets a testing need you’re interested in (e.g. test case generation or test data management).

Tip: Check out the responses to the Day 3 challenge for ideas on AI uses, or focus on the AI application you discovered yesterday.

2. Research and Analyse AI Testing Tools: Next, research three or more testing tools that use AI to address your identified testing need. Create a list of the tools, make pertinent notes, and compare them on the requirements and features that matter to you.

Tip: @shwetaneelsharma’s talk on her approach to comparing tools may help you with your analysis.

3. Share Your Findings: Finally, share the information about the tools you’ve discovered by posting a reply to this topic. Consider sharing:

  • Brief overview of each tool
  • Key capabilities
  • Your perspective on their potential impact on efficiency or testing processes
  • Which tool interests you most and why

Why Take Part

  • Enhance Your Toolkit: By exploring AI-assisted tools, you’re identifying potential resources to help make your testing smarter and more efficient.
  • Community Wisdom: Sharing and discussing these tools with the community allows us to learn from each other’s research and experiences, broadening our collective understanding of AI in testing.


8 Likes

Hello again :slight_smile:

Today’s task needs a little more work :sweat_smile: , but let’s do it :mechanical_arm:.

I analyzed three tools, opening and testing each one myself:

  1. Testim
    A low-code tool that supports web and mobile UI testing. You can record your tests directly in the browser, and there is also visual testing.
    I didn’t like the tool: I tried to record a test against a demo site, it failed, and it didn’t explain why - just a generic error. The interface is not user-friendly. The AI integration here shows up in self-healing and visual testing, so I could not verify the advantages from one test only.

  2. Mabl - A low-code tool that supports UI, API, and performance testing. You can record your UI tests using a desktop app, the API testing part feels like Postman, and the demo tutorial is beautiful. I am not a fan of low-code tools, but this one feels very complete - a way for a company to scale test automation with people who are not used to coding. Mabl’s AI integration appears in self-healing and coverage suggestions, so we would need to use it more to see the advantages.

  3. Postman - A tool for testing APIs and API performance. It is very easy to set up a request, and tests can be entered on the Tests tab. It is not that easy to understand how to run a flow of tests if you need to chain more than one request, but testing each request on its own is very easy.
    This was the tool where I could test the AI integration most easily: one button and the basic tests are created; one description of a test and it is created. For example:
    Added tests to check that all items have a non-null from array and that this array has at least one value with an object whose email is not null.
    Test:
    pm.test("All items have an object with email not null in the from array", function () {
        const responseData = pm.response.json();
        // For every item, check that each entry in its "from" array has a
        // non-empty string "email" field.
        responseData.items.forEach(function (item) {
            item.from.forEach(function (fromItem) {
                pm.expect(fromItem.email).to.be.a('string').and.to.not.be.empty;
            });
        });
    });

And as for which test tool I would use:

  1. For UI testing I would choose Mabl for sure, especially if I have more people on the QA team without coding skills.
  2. But if I only want to test APIs, Postman would be my first choice for sure. PostBot is pure magic, and if we want to put everything into code after setting up all the tests, we can use Newman and get the devs’ help to maintain the tests :sweat_smile: (a minimal sketch of the Newman step follows below).
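
For anyone curious about that last step, here is a minimal sketch of running an exported collection with Newman’s Node API - the collection file name is just a placeholder for your own export.

    // Run a Postman collection from Node with Newman.
    const newman = require("newman");

    newman.run(
      {
        collection: require("./collection.json"), // exported from Postman
        reporters: "cli",                          // print results to the console
      },
      function (err, summary) {
        if (err) throw err;
        console.log("Assertions:", summary.run.stats.assertions);
      }
    );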

That’s it for today. Looking forward to seeing the other answers.

20 Likes

Some benefits of using AI that I could see being helpful for testing are test authoring (i.e. being able to write tests in plain English or by recording tests from the screen) and self-healing (i.e. automatically adapting test scripts to UI changes).
Relicx takes this a step further by adding a copilot feature, which uses an AI assistant to write tests from suggestions written in plain English.
Virtuoso’s live authoring feature is also helpful for easier debugging when writing test cases, as it shows the interactions performed by the tests in real time.
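
To make the self-healing idea concrete, here is a minimal conceptual sketch - not any vendor’s actual implementation. Each element keeps a ranked list of locators captured at authoring time, and the runner falls back down the list when the primary locator breaks (the selectors below are invented, and page.$ is a Playwright/Puppeteer-style query).

    // Conceptual self-healing locator, illustrative only.
    async function findWithHealing(page, locators) {
      for (const selector of locators) {
        const element = await page.$(selector); // returns null if not found
        if (element) {
          return { element, healed: selector !== locators[0] };
        }
      }
      throw new Error(`No locator matched: ${locators.join(", ")}`);
    }

    // Usage: primary id first, then fallbacks by attribute and text.
    // const { element, healed } = await findWithHealing(page, [
    //   "#submit-button",
    //   "[data-testid='submit']",
    //   "button:has-text('Submit')",
    // ]);
    // if (healed) console.log("Locator healed; consider updating the test.");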

8 Likes

Morning fellow testers. I’m going to cop out a bit on today’s challenge, as my post for yesterday’s challenge was very similar to what’s required today, so I’ll post the link to that here: 🤖 Day 5: Identify a case study on AI in testing and share your findings - #3 by adrianjr.

In summary, though, the testing need I would benefit most from is self-healing tests, and in my experience so far with Ranorex and Katalon Studio, neither was ready to meet my needs there.

9 Likes

Same thoughts. I get the sense that this topic was covered in previous challenges already.

3 Likes

Maybe this is a good opportunity for me to talk a little bit about why I feel so ambivalent about testing tools.

In my world, the tester is a fully integrated member of the development team and shares both testing and other responsibilities with the rest of the dev team. This means that while testing specialist knowledge is provided by the tester, the work of writing and maintaining tests is shared by the team. To make this happen, test tools need to fit seamlessly into the delivery pipeline, and any test code should be stored with the application code so it can go through the same code review process as the rest of the code base. This is hard and sometimes impossible with external tools.
And that, in a nutshell, is why I’d rather see testers learn to write code than learn tools. Sometimes those tools can help bridge that gap (like Postman), but if you must learn a tool, learn one that helps you write (better) code.

Now that that’s out of my system, I think I’ll continue to focus on learning more about how AI can help with learning. One aspect I haven’t seen mentioned yet that I am very interested in is whether there’s any AI integration for delivery pipelines - if anybody has pointers, I’d be grateful to receive them!

And an update on yesterday’s mission of debugging some seriously garbled code from the Codecademy study: I used my personal ChatGPT account to finish the debugging, and the GPT made it much faster and easier than it would have been on my own. As a bonus, the PythonGPT I used didn’t just spit out corrected code; it offered a lot of reasoning and explanation about what could be wrong and how it arrived at its conclusion.
So that’s something I’m very likely to continue using.

20 Likes

One place where I see AI (or even non-AI automation, really) could help me is smarter test selection for automation runs. In my current project, we run the automation suite for each PR. A full regression run takes about 30-35 minutes, which is not too bad but overall wasteful - many of these tests do not bring any new information when you consider the context of the change made. So a tool that could look at a PR and decide which tests should be run (and whether any should run at all!) would help. (A rough sketch of the core idea is at the end of this post.)

I didn’t run an exhaustive Internet search - just a single query in Google. What I found:

  • Launchable has “Predictive Test Selection” (marketing page, documentation).

    You need to register your project with Launchable first. In the CI pipeline you can use their CLI tool to request a list of tests to run: you submit a list of all tests, the ID of the build, and which test runner you are using, and in response you get back a file with the selected tests, which you can pass to your test runner.

    (I checked the documentation for pytest, because that’s what I use. The command they give in the documentation is likely to fail if your tests are parametrized, especially if your parameters contain spaces. A common mistake.)

    They have integrations for many test runners across the most popular languages - Java, Cypress, Jest, .NET, pytest, RSpec, even something for Perl.

    To use predictive test selection you need at least the “Moon” plan, the second of four. They do not disclose its price on their website. The “Earth” plan, the first, is $250 per month per “test suite” (probably approximately a project, though I assume a single project might have multiple “test suites” in some cases), so “Moon” is probably more expensive than that. They offer a 4-week trial, which I did not take up.

  • Appsurify offers “TestBrain” and positions itself as an “AI Risk Based Testing Tool”. So while smarter test selection is one module for Launchable, it is the core of Appsurify’s offering.

    The documentation is shorter and lacks many details. The documentation I can access from the main page links to docs on “Gitbook”, which requires me to create an account and then says I’m not authorized. Anyway, they claim I will need to add a script that pushes results to them, and the model will be fully trained once they have results from 50 runs (which should take 2-3 weeks - sounds roughly right for my current team, but I’ve been on teams where 50 runs would be achieved in a day or two). At that point I should change my integration to run the selected tests instead of all of them.

    They claim to support multiple code repositories, CI/CD systems, and test frameworks. The list of test frameworks roughly matches Launchable’s; I think Appsurify has more items.

    They have two pricing plans, but no price is disclosed for either. The “Professional” plan (presumably the more expensive one) can run on-premise, which I appreciate.

    Unfortunately, a link to documentation that is not publicly available and a “Blog” link that leads to an error page do not give me confidence that this company still exists. If it does, I guess one large client could basically support the entire business. But if I were a small client, I would worry whether they will still be around in a few years and whether I should commit to them.

  • That’s it. Google did not give me more tools.

    I found a blog post from Facebook discussing “predictive test selection”. Obviously, their tool is not publicly available. I only skimmed the post, but it seems to give a high-level overview of the system without going into details.

    I also found a reference to Microsoft “Evo”, supposedly a smart test selection tool developed by Microsoft. I was not able to find any further mentions of it, so even if it exists, it does not seem to be available to anyone else.
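
As flagged at the start of this post, here is a rough sketch of the core idea - a naive, hand-maintained mapping from changed files to tests. The real products learn this mapping from historical test results; the file names and the map file here are invented purely for illustration.

    // Naive test selection: map changed files to the tests that exercise them.
    // "test-map.json" is a hypothetical, hand-maintained mapping, e.g.
    // { "src/cart.js": ["tests/cart.test.js"] }.
    const testMap = require("./test-map.json");

    function selectTests(changedFiles) {
      const selected = new Set();
      for (const file of changedFiles) {
        for (const test of testMap[file] ?? []) {
          selected.add(test);
        }
      }
      // Be safe: if any changed file has no known mapping, run everything.
      const unmapped = changedFiles.filter((f) => !(f in testMap));
      return unmapped.length > 0 ? "run-everything" : [...selected];
    }

    // e.g. selectTests(["src/cart.js"]) -> ["tests/cart.test.js"]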

14 Likes

Thanks for sharing - they sound like great tools. Although I haven’t used them yet, I hope to have the opportunity to try them!

4 Likes

Thanks to everyone who has taken part so far in today’s task and, indeed, to those taking part daily! :clap: I appreciate you all sharing your thoughts and experiences; it’s what’s making this challenge so amazing so far. I totally get why folks feel there’s a similarity between this task and previous ones. They are linked, as we’re aiming to build our collective knowledge incrementally. And it makes sense that if you’re further along in your AI journey, you may already have done some of these tasks. But each day will be different, and you don’t have to complete every one. That’s another great thing about the challenge: you can dip in and join where it’s valuable to you or where you can bring value to others.

The difference with today’s task is that we’re focussing on digging a bit deeper into specific tools themselves. The ask is to pick one particular testing need you have, then research and compare multiple AI-assisted tools that could help with that specific need. It’s about reviewing and listing the different features and capabilities, and considering the potential impact each tool could have in your context: getting into the nitty-gritty details to understand which one may be the best fit and which tool requirements are particularly important to you and your team.

It’s been a lot of content so far, but this iterative process is helping us layer on knowledge day by day. Even if concepts seem adjacent, each task lets us view them through a new lens.

I hope that shines a bit of light on how Day 6 is its own thing. I’m looking forward to learning what tools folks find, how people assess them, and what matters in different contexts.

11 Likes

I really appreciate everyone taking the time to describe the different options available. It is very valuable to read the comments and see what has emerged.
It gives me a better chance to look at the tools and see what we can do in the company to take the next step with AI, and how we can use it in the testing area.

So a big big thank you to everyone who writes summaries and comments.
:slight_smile:

3 Likes

I decided to try coTestPilot to test our website; I’ve been meaning to do it all year. So I added the extension… and I cannot figure out how to use it at all. The user guide is pretty scant and doesn’t say how to actually make it do something. Does anyone else here know?

5 Likes

Hello, @testingchef and Fellow Participants,

Today’s task and the activity that I did for it left me with some cool possibilities.

Here are the three aspects for which I wanted to use AI to assist me:

  1. Test Ideas
  2. Learning
  3. Scripting (Coding)

Good News: I found tools for all these activities. They are also FREE to use as a single user. :orange_heart:

Here are quick details on the tools I found for these purposes, along with a short overview of each:

1. TestCraft (google.com)

  • Testing Assistant
  • Available for FREE
  • Generate Test Ideas
  • Generate Accessibility Checks & Evaluate Status on Accessibility Standards / Levels
  • Generate Test Script Code (see the sketch after this list)
    • Languages
      • JS
      • TS
    • Supported Test Tools
      • Cypress
      • Playwright
    • Design Pattern
      • POM
      • Without POM

2. Unlimited Summary Generator for YouTube™ (google.com)

  • Power Learning Tool
  • Available for FREE
  • Saves time
  • Fast Speed Summary
  • Easy to use

3. Codeium - Free AI Code Completion & Chat

  • Scripting Assistant
  • Available for FREE for single-user
  • Code Autocompletion Tool
  • Live Code Writer
  • Multiple Trained Codebase Model
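
To give a feel for the kind of Playwright + POM code a generator like TestCraft aims to produce, here is a small hand-written sketch; the page object, selectors, and URL are invented for illustration and are not actual TestCraft output.

    // Hypothetical login page object in the POM style.
    const { test, expect } = require("@playwright/test");

    class LoginPage {
      constructor(page) {
        this.page = page;
        this.username = page.locator("#username");
        this.password = page.locator("#password");
        this.submit = page.locator("button[type='submit']");
      }

      async login(user, pass) {
        await this.username.fill(user);
        await this.password.fill(pass);
        await this.submit.click();
      }
    }

    test("valid user can log in", async ({ page }) => {
      await page.goto("https://example.com/login");
      const loginPage = new LoginPage(page);
      await loginPage.login("demo", "secret");
      await expect(page).toHaveURL(/dashboard/);
    });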

I tried out all of these and it was a good experience. I also put together a summary mindmap.

Here is a quick video with short demos of these tools: Powerful AI Tools for Testers (FREE) | Day 6 of 30 Days of AI in Testing | Rahul’s Testing Titbits - YouTube

Do share your feedback. Happy AI Testing :fire::fire:

Thanks,
Rahul Parwal

17 Likes

Now I’m trying out Testing Taxi, and though it’s making more sense than coTestPilot, I’m kind of stuck. Anyone want to pair with me to try this out?

5 Likes

Day 6

Select a Testing Need: Choose one application of AI in testing that meets a testing need you’re interested in (e.g. test case generation or test data management).

  • Analysing test automation results to provide information on how effective your tests are: which do or don’t fail often, which take a long time to run, which cover areas that actually change. This would be deep analysis that would take a long time if a human were to do it.

Research and Analyse AI Testing Tools: Next, research three or more testing tools that use AI to address your identified testing need. Create a list of the tools, make pertinent notes, and compare them on the requirements and features that matter to you.

  • Report Portal - https://reportportal.io/
    • Can provide unified test reporting for different levels of tests across systems
    • Can be integrated with CI/CD pipelines
    • Auto-analysis to try and pinpoint whether a failure is a system bug, a problem with the automation, test data, environments, etc.
    • Can detect recurring or unique errors and classify them for you (a conceptual sketch of this idea follows at the end of this post).
    • Logs, screenshots, video recordings and network traffic in test reports.
    • Has a demo project that you can go and explore.
  • Applitools Test Insights - Analyze Test Results | Applitools
    • Single dashboard as part of Applitools Eyes.
    • Used to manage your tests within that product.
    • Wouldn’t be able to integrate results of all tests for my system (unit, integration, API) and see trends there.
    • Does integrate with CI/CD via an API.
    • It’s not really clear how the AI assists, though, unless I missed it.
  • Webomates - Smart Insights, Swift Releases: Harnessing AI in Test Automation Reporting – Webomates
    • The interestingly named Webomates talks about AiHealing, which uses test execution data and build release notes to detect changes.
    • It uses your test automation reports to do one of the following:
      • Heals automated tests - locators, timeout changes.
      • Feature modified - picks out the tests that need to change and regenerates them.
      • New tests needed - these can be reviewed and added.
    • I found this tool slightly confusing - the wording around test, case, and script.
    • Does integrate with CI/CD via an API.

Report Portal went into way more detail and had a test instance to poke around in, so I am favouring that tool initially.
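
As a rough illustration of the “classify recurring errors” idea above, here is a toy sketch: normalise failure messages into signatures and group identical ones. This is purely conceptual - Report Portal’s actual analyser is ML-based and works over logs.

    // Toy failure clustering: normalise messages, then group identical signatures.
    function normalize(message) {
      return message
        .replace(/0x[0-9a-f]+/gi, "<hex>") // collapse addresses/hashes first
        .replace(/\d+/g, "<num>")          // then collapse remaining numbers
        .trim();
    }

    function clusterFailures(failures) {
      const clusters = new Map();
      for (const { test, message } of failures) {
        const key = normalize(message);
        if (!clusters.has(key)) clusters.set(key, []);
        clusters.get(key).push(test);
      }
      return clusters; // key = error signature, value = affected tests
    }

    // e.g. "timeout after 3000ms" and "timeout after 5000ms" share one cluster.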

9 Likes

Please do explore Report Portal and share your thoughts - I thought it sounded really useful when I heard it mentioned in the AMA earlier in the week!

1 Like

In the blog post “AI Driven Test Development Case Studies” I saw the following statement (a testing question):

“If I’ve made a change in this piece of code, what’s the minimum number of tests I should be able to run in order to figure out whether or not this change is good or bad?”

This is my main topic of interest for AI-assisted testing: determining which tests are the most important and/or needed when code modifications to the application are introduced. Even for new code (i.e. a new feature), I can still ask this question to figure out the impact on the rest of the application.

Looking for tools that can provide this feature, I stumbled upon two of them:

While they appeared in my search results, I cannot tell whether they support my topic of interest. They do support other features, like creating automated tests with record and playback, enhanced defect detection, AI-based test execution, and self-healing of tests.

4 Likes

Hello,

Not a tester, but a Transformation Program Director and a researcher with a keen interest in digital QA.

Recently, I came across Webo.AI, a cloud-based AI-powered testing platform that promises a wide range of features. These include AI-based automated test case generation, test suite execution, test maintenance with AiHealing, a smart centralised dashboard, and other functionality.

They offer a 2-month free trial on their website: https://webo.ai/

If anyone has tried this tool, please share your experiences and findings.

3 Likes

I am looking into this now…

1 Like

Hello Everyone,

In today’s fast-paced digital landscape, ensuring the quality and reliability of mobile applications is paramount for success. To achieve this, teams rely on advanced tools and technologies, including AI-powered mobile app testing platforms. In this comparison, we’ll delve into the capabilities of three prominent AI mobile app testing tools: Katalon Studio, Testim, and TestCraft. Each tool offers unique features and benefits to streamline the testing process and enhance the overall quality of mobile applications. Let’s explore how these tools stack up in terms of test generation aspects, efficiency, adaptability, scalability, integration, accuracy, and cost. By understanding the strengths and limitations of each tool, teams can make informed decisions to meet their specific testing needs and deliver high-quality mobile experiences to users. :iphone::sparkles:
Here’s the comparison table, with a description for each rating:

| Aspect | Katalon Studio :hammer_and_wrench: | Testim :rocket: | TestCraft :building_construction: |
| --- | --- | --- | --- |
| Test Coverage | Moderate | High | High |
| Efficiency | :star::star: Moderate efficiency. Manual scripting takes some time, but AI-powered features help. | :star::star::star::star::star: Exceptional efficiency, with AI-driven self-healing tests and automatic maintenance significantly reducing manual intervention. | :star::star::star::star::star: Highly efficient autonomous test creation, streamlining the testing process and saving time. |
| Adaptability | :star::star::star: Decent adaptability, with manual updates for app changes. | :star::star::star::star: Excellent adaptability, adjusting testing strategies to app changes and user interactions through AI. | :star::star::star::star: Flexible adaptability, adjusting test scenarios and strategies to app changes with AI-driven approaches. |
| Scalability | :star::star::star::star: Scales from small to enterprise-level testing with distributed execution. | :star::star::star::star: Leverages cloud-based infrastructure for scalability across various app sizes. | :star::star::star::star::star: Easily scales to growing testing demands with cloud-based environments and distributed execution. |
| Complexity | Moderate | Low | Low |
| Integration | :star::star::star::star: Integrates well with popular CI/CD tools and version control systems. | :star::star::star::star: Seamlessly integrates with CI/CD pipelines, bug tracking systems, and collaboration platforms. | :star::star::star::star: Integrates with CI/CD pipelines, issue tracking tools, and test management systems. |
| Accuracy | :star::star::star::star: Accurate test generation through AI-powered features. | :star::star::star::star::star: Highly accurate test results with AI-driven self-healing tests and automatic maintenance. | :star::star::star::star::star: Accurate test creation through autonomous AI-driven approaches. |
| Cost | :star::star::star::star: Free version available, with paid options offering additional features. | :star::star::star::star: Flexible pricing plans tailored to individual teams or organisations. | :star::star::star::star: Pricing options suitable for businesses of all sizes, with customisable plans. |

Personal Opinion:

Among the three tools, I find Testim to be the most favourable due to its exceptional efficiency, adaptability, and accuracy. Its strong emphasis on AI-driven self-healing tests and automatic maintenance significantly reduces manual intervention, making it an excellent choice for teams prioritising efficiency and quality in their mobile app testing processes.

  • Katalon Studio - Offers moderate efficiency and adaptability with decent scalability and integration capabilities. Provides accurate test results and a range of features, with a reasonable cost, suitable for teams with budget constraints.
  • Testim - Provides exceptional efficiency, adaptability, and accuracy with seamless integration and flexible pricing plans. Offers highly accurate test results, making it a compelling choice for teams prioritising automation and quality.
  • TestCraft - Offers efficient and accurate test generation with good adaptability and integration capabilities. Provides reasonable pricing options, making it a competitive choice for teams seeking a balance between features and affordability.

This comparison provides detailed insights into how each tool handles test generation aspects, aiding in decision-making for selecting the most suitable tool for mobile app testing needs.

Thank you

7 Likes

Thank you for the effort with the insightful comparison.

1 Like