2024 DevOps Trends For QA
https://www.testim.io/blog/2024-devops-trends-for-qa/ (published Tue, 06 Feb 2024)

I’ve been reading many of the articles about trends and predictions within IT for the new year. I have been particularly interested in those talking about DevOps. I noticed that most of them are developer-focused. This is very typical in DevOps – to leave out testing. QA is still an integral part of the “Dev” in DevOps, as well as the overall software release cycle. I decided to make my own list of DevOps trends this year, but I want to include how Quality Assurance might be impacted by these trends. Here is my take on the top five I hope to see:

Shift-Left Testing Goes Mainstream

Up until now, there has been a lot (and I mean a LOT) of talk about shifting testing left. Some have attempted it, and fewer are doing it with great results. However, I believe in 2024 we will see testing moving even earlier in the development lifecycle, with unit, integration, and performance testing integrated into CI/CD pipelines. There will be more emphasis on it, and advancements in AI tooling will make it easier. We all dream of catching bugs before they ever take hold. That’s the promise of shift-left testing. However, it requires commitment to make that happen: a continuous “everything” mindset, which is harder than many organizations admit.

QA Impact: Increased automation and earlier bug detection, reducing costs and improving software quality. AI will bring faster, more efficient workflows, and catching defects early will lead to smoother releases.

Hyperautomation and AI-powered Testing Enhances QA Capabilities

Last year we witnessed the hype machine at work over Generative AI, Large Language Models, and OpenAI’s ChatGPT. In 2024 we will see the real beginning stages of what this technology can do. Here are some of the things we are already seeing in the market:

  • Intelligent test generation: Automatically creating comprehensive test cases based on code analysis and user behavior, reducing manual effort and increasing test coverage.
  • Real-time defect detection: Analyzing test results and logs in real-time to identify potential issues before they impact users, enhancing responsiveness and minimizing downtime.
  • Predictive maintenance: Proactively suggesting improvements to test suites and workflows, preventing regressions and ensuring continuous quality improvement.

Tricentis Testim and TTM for Jira are great examples of what’s currently in the market.

There are three challenges I see to implementing this:

  • Initial investment: Implementing hyperautomation and AI-powered testing tools requires an initial investment in technology and training. Implementing internal LLMs is expensive and resource-hungry, especially the data storage components. Many (or most) companies are not ready for this.
  • Explainability and trust: Ensuring transparency and understanding of AI-driven decisions is crucial for building trust and acceptance among QA teams and stakeholders. Many companies are pulling back from AI because of various concerns over privacy, security, and accuracy. Early adopters will feel the most pain.
  • Skill gap: Bridging the skills gap in areas like data science and automation might require training programs and talent acquisition strategies. If you thought skilling up QA beyond manual testing to automation was a challenge, this will be a bigger gap to fill at first. However, I believe those who commit to it will see the biggest gains.

Despite these challenges, AI-powered tools will be able to automate repetitive tasks like test data generation and analysis, freeing up QA professionals for more strategic work. Think of AI-powered automation as tireless assistants, freeing up your time for strategic thinking. While it still requires domain knowledge, context, and some tweaking of the output, this is a huge boost to productivity for QA teams. 

QA Impact: AI-enhanced hyperautomation will bring improved test coverage, faster feedback loops, and increased efficiency to QA teams. They will get a major productivity boost. With AI handling the mundane, QA will begin to focus on strategic test design and analyzing data to predict and prevent problems. It allows QA professionals to upskill into the areas of data science, advanced automation, and advanced analysis. Teams can handle larger projects and complete them faster. This could be the beginning of bringing testing to a whole new level.

No-Code/Low-Code Testing Tools

Easy-to-use tools will allow developers and non-technical personnel to participate in testing.

There is a stigma that testing and deep, technical coverage are reserved only for the tech-savvy who can write code. Over the last couple of years, no-code/low-code tools have made advancements that shatter this idea. I believe we will see additional features from these products that will empower developers, business analysts, and “non-technical” roles and allow them to contribute equally to the testing process. I think of it as democratizing quality assurance, where everyone has a voice.

While leveling the field is great, there are challenges to consider when implementing these kinds of solutions. Some argue these tools offer less flexibility and limit what you can do in complex testing situations where you cannot get into their internal workings. Some are concerned about security and transparency, since you can’t run security scans against many of the low-code products. Others feel they lead to vendor lock-in and a higher cost of switching later on.

The biggest misconception I see about these products is that they get rid of the need for testing expertise. They will not do everything, and companies can’t do away with the QA team at the push of a button. Understanding testing principles, methodologies, and best practices is still crucial for effective test design and execution. Some organizations have paid a heavy price assuming that a tool can replace the expertise and experience of QA professionals. This may be why there is some cultural resistance to adopting these tools. I think we will see this diminish as the tools become more mature, and I see 2024 as a year we can see some serious advancement on this.

QA Impact: Democratization of testing, but focus on maintaining test quality and expertise. Roles will shift towards ensuring test quality and expertise across the IT landscape as the testing pool expands to more people. Tools will make advancements that will cause less resistance to adoption for low-code solutions. 

Infrastructure as Code (IaC) for Testing Environments

It seems that JSON and YAML are not going anywhere anytime soon. We now see IaC everywhere for containerized applications, Docker, and Kubernetes. I believe we will start seeing more usage in QA to manage and provision testing environments quickly and efficiently. Imagine spinning up testing environments as effortlessly as flipping a switch. That’s the power of Infrastructure as Code (IaC): tools that automate the provisioning and management of testing environments, saving you precious time and resources, and conjuring up the perfect testing sandbox whenever you need it.
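
To make this concrete, here is a minimal sketch of IaC-style provisioning driven from a test suite. It assumes Docker and pytest are available and treats a throwaway Postgres container as the "environment"; a real setup would more likely use Terraform, Ansible, or a library like testcontainers, but the principle is the same:

    import subprocess
    import time
    import uuid

    import pytest

    @pytest.fixture(scope="session")
    def test_database():
        """Provision a disposable Postgres instance for the whole test run."""
        name = f"qa-db-{uuid.uuid4().hex[:8]}"
        subprocess.run(
            ["docker", "run", "-d", "--rm", "--name", name,
             "-e", "POSTGRES_PASSWORD=test", "-p", "55432:5432", "postgres:16"],
            check=True,
        )
        time.sleep(5)  # crude readiness wait; real code should poll the port
        yield {"host": "localhost", "port": 55432, "password": "test"}
        subprocess.run(["docker", "stop", name], check=True)  # tear down

Every run gets an identical, freshly provisioned environment, and teardown happens automatically.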

While IaC offers many advantages for managing testing environments, it also comes with some challenges and disadvantages. Here are a few:

  • Complexity and learning curve – IaC tools require specific syntax and understanding of infrastructure concepts, presenting a learning curve for testers and developers not familiar with it. This complexity may require dedicated specialists, adding overhead to the team. Debugging and troubleshooting IaC issues can be challenging, further contributing to delays and inefficiencies.
  • Limited Capability – While IaC can handle basic infrastructure provisioning, it might not directly address specific testing needs like data seeding, network configuration, or application deployment.  
  • Lack of Standards – The absence of industry-wide standards for IaC practices can lead to inconsistencies and inefficiencies across different teams and projects.

These challenges highlight the importance of careful planning, training, and tooling selection when using IaC for testing environments. I believe these are fairly easy to navigate around for most organizations. 

QA Impact: Reduced setup time and increased consistency in testing environments, freeing up QA resources for other tasks. IaC will allow for dynamic infrastructure, solving some of the environment issues traditionally faced in prior years. It will give QA more time to play in the testing playground, which means more experimenting to detect more defects. It also means more time for innovating over constantly wrangling with infrastructure.

Agile and DevOps Culture Shift

There will be a continued emphasis on collaboration and shared responsibility for Quality across DevOps teams. 2024 could be the year of a collaborative environment where Agile and DevOps principles intertwine. This isn’t just about developers and operations. QA plays a crucial role in making this happen. Agile and DevOps culture instills a sense of collective responsibility for quality, with everyone contributing to robust software delivery. There must be an unwavering commitment to quality in software if DevOps is going to be successful in any organization. 

One way we are seeing DevOps evolve is a renewed emphasis on the Developer Experience (DX). Platform Engineering emerged as a way to relieve some of the cognitive overload that developers are experiencing trying to do everything, everywhere, all at once. A positive DX goes hand-in-hand with improved QA. I believe QA products will also have to adapt so that the developer experience is considered. When developers have access to user-friendly tools, streamlined workflows, and clear documentation, they’re more likely to write high-quality code and actively participate in testing.

Here’s how QA tools can contribute:

  • Seamless integration with developer workflows: QA tools should seamlessly integrate with existing developer tools and IDEs to minimize context switching for developers and make testing as effortless as possible.
  • Intuitive and user-friendly interfaces: QA tools built with intuitive interfaces and clear instructions will reduce the learning curve and encourage developers to embrace testing.
  • Real-time feedback and actionable insights: QA tools that provide real-time feedback on code quality and information about potential defects will empower developers to fix problems early, preventing them from reaching production.

I personally believe there is a strong tendency in people to build silos, especially among previous generations, because this is “the way we have always done it”. That does not mean it was wrong, and some organizations have had resounding success with silos. Sometimes these are “soft” silos where there may be an emphasis towards one discipline or a lack of something – usually because they don’t have all the right players on all the teams. Sometimes it’s just that they don’t know what they don’t know. I think we will see more conversations about how to overcome the silo problem as DevOps starts to mature.

QA Impact: QA professionals will need to adapt to collaborative work environments and be more open to feedback and improvement. QA starts to become an embedded member of cross-functional teams, actively influencing design, development, and deployment decisions. QA tools and techniques will begin to evolve to keep pace with evolving practices, such as improving the developer experience to make tester-developer collaboration easier.

Summary

Obviously, there are a lot more areas of DevOps that will be impacted this year than I have listed. This might include implementing security and performance into the continuous development lifecycle, cloud-native testing, or solving the big data issues with testing. I felt these were the areas we might see the biggest changes in 2024.

These are just my predictions, and the actual impact on QA and testing may vary depending on the specific technologies and practices adopted by individual organizations. I don’t claim to be a prophet, but I have watched the IT landscape shift for over 30 years now. Take it for what it’s worth to you.


Have a great 2024, everyone!
Share your predictions & trends to keep an eye on 👁️



Interruption Testing
https://www.testim.io/blog/interruption-testing/ (published Mon, 08 Jan 2024)

Mobile testing is an essential component of every software testing cycle. Every application needs to function flawlessly across thousands of hardware and OS combinations. To make sure this is feasible, mobile testing needs to be planned and carried out with the highest level of care and precision.

One type of mobile app testing is interruption testing, which involves testing how well a mobile application responds to, manages, and recovers from unplanned disruptions, such as incoming calls and notifications.

The Importance of Interruption Testing for Mobile Apps

Mobile devices have become an integral part of our day-to-day activities, with 6.8 billion people using smartphones and more than half of global web traffic attributed to mobile usage. App users expect uninterrupted experiences. Interruptions, such as incoming calls or notifications, can often disrupt a user’s workflow and lead to frustration. Conducting interruption testing helps ensure that mobile apps can seamlessly handle these events while preserving user data, avoiding crashes, and swiftly recovering to a stable state. By proactively testing and addressing potential issues, developers can deliver an enhanced user experience, build trust, and gain a competitive edge in the crowded mobile app market.

It’s also important to note that interruption testing is not the same as recovery testing, because the application is not recovering from a failure, but simply from a (usually external) interruption.

Types of Interruptions

Some of the most common interruptions that can happen while using a mobile app are:

  • Incoming calls
  • Incoming messages
  • Low battery
  • Notifications from another mobile application
  • Network connection loss and restoration
  • Update reminders
  • Clock alarms
  • Device charging/device fully charged notification
  • Accessing an external link

Recovering from the Interruption

There are several ways in which the application can recover from the interruption:

  • Run in the background: The program takes a backseat as the interruption takes control. It regains control when the disruption is over. For example, if you receive a FaceTime or phone call while playing a mobile game, the game pauses while the user answers and resumes after the call ends.
  • Display alert: This happens when you are using a mobile application and messages appear from the device or other applications. Incoming messages should be displayed by the application in a non-intrusive way, such as an alert or notification banner, giving the user the choice to read, ignore, or reply to the message without having to exit the mobile application completely.
  • Call to action: Sometimes a mobile application interruption calls for the user to provide input or make a decision. Let’s consider an example where a user is alerted while using their phone to play a game that their battery is getting dangerously low. To save battery life, the application asks the user whether they would like to switch to a power-saving mode. The user can then decide to keep playing the game while preserving the device’s battery life by turning on power-saving mode.
  • No impact: An interruption might not significantly affect an application’s functionality or the user’s experience in some specific circumstances. For instance, there is no alert or call-to-action resolution when a device is charging while it is being used for an application. This guarantees that the application will continue to function without requiring the user to take any immediate action or provide any input.

Best Practices for Effective Interruption Testing

  • Replicate real-world usage: Develop test scenarios that accurately reflect the typical usage patterns of your target audience. Consider different mobile devices, operating systems, and relevant third-party apps to mimic real-world scenarios accurately.
  • Test on various network conditions: Mobile apps operate in diverse network environments, ranging from strong Wi-Fi connections to weak cellular coverage. Test your apps under different network conditions to ensure they can withstand interruptions and maintain usability.
  • Emulate interruptions: Use tools or frameworks that allow emulating incoming calls, notifications, or system events to trigger interruptions during testing (see the sketch after this list). This approach provides developers with more controlled and precise testing conditions.
  • Incorporate recovery mechanisms: Evaluate how well the app recovers from interruptions and resumes normal operations after the event has ended. A robust recovery process ensures a seamless user experience, with minimal data loss or disruptions.
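
As a concrete example of emulating interruptions, the Android emulator exposes console commands through adb that can fire calls, messages, and battery events mid-test. A minimal sketch (the phone number and battery level are arbitrary values):

    import subprocess

    def adb_emu(*args):
        """Send a console command to a running Android emulator via adb."""
        subprocess.run(["adb", "emu", *args], check=True)

    adb_emu("gsm", "call", "5551234567")        # simulate an incoming voice call
    adb_emu("gsm", "cancel", "5551234567")      # end the call
    adb_emu("sms", "send", "5551234567", "hi")  # simulate an incoming SMS
    adb_emu("power", "capacity", "5")           # drop the battery to 5%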

Final Thoughts

The reliability of mobile applications and a seamless and uninterrupted user experience are both dependent on interruption testing. Through the simulation of real-world scenarios, it guarantees that mobile applications can efficiently manage disruptions, avert data loss and crashes, and ultimately enhance user satisfaction. It should be a key part of any mobile testing plan.

Reflections and Resolutions: Top 5 Lessons Learned in Testing This Year
https://www.testim.io/blog/top-five-testing-lessons-learned-this-year/ (published Wed, 13 Dec 2023)

Here are 5️⃣ lessons I learned or relearned this year in testing:

  1. Test design techniques are a great way to find gaps
  2. AI in software testing is here to stay
  3. Selenium and Playwright are the most popular automation libraries
  4. Codeless and low-code tools are valuable
  5. The great divide between developers and testers

Test design techniques are great for finding gaps

Test design techniques are different from test types, test cases, and test scenarios. A test design technique is a standardized method to develop test cases and gain greater coverage of an application. There are many variants and combinations to achieve better coverage.

Some advantages of test design techniques include:

  • Effective for detecting defects
  • Independent of defining and executing a test case
  • Elaborates on the test strategy by aligning on what’s needed for test coverage

For testers, test design techniques are crucial to finding gaps in requirements or acceptance criteria. Three examples of test design techniques are Boundary Value Analysis, Equivalence Class Partitioning, and the Cause-Effect Table; a pytest sketch at the end of this section shows the first two in action.

  • Boundary Value Analysis – tests the boundary value itself, the value directly below the boundary value, and the value directly above the boundary value.

Let’s say a requirement stipulates 16 is the legal age to drive. A test ensures the application prevents a 15-year-old user from applying, but approves users who are 16 and 17. Why test for 17? The developer can mistakenly code “equal to 16 years old” (= 16) instead of “greater than or equal to 16 years old” (>= 16), which would mean only users who are exactly 16 would get approved.

  Value below boundary: 15
  Value at boundary: 16
  Value above boundary: 17
  • Equivalence Class Partitioning – divides the input data into partition classes, so each partition is covered at least once.

Imagine the acceptance criteria calls for the app to process 1 to 10 orders. Two test cases are developed to trigger orders less than 1 and orders 11 or higher. If a user inputs -7 or 34 orders, for example, an error is generated because both are invalid values. An order of 5, however, is within the range of 1 to 10, so the order processes without error.

  Partition 1 (invalid values): 0 or less
  Partition 2 (valid values): 1 – 10
  Partition 3 (invalid values): 11 or higher
  • Cause-Effect Table – also known as Decision Table Testing, which identifies many possible input and output combinations. The combinations written in a table can help validate scenarios and identify if there are gaps.

Let’s consider these theoretical requirements: “If a customer is rated Level 1, then the customer is available for 2 free upgrades. If a customer is rated Level 2 or Level 3, then the customer must see a salesperson because they are eligible for 1 free upgrade if they paid their service fees.”

Causes:
  • Customer is Level 1
  • Customer is Level 2
  • Customer is Level 3
  • Customer paid service fees

Effects:
  • Customer receives 1 free upgrade
  • Customer receives 2 free upgrades
  • Customer must see salesperson

In the table below, slots marked ‘-’ identify invalid scenarios. Slots with an ‘X’ are valid scenarios and should contain an executable test. The slots with a ‘?’ mean a question should be raised for clarification.

Scenario      1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16
Level 1       Y  Y  Y  Y  Y  Y  Y  Y  N  N  N  N  N  N  N  N
Level 2       Y  Y  Y  Y  N  N  N  N  Y  Y  Y  Y  N  N  N  N
Level 3       Y  Y  N  N  Y  Y  N  N  Y  Y  N  N  Y  Y  N  N
Service Fees  Y  N  Y  N  Y  N  Y  N  Y  N  Y  N  Y  N  Y  N
1 Upgrade     -  -  -  -  -  -  -  -  -  -  X  -  X  -  -  -
2 Upgrades    -  -  -  -  -  -  X  ?  -  -  -  -  -  -  -  -
Salesperson   -  -  -  -  -  -  ?  ?  -  -  X  X  X  X  ?  ?

Based on the example, Scenario 1 is invalid because a customer cannot be rated as Level 1, Level 2, and Level 3. Scenarios 2, 3, 4, 5, 6, 9, and 10 are also invalid.

Scenario 11 is valid because the customer is rated as Level 2 and paid their service fees. Scenarios 12, 13, and 14 are also valid scenarios.

Scenario 7 is a valid scenario with a question. It’s valid that the customer is rated as Level 1 and paid their service fees, but the “?” is there because the requirement did not mention whether a customer rated as Level 1 should see the salesperson.

Scenario 8 has two questions because the customer is rated as Level 1, but did not pay their service fees. Therefore, the tester can ask if the Level 1 customer receives 2 upgrades without paying their service fee. While I personally don’t believe the customer should receive 2 upgrades, we shouldn’t let this opinion cloud the test design.

Scenarios 15 and 16 have question marks because one scenario shows a service fee paid without the customer being rated as Level 1, Level 2, or Level 3. The other scenario shows an ‘N’ in each slot, which raises the question of whether there are levels outside of the stated three.
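
To see how the first two techniques translate into executable checks, here is a minimal pytest sketch built on the driving-age and order-count examples above; the two small functions stand in for the application logic under test:

    import pytest

    def can_drive(age: int) -> bool:
        # The correct rule: drivers must be 16 or older (>= 16, not == 16)
        return age >= 16

    @pytest.mark.parametrize("age,expected", [
        (15, False),  # just below the boundary
        (16, True),   # the boundary itself
        (17, True),   # just above the boundary; catches an accidental == 16
    ])
    def test_driving_age_boundary(age, expected):
        assert can_drive(age) == expected

    def order_count_is_valid(orders: int) -> bool:
        return 1 <= orders <= 10

    @pytest.mark.parametrize("orders,expected", [
        (-7, False),  # partition: 0 or less (invalid)
        (5, True),    # partition: 1 to 10 (valid)
        (34, False),  # partition: 11 or higher (invalid)
    ])
    def test_order_count_partitions(orders, expected):
        assert order_count_is_valid(orders) == expected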

AI in software testing is here to stay

Artificial Intelligence (AI) simulates human intelligence in machines, which paves the way for automating test cases, accelerating testing cycles, preparing test data, analyzing test logs, and many other testing processes. Many organizations are implementing AI to assist with early defect detection, reduce costs, and generate a variety of test types, such as regression and API test cases.

With such rapid adoption, it’s important to establish a dialogue with IT and other governing departments. For a deeper dive into AI, check out Revolutionizing Software Testing: The Power of AI In Action.

Selenium and Playwright are the most popular automation libraries

Originally released in 2004, Selenium is backed by an extensive community. Playwright is backed by Microsoft and was initially released in January 2020. Both automation libraries are popular in the testing community. Cypress is another contender, but it’s limited because it only supports JavaScript, whereas Selenium and Playwright support multiple languages.

Selenium has the same bindings for each supported language, which means the same commands in Java are also available in Python, for instance. Note that Playwright offers more commands and features for JavaScript/TypeScript than for other languages.

Both libraries also support multiple operating systems, multiple browsers, and multiple test runners. The below table compares Selenium and Playwright.

                     Selenium                              Playwright
Supported Languages  Java, Python, C#, JavaScript, Ruby    Java, Python, C#, JavaScript, TypeScript
Operating Systems    Windows, Mac, Linux, Solaris          Windows, Mac, Linux
Browsers             Chrome, Safari, Firefox, etc.         Chromium, Firefox, WebKit
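
To get a feel for the two APIs, here is the same trivial flow in both libraries, using their Python bindings; the URL and element ID are placeholders:

    # Selenium
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com")
    driver.find_element(By.ID, "submit").click()
    driver.quit()

    # Playwright (sync API)
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        page.click("#submit")
        browser.close()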

Codeless and low-code tools are valuable

Codeless and low-code tools are valuable if paired with good test design and testing best practices. Broadly speaking, a codeless tool allows a person without any knowledge of coding to perform a testing function, whereas a low-code tool enables an individual with some coding skills to perform the same functions.

Even professionals who are fluent with code can find value in these tools–and free up time to get back to building their apps. As always, it’s important to understand the features and limitations of these tools.

The great divide between developers and testers

Both developers and testers are crucial to building a product. A developer writes code to create functionality, while a tester determines if the functionality passes or fails per requirements. The development team wants to discover and fix bugs before the software reaches production, so end users won’t run across these defects themselves and switch products or leave negative reviews.

In some organizations, there is friction between developers and testers. Developers blame testers for being a bottleneck, since their release is held up until testing is complete. Testers may cite unrealistic expectations and a lack of time allotted for thorough validation. They might also bite back, complaining about developers taking a long time to deliver code, only to throw it over the wall half-baked.

There may be some cases where there are flashes of truth on either side of the fence. But heading into the new year, let’s remember we’re all on the same team. How’s this for a resolution? Let’s honor our respective crafts because we’re both invested in a shared goal: delivering a great product.

🧐 What are your top lessons learned in testing this year?
Have a wonderful year ahead! 🪩

Mobile App Testing Checklist
https://www.testim.io/blog/mobile-app-testing-checklist/ (published Wed, 06 Dec 2023)

Testing mobile applications is an essential step in the development process that confirms the software runs without a hitch on a variety of platforms and devices. As smartphones become more and more popular and people rely on them for different tasks, it is vital for app developers to provide a smooth user experience. In this article, we will cover the most important aspects of mobile app testing that should be covered to increase confidence in the quality of the software.

Whether you are a developer, QA professional, or someone interested in learning about the testing process, this checklist provides valuable insights and best practices to ensure your mobile app is thoroughly tested and ready for deployment.

Functional Testing

The goal of functional testing is to validate the application can perform its intended functions correctly. Some items on this checklist are generic and should be applied to any application. Others are specific to mobile app testing.

Installation and Launch:

  • Verify the app installs correctly on different devices.
  • Ensure the app launches without crashing.

User Interface (UI):

  • Check the overall look and feel of the app.
  • Confirm all UI elements are correctly displayed and aligned.

Navigation:

  • Test the app’s navigation to ensure users can move smoothly between screens.
  • Verify all navigation buttons and gestures work as expected.

Data Input and Validation:

  • Test all input fields for proper validation (e.g., email, phone number).
  • Cover negative and positive scenarios.
  • Apply black-box testing techniques.
  • Verify the expected error messages are displayed for invalid inputs (see the sketch after this list).
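
As a small illustration of covering positive and negative input scenarios, here is a hypothetical email-field validator exercised with parametrized pytest cases:

    import re

    import pytest

    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately simple

    def is_valid_email(value: str) -> bool:
        return bool(EMAIL_RE.match(value))

    @pytest.mark.parametrize("value,expected", [
        ("user@example.com", True),    # positive scenario
        ("user@example", False),       # missing top-level domain
        ("", False),                   # empty input
        ("user example.com", False),   # no @ at all
    ])
    def test_email_field_validation(value, expected):
        assert is_valid_email(value) == expected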

Functional Modules:

  • Test each functional module of the app (e.g., registration, login, payment).
  • Verify all features work as intended.

Integrations:

  • Validate integrations with third-party apps (such as login using Google or Facebook) work as expected.

Offline Functionality:

  • Test the app’s functionality while in offline mode (where applicable).
  • Confirm that users receive appropriate feedback about their offline status.

Interruption Testing

Interruption testing is a type of testing strictly related to mobile apps. It tests how the application reacts when dealing with interruptions, such as incoming notifications, and how it recovers from them. This checklist is not exhaustive, but here are common interruptions to check when testing a mobile app:

  • Test how the application behaves when there is an incoming call.
  • Test how the application behaves when there is an incoming message.
  • Validate the application reacts correctly when the phone receives a notification about low battery or battery charging.
  • Confirm the app behaves correctly when there is a network switch or the network is disconnected/reconnected.
  • Check how the app behaves when an alarm starts.

When testing interruptions, it’s important to check not just the interruption itself, but how the application recovers after the interruption.

Compatibility Testing

This type of testing is important for any type of application, including web or desktop, but it’s especially crucial for mobile testing. While most people are using an Apple or Samsung Galaxy device at the time of writing this article, there are still many manufacturers, models, and OS versions in use. Through compatibility testing, you want to make sure your AUT works as expected on all supported devices and OSs. Ideally, this type of testing should be automated, so the tests are repeated on various configurations. You can choose between testing on real devices or using emulators and simulators.

The main checks for compatibility testing are:

  • Test the app on multiple supported devices and screen sizes.
  • Ensure compatibility with different operating system versions.
  • Test backward compatibility with previous app versions.

Usability Testing

The tests in this sub-category refer to how easy it is for the users to work with the application, take advantage of core functionality, and understand how it works. The checklist is divided into three categories: user experience, which relates to how user-friendly the app is; accessibility, which focuses on how well users with disabilities can work with the app; and localization, which is needed when we expect the application to adapt to language and geographical region.

User Experience (UX):

  • Evaluate the overall user experience.
  • Ensure the app is intuitive and user-friendly.

Accessibility:

  • Check for accessibility features for users with disabilities.

Localization:

  • Test the app in different languages and regions.
  • Ensure proper translation and consideration of cultural nuances.

Performance Testing

Your users don’t have endless amounts of patience, so how fast an app works can make or break it. With so many available apps on the market, users might replace a slow app with a faster one, even if the functionality works as expected. It’s important that mobile apps don’t consume a lot of memory or drain battery power. Below are the things to consider for performance testing.

Load Time:

  • Check the performance of the application with a large number of users.
  • Test the app’s load time on different network speeds.
  • Optimize for quick loading, especially on slower connections.
  • Verify how the app behaves when uploading or downloading big files (if applicable).

Memory Usage:

  • Check for memory leaks and excessive memory usage during prolonged use.

Battery Consumption:

  • Test the app’s impact on device battery life.

Security Testing

Security testing focuses on identifying vulnerabilities and making sure that sensitive user data is protected. It involves checking for potential security breaches, like unauthorized access, data leaks, and encryption vulnerabilities, to ensure the app meets the highest security standards. Below you can find common recommended tests.

Data Encryption:

  • Test whether sensitive data is encrypted during transmission and storage.

Authentication and Authorization:

  • Test login/logout functionality.
  • Verify that user data is secure and accessible only to authorized users.

Secure Transactions (if applicable):

  • Check financial transactions for security vulnerabilities.

Final Thoughts

Thorough testing for mobile apps, or any type of application, really, is very important, and this checklist aims to cover key tests that should be performed before releasing to production. I would personally consider using a test automation tool to automate the most repetitive tasks, so the testing team can focus on more experience-based testing, such as exploratory and ad hoc.

Revolutionizing Software Testing: The Power of AI in Action
https://www.testim.io/blog/ai-software-testing-revolution/ (published Mon, 27 Nov 2023)

What’s all the buzz about testing software with artificial intelligence (AI)? Let’s start with a basic definition: AI is the simulation of human intelligence in machines. It is composed of several computer science learning branches that focus on creating systems programmed to perform tasks that require advanced cognitive functions. Some of those functions include analyzing data, making decisions, recognizing patterns, and learning and adapting to new information.

AI has many applications across diverse industries such as entertainment, healthcare, transportation, and finance. By the end of this article, you will know what software testing with AI is, how AI helps with software testing tasks, and the benefits and drawbacks of using it.

What is software testing with AI?

Software testing with AI is the process of using artificial intelligence to assist QA testers. It can support organizations by increasing test coverage, improving testing efficiency, and enhancing the overall software quality.

AI requires careful planning to integrate with existing testing processes. In addition, an organization must consider refining the AI models and algorithms to be effective. The software testing lifecycle (STLC) is not interrupted when using AI and follows the traditional tent poles of testing: analyzing requirements, test planning, creating test cases, setting up test environments, executing test cases, and test closure.

  1. Requirement analysis phase – Natural Language Processing (NLP) is a branch of AI that enables machines to understand and interpret human language, which means NLP can help analyze test requirements.
  2. Test planning phase – AI has the ability to gather data from defect reports, test execution logs, past project performance, and other test artifacts, and then establish patterns. AI can marshal this test info to identify high-risk areas and software testing challenges.
  3. Test case creation phase – AI can automatically create test cases based on the project’s requirements. It analyzes the requirements then lists the potential positive test cases and negative test cases. Also, AI can generate boundary conditions to increase test coverage.
  4. Test environment setup phase – AI automates the setup and configuration of databases, test servers, etc. while reducing the setup time. It can also manage test environments.
  5. Test case execution phase – AI optimizes the test case execution order based on factors such as risk assessment, dependencies, and business impact. This ordering provides early detection of high-priority defects and ensures critical functionalities are tested first.
  6. Test closure – AI assists with the final testing phase by supplying stakeholders with test results. It assists with defect analysis, generates and compiles test summaries, and formats test results.

How AI helps with software testing tasks

AI helps with testing software by performing each task more quickly, efficiently, effectively, and accurately. However, it’s important for an organization to balance tasks between AI and their existing functional testing team. Here is a list of some AI software testing tasks.

  • Test data generation – AI tools can generate various test data sets to cover common scenarios and edge cases (a small sketch follows this list).
  • Self-healing – AI enables self-healing tests by automatically adapting to changes in the application’s User Interface (UI).
  • Regression test selection – AI automatically selects relevant test cases based on changes to a developer’s code.
  • Execute similar test workflows – AI can learn a test workflow then automatically execute similar workflows.
  • Automatic wait – AI can automatically wait for a page to completely load before performing the next step.
  • Analyze test logs – AI can analyze logs and error messages to identify possible problems and patterns.
  • Load/performance testing – AI can simulate real world user loads and behaviors to diagnose performance issues.
  • Continuous test monitoring – AI can continuously monitor test applications and detect anomalies.
  • Exploratory testing support – AI can guide testing efforts by providing suggestions and recommendations during an exploratory testing session.
  • Predictive test analytics – AI uses historical data and test results to forecast future outcomes about potential release risks.
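
For a flavor of automated test data generation, here is a small sketch using the Faker library; the payload fields are hypothetical, and an AI-assisted tool would produce similar (and far more varied) data sets automatically:

    from faker import Faker

    fake = Faker()

    def make_registration_payload():
        """Generate a plausible registration form submission."""
        return {
            "name": fake.name(),
            "email": fake.email(),
            "phone": fake.phone_number(),
            "address": fake.address(),
        }

    # Edge case: oversized name to probe field-length handling
    long_name = {**make_registration_payload(), "name": "x" * 256}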

When an organization incorporates AI into their software testing tasks, they achieve faster testing cycles and higher test coverage.

Benefits of using AI for software testing

AI completes any task supplied to it by a person and, upon completion, learns from the task. Over time, AI improves by learning how to perform the same task better. Here are some pros of using AI for testing:

  • Cost reduction – AI can lead to cost savings by reducing manual testing efforts, improving resource allocation, and accelerating delivery time to market.
  • Consistency and repeatability – AI safeguards test cases by executing them consistently, which leads to more reliable test results.
  • Early defect detection – AI can identify defects early allowing for quicker resolution.
  • Swift feedback – AI automates parallel test execution across multiple devices and environments, setting the table for a shorter feedback cycle from stakeholders.
  • High quality test cases – AI can generate a variety of test types such as unit test cases, regression test cases, and API test cases.

Drawbacks of using AI for software testing

There are some caveats with using AI in the software testing industry, starting with a reminder that, as efficient as AI may be, it’s not a replacement for manual or automated testers. Instead, it’s meant to extend the capabilities of both types of testers, boosting testing processes and procedures. Here are a few reasons why an organization may want to be cautious about incorporating AI:

  • Prone to bias – AI models can inherit biases from their training data and potentially produce inaccurate outcomes.
  • Lack of data / difficult to train – AI needs an adequate amount of data for training and validation to be effective.
  • High initial cost – Organizations with a limited budget may find AI costly, since training it requires time investment.
  • Privacy concerns – AI may need to access sensitive user data, so organizations must consider ethical implications and privacy regulations.
  • Maintenance challenges – AI models require ongoing retraining and maintenance.
  • Loss of human touch – Human testers can use their experience and intuition to discover defects that may not be obvious to AI.

Conclusion

AI in software testing refers to the use of AI tools to enhance the software testing process. Solutions such as Tricentis Testim and Tosca leverage artificial intelligence to improve software quality and accelerate release cycles.

It’s important to align AI-powered testing to your project’s goal, keeping in mind the pros and cons we’ve explored, but when used right, the improvements in the STLC can be night-and-day for your team. These are early days for this field, and as the industry pushes the limits and capabilities of AI, this transformative technology will only continue to evolve and improve.

Developer Tools for Testers
https://www.testim.io/blog/developer-tools-for-testers/ (published Wed, 22 Nov 2023)

As a tester, chances are you work with a web app (mobile apps have gained a lot of ground lately, too). To make testing more efficient, it’s good practice to use what you have in your toolkit, and one of the most readily available (and free!) tools for testers are the Developer Tools (or DevTools, for short) found in all browsers.

In this post, I will walk you through the most common ways Developer Tools can help in your daily testing duties. The screenshots and details will be on Google Chrome, but the same features apply to most browsers.  

How to Open the Developer Tools

Let’s start at the beginning. To use DevTools, you need to open them first (duh!). This can be done in multiple ways. I prefer the classic keyboard shortcut, which on Chrome and Windows is the F12 key. For Mac users, the keyboard shortcut is Option + Command + I (Ctrl + Shift + I also works on Windows). The same combinations work for the Edge and Firefox browsers.

Developer Tools have multiple tabs available, where you can work with different types of data from your website:

[Image: Open Developer Tools]

We will cover the most important ones for testing in a second.

Inspecting Elements

If you work with UI web automation tools such as Selenium WebDriver or Cypress, you’re likely familiar with the Elements tab of the DevTools. Just getting started? Then this is the right time to learn how to use it. You can also open the Elements tab by right-clicking on an element on the page, and selecting Inspect:

[Image: Inspect Element]

This will open the Developer Tools in the Elements tab and focus on the selected element’s code. From here, you can see the web element information you need, such as classes, IDs, and names, or build the CSS selector / XPath of the element:

[Image: Element Information In Developer Tools]

You can also use the search function in the Elements tab to look for elements by string, selector, or XPath. Hovering over the HTML node will highlight the element on the page:

[Image: Highlight Element]

I find this particularly useful when working with complex XPaths because I can see if the XPath identifies the correct element.

Just a small callout: I would not rely on the automatically created XPath or CSS selectors, but rather build my own based on the information I find in this tab. Then use the search function to see that the right element is identified.
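
To illustrate the difference in Selenium (Python), with a hypothetical login page and element attributes:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")

    # Brittle: auto-generated absolute XPath, breaks on any layout change
    driver.find_element(By.XPATH, "/html/body/div[2]/div/form/div[3]/button")

    # Robust: handcrafted selector anchored on stable attributes
    driver.find_element(By.CSS_SELECTOR, "form#login button[type='submit']")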

Viewing HTTP Requests and Responses

This is done in the Network tab, which is the DevTools feature I work with most when testing APIs. Here you can see which requests have been sent to the server, how long each request took, and what the response was. I’ll use the Restful-booker web platform to demonstrate. When the page loads, this is what you can see in the Network tab:

[Image: Developer Tools Network Tab]

The Waterfall view can help you see the order in which the requests were sent and how long it took to receive a response. You can also see the details per request by clicking on the Name:

[Image: Request Information Developer Tools]

Here, you can see the Headers of the request and the response, the HTTP method used, the status code, the Response, and other useful information. This is helpful when you want to test whether the client sends the correct request or whether the server returns the correct response. It’s also useful if the application you’re testing has performance issues and you want to see which requests take the most time.
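
Once you have inspected a request in the Network tab, you can re-issue it outside the browser and assert on what you saw. A minimal sketch with Python’s requests library, assuming the public Restful-booker demo API:

    import requests

    resp = requests.get("https://restful-booker.herokuapp.com/booking")
    assert resp.status_code == 200
    assert "application/json" in resp.headers["Content-Type"]
    assert isinstance(resp.json(), list)   # the endpoint returns a list of bookings
    print(resp.elapsed.total_seconds())    # rough per-request timing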

Performance Tracing

In addition to request tracking, a relatively new feature of DevTools is the Performance Insight tab, which offers information about the web app’s performance. You can record and play back a scenario, then view details about the events that occur while performing actions inside a web application. For example, you can see which actions take a long time, along with suggestions on how to fix them:

[Image: Performance Tracing Developer Tools]

While I wouldn’t completely rely on DevTools for performance testing, it is a good first step for identifying potential issues before starting to incorporate more “serious” performance testing in your testing strategy. 

Simulating Slow Network Connections

Speaking of performance, if you need to validate that the AUT works as expected even in not-so-ideal conditions, you can simulate a slow network connection from the Developer Tools without having to interfere with your actual connection. This can be done from the recording option described above, from the legacy Performance tab, or from the Network tab. You also have the option to make your CPU operate slower than normal:

[Image: Change Network Settings in Developer Tools]

Or simulate a slow 3G connection and even go offline:

[Image: Use Developer Tools to simulate working offline]

This sort of test reveals interesting information about how the application may behave on slower devices or when the users are working with it while temporarily losing their Internet connection.
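
This throttling can also be scripted for repeatable slow-network tests. ChromeDriver exposes network conditions through Selenium’s Python bindings (Chrome-only); the latency and throughput values below are arbitrary:

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.set_network_conditions(
        offline=False,
        latency=300,                      # added round-trip latency, ms
        download_throughput=500 * 1024,   # bytes per second
        upload_throughput=256 * 1024,
    )
    driver.get("https://example.com")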

Emulating Device Settings

From the settings menu, you can select Devices and see how the application looks and how you can interact with it on different pre-set devices. You can also use your own custom setup:

[Image: Developer Tools Emulated Devices]

This does not replace testing on a real device or using a simulator or emulator, but if you don’t have access to those, this DevTools feature can give you an idea of how the application looks and feels on a device other than a desktop browser.

Emulating Different Geographical Locations

Last in the DevTools arsenal is Location. You’ll find this under the Settings menu, where there is a pre-set list of locations with details such as timezone, latitude, and longitude. Some websites or web applications will show different data depending on the user’s location, so if you want to test how users from various locations can work with the application, you can override your settings. For this, you first need to open the Command menu (keyboard shortcut Ctrl + Shift + P on Windows, or Command + Shift + P on Mac):

[Image: Run Command in Developer Tools]

Here, type “Show Sensors” and select the available option: 

[Image: Show Sensors]

This opens a new menu where you can select to override your current location:

[Image: Show Sensors Menu]
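
The same override can be automated. With Selenium driving a Chromium browser, the DevTools commands behind this menu are reachable via CDP; the coordinates below (Berlin) are purely an example:

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.execute_cdp_cmd("Emulation.setGeolocationOverride", {
        "latitude": 52.52,
        "longitude": 13.405,
        "accuracy": 100,
    })
    driver.execute_cdp_cmd("Emulation.setTimezoneOverride",
                           {"timezoneId": "Europe/Berlin"})
    driver.get("https://example.com")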

Conclusions

Developer Tools can be a good ally in a tester’s work, and it’s important that we know how to take advantage of what they have to offer. This blog post only covers some of the main features of DevTools for testers, but if you’re hungry for more, check out the full documentation for Chrome, Firefox, Edge, and Safari.

Become a LocatorXpert
https://www.testim.io/blog/smart-locators-benefits-examples-webinar/ (published Tue, 14 Nov 2023)

When your users expect rapid, high-quality releases, quick and reliable quality assurance (QA) is crucial. To meet this growing need, automation through code has grown in popularity, particularly in CI/CD pipelines. But using XPath can incur heavy time costs and a heavy maintenance burden.

What are smart locators and why do you need them?

That’s why Testim built smart locators. What are these and what do they do? Testim offers a user-friendly interface to record your tests. Let’s say you click a button. When you record a click action, Testim uses an AI-based algorithm to analyze the DOM (i.e., the structure of the web page) from the top all the way down to the clicked element. It collects unique identifiers for each element and assigns confidence levels to each attribute. For example, a “text” attribute might have a higher confidence level than a “style” attribute due to its unique nature.

[Image: Locators DOM]

There’s a smarter, more automated way to find elements. Unlike plain XPath or CSS selectors, smart locators gather multiple parent elements, each with multiple unique attributes, for a single element. This provides a stable way to find elements during testing and handles changes in the DOM gracefully, so your tests won’t break when attributes change in new versions.
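
To give a feel for the idea, here is a toy sketch of confidence-weighted attribute matching. It illustrates the concept only; it is not Testim’s actual algorithm, and the weights are invented:

    # Hypothetical confidence weights per attribute type
    CONFIDENCE = {"id": 0.9, "data-testid": 0.9, "text": 0.7, "class": 0.4, "style": 0.1}

    def score(candidate: dict, recorded: dict) -> float:
        """Weighted share of recorded attributes the candidate still matches."""
        total = sum(CONFIDENCE.get(k, 0.2) for k in recorded)
        matched = sum(CONFIDENCE.get(k, 0.2)
                      for k, v in recorded.items() if candidate.get(k) == v)
        return matched / total if total else 0.0

    recorded = {"id": "buy-now", "text": "Buy now", "class": "btn btn-primary"}
    candidate = {"id": "buy-now", "text": "Buy now", "class": "btn btn-cta"}
    print(score(candidate, recorded))  # still high: the stable attributes agree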

Locators in action

During playback, Testim uses smart locators to locate and score the elements, giving a clear view into which element was selected along the way using an element ID injected by Testim into the DOM:

[Image: ElementId]

Furthermore, Testim’s smart locators allow for dynamic testing by letting you parameterize locators and use regex. If your application consistently presents a greeting message such as “welcome username,” for example, you can customize your locators using regular expressions (regex). There are several pre-defined regex options available, including “contains,” “starts with,” “ends with,” and “equals.” Here’s an illustration of the “contains” option:

[Image: ParamName]

Say you need to select a specific value from a dropdown—no worries! You can define a parameter with a specific value and implement it in the Inner Text field inside the Edit Locator modal:

[Image: ParameterAgain]

Give smart locators a try and drop us a line

Testim’s smart locators allow you to record tests with a user-friendly interface and tap into AI to find elements quickly and stabilize your tests, so you can minimize the time you spend on maintenance and author new tests instead. Take them for a spin and let us know what you think! Happy testing.


📅 Save the date & join our webinar to become a LocatorXpert 💫

[Image: Locators Webinar - Invitation]

How we built Machine Learning Locators for Testim Mobile
https://www.testim.io/blog/discover-new-machine-learning-mode-for-testim-mobile/ (published Thu, 02 Nov 2023)

A mobile app evolves over time as its look and feel is regularly updated. A button may move to a different location, colors may be updated, and text may be changed. But that button being moved–is it the same button? This question is crucial when testing a mobile app, as the target elements (buttons, fields, headings, images, etc.) that were recorded will change over time. In legacy testing approaches, you may expect tests like this to fail until they’re rewritten or re-recorded, but modern mobile testing calls for tests to be stable, dynamically accounting for changes without the need to re-record steps.

Using machine learning to determine if the target element is the same

Testim’s new Machine Learning Locators use machine learning models, which have been trained with tens of thousands of examples, to determine if the target element is the same. Whether the element changes or ceases to display on the screen, these new Locators ensure your tests remain stable without the need for any user intervention or editing of the target element.

An insider’s look at the machine learning model 

As with any machine learning solution, the ML model needs to be trained with a large dataset consisting of thousands of examples. Part of this training calls for the machine learning model to analyze a set of features of the original target element captured in the recording, then compare it against a set of features of the target element during the execution of the test, known as test playback. These sets of features are part of the screen’s DOM, also referred to as locators. Examples of locators include an element’s ID, its text, its class, and more. The system adds a variety of pre-calculated features to give additional context to the target element and its location on screen.

Comparing the original recording to its playback takes the text, images, and other proprietary data of a specific application and yields a new set of aggregate data points indicating the similarity or difference of each feature. This “anonymized comparative” data, which effectively contains only numbers, is fed to the model during training. This protects our clients’ privacy and produces a generalized model that can infer the behavior of target elements across virtually any scenario.
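To make that concrete, here is a minimal sketch (not Testim’s actual implementation) of reducing a recorded element and a playback candidate to purely numeric comparative features; note that only numbers come out the other end:

  interface ElementFeatures {
    id: string;
    text: string;
    className: string;
    x: number; // on-screen position
    y: number;
  }

  // Token-set (Jaccard) similarity: 1 = identical, 0 = nothing shared.
  function tokenSimilarity(a: string, b: string): number {
    const ta = new Set(a.split(/\s+/).filter(Boolean));
    const tb = new Set(b.split(/\s+/).filter(Boolean));
    if (ta.size === 0 && tb.size === 0) return 1;
    let shared = 0;
    for (const t of ta) if (tb.has(t)) shared++;
    return shared / (ta.size + tb.size - shared);
  }

  // The comparative vector holds only numbers; no client text or images remain.
  function comparativeFeatures(rec: ElementFeatures, cand: ElementFeatures): number[] {
    return [
      rec.id === cand.id ? 1 : 0,                     // id match
      tokenSimilarity(rec.text, cand.text),           // text similarity
      tokenSimilarity(rec.className, cand.className), // class similarity
      Math.hypot(rec.x - cand.x, rec.y - cand.y),     // distance moved on screen
    ];
  }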

At playback, the model uses this anonymized data to analyze the entire screen and find the target element. It does so by scoring each element and checking whether the score exceeds certain thresholds. This means the ML model can not only decide whether an existing element has changed, but also call out when an element does not exist on the screen at all (e.g., if someone has accidentally deleted it).

A robust machine learning model that considers a vast number of data points

The training enables the machine learning model to consider a vast number of data points across the entire screen, including generated contextual data points, in a weighted manner. The ML model develops a deep understanding of the expected or typical behavior of different types of elements and assigns different weights to its data points accordingly. For example, “list items” typically grow over time, which means their relative on-screen location should weigh less. Traditional rule-based algorithms would fail here, since it is nearly impossible to write rules that consider such a vast number of attributes in a weighted manner. A machine learning model, however, continues to improve over time as it undergoes more training.
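As a toy illustration of the weighting idea (the real model learns its weights from training data rather than having them hand-written like this):

  type ElementType = "button" | "listItem";

  // Hand-picked weights for illustration only; a trained model learns these.
  const WEIGHTS: Record<ElementType, { text: number; location: number }> = {
    button:   { text: 0.5, location: 0.5 },
    listItem: { text: 0.8, location: 0.2 }, // lists grow, so position matters less
  };

  function weightedScore(type: ElementType, textSim: number, locationSim: number): number {
    const w = WEIGHTS[type];
    return w.text * textSim + w.location * locationSim;
  }

  // Identical raw similarities, but the element type changes the verdict:
  console.log(weightedScore("button",   0.9, 0.2)); // 0.55
  console.log(weightedScore("listItem", 0.9, 0.2)); // 0.76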

Adding optional manual fine-tuning of thresholds

Testim Mobile also offers the ability to manually fine-tune the algorithm’s threshold through the UI, as follows:

  • High threshold – target elements found only when the confidence score is high
  • Medium threshold – target elements found only when the confidence score is medium
  • Low threshold – target elements found even if the confidence score is low

Each level has its pros and cons. A low threshold, for example, may yield false positives. Conversely, a high threshold may fail to find your target element at all. We wanted to give you this flexibility, so you can customize your thresholds to fit your needs.
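In pseudocode, the gating works roughly like this (a sketch with hypothetical cutoff values, not Testim’s real numbers):

  type ThresholdLevel = "high" | "medium" | "low";

  // Hypothetical cutoffs for illustration; the real values are internal.
  const CUTOFFS: Record<ThresholdLevel, number> = {
    high: 0.9,   // fewer false positives, but a changed element may be missed
    medium: 0.7,
    low: 0.5,    // finds more candidates, but may yield false positives
  };

  interface Candidate { elementId: string; confidence: number }

  // Return the highest-confidence candidate that clears the cutoff, if any.
  function findTarget(candidates: Candidate[], level: ThresholdLevel): Candidate | null {
    const cutoff = CUTOFFS[level];
    const best = [...candidates]
      .filter((c) => c.confidence >= cutoff)
      .sort((a, b) => b.confidence - a.confidence)[0];
    return best ?? null; // null means the element is treated as missing
  }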

Availability

Testim Mobile’s new Machine Learning Locators will be available to all users on paid plans. They will be enabled by default, but you may switch back to the traditional “Fallback locators mode” if you prefer. For more information, see https://help.testim.io/docs/editing-target-element-properties-copy.

Experience the ML locator’s power and share your thoughts with us 💬
Enjoy the ride & supercharge your mobile testing!

 

EnterpriseOps for Built-In Quality https://www.testim.io/blog/enterpriseops-for-built-in-quality/ Mon, 30 Oct 2023 12:32:17 +0000 https://www.testim.io/?p=7702

Ask two people the definition of DevOps these days and if you get fewer than five answers, you are doing well. My casual definition of DevOps is “a culture around how software is developed and delivered into production operations.” That does not mean everyone does it the same way. One primary and fundamental characteristic of this is continuous movement (hopefully progress). This means continuous development, testing, deployment, and monitoring. 

Where did quality (i.e., Quality Assurance) go? It’s assumed to be fully embraced within development, but many have not seen it executed as such. Even so, the path forward is clear: quality practices must adapt to a continuous model or find themselves left behind. We need to start talking about “Built-in Quality” (BiQ) rather than Quality Assurance (QA). QA is a verification step after the software has been developed. When traditional quality teams embrace BiQ as their standard bearer, they will see that their function no longer attempts to test quality in, but ensures it is built in all along the development stream.

I have heard about the term EnterpriseOps, and I personally like it. EnterpriseOps (short for Enterprise Operations) is a comprehensive set of activities and processes within an organization, focused on efficiently and effectively managing its day-to-day operations, infrastructure, and resources to support its business objectives and ensure smoothly functioning systems. Even if you are a startup, you can operate like an enterprise by using the same sound guiding principles of DevOps. It involves a combination of strategic planning, process optimization, technology adoption, and ongoing monitoring and improvement efforts. The more complex the organization, the better I think this definition fits.

How does built-in quality work within the EnterpriseOps model? 🧐
Is it compatible with Agile software delivery methods & a DevOps culture? Yes! ✅

Below is a list of common traits and how each fits:

  • Establishing Quality Standards: Quality standards are frequently defined by agile teams in the form of a “Definition of Done” (DoD) or acceptance criteria for user stories and features. These guidelines ensure that the team is on the same page about what defines a completed, high-quality product increment. DevOps teams establish quality requirements for the software they produce, frequently expressed as service-level objectives (SLOs) or service-level indicators (SLIs); a small sketch of an SLI check appears after this list. These standards help ensure that the software satisfies performance and reliability expectations.
  • Process Documentation: While Agile approaches prioritize functioning software over detailed documentation, they also advocate the provision of lightweight process documentation to help teams understand how to accomplish tasks consistently. Despite the fact that DevOps stresses automation and code-driven infrastructure, documentation is still necessary for understanding and maintaining infrastructure-as-code (IaC) and configuration management scripts.
  • Quality Metrics and KPIs: Agile teams track progress and product quality using a variety of metrics and KPIs. Velocity, burn-down charts, and lead time are examples of common measurements. These indicators assist teams in evaluating their performance and identifying opportunities for development. Metrics and key performance indicators (KPIs) are used by DevOps teams to monitor the performance and stability of their systems. Uptime, response time, and error rates are all common measurements.
  • Continuous Monitoring: Agile teams inspect and adapt their work on a regular basis through ceremonies such as Sprint Reviews and Daily Stand-ups. These ceremonies allow teams to track progress and address problems when they develop. DevOps encourages constant monitoring of apps and infrastructure in order to detect and respond to problems in real time. Monitoring tools and techniques are critical components of DevOps.
  • Issue Identification: Sprint Retrospectives empower Agile teams to openly identify and discuss concerns. Teams can talk about what worked well, what didn’t, and how they can do better in the next iteration. DevOps promotes a “blame-free culture” in which problems are viewed as opportunities for improvement. Incident reviews and post-mortems are popular techniques for identifying and collaboratively addressing issues.
  • Corrective and Preventive Actions (CAPA): Continuous improvement is embraced by agile teams. During each Sprint or work cycle, they prioritize addressing issues and improving processes iteratively. DevOps teams take the “fail fast, learn fast” approach. When problems arise, they prioritize speedy resolution and put preventive measures in place to avoid similar problems in the future.
  • Process Improvement: Agile’s iterative and incremental methodology promotes process improvement by definition. Teams constantly improve their methods in response to feedback and lessons learned from each Sprint or iteration. DevOps promotes a culture of constant improvement. Teams evaluate their procedures and tools on a regular basis, looking for ways to streamline and improve their delivery pipeline.
  • Training and Development: Cross-functional communication and skill development are encouraged in agile teams. Members of a team may cycle jobs to enhance their skill sets, and training is frequently incorporated into the Agile process. DevOps teams frequently spend time cross-training team members to provide them with a broader skill set. This aids in automation, infrastructure management, and other DevOps-related tasks.
  • Compliance and Regulatory Adherence: Agile teams can modify their methods to accommodate compliance and regulatory constraints. In their backlogs, they may include compliance-related user stories or tasks. Compliance checks and automated testing can be added to DevOps processes to guarantee that software and infrastructure fulfill regulatory standards.
  • Feedback Loop: The relevance of client input is emphasized in agile principles. Agile teams prioritize user feedback to inform product development and ensure that the end product satisfies the needs of the client. DevOps promotes regular feedback loops with stakeholders and operational teams. This feedback is used to enhance both the product and the delivery method.
  • Automation and Technology: To improve product quality and decrease manual errors, agile teams use automation technologies for testing, continuous integration, and delivery, which align with quality assurance initiatives. Automation is a fundamental principle of DevOps. Automation tools are used by teams for infrastructure provisioning, testing, deployment, and monitoring, all of which align with quality assurance initiatives.
  • Reporting and Communication: Agile techniques place a premium on transparency and communication. Common rituals such as Sprint Reviews ensure teams routinely report to stakeholders on their progress, impediments, and quality indicators. DevOps encourages openness and collaboration. Through channels such as chatops, dashboards, and regular meetings, teams communicate openly about their work, progress, and challenges.
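To ground the SLO/SLI idea mentioned in the first item above, here is a minimal sketch; the objective, names, and numbers are all hypothetical:

  interface Slo { name: string; target: number }       // e.g. 99.9% availability
  interface SliSample { good: number; total: number }  // e.g. successful vs. total requests

  function sliValue(s: SliSample): number {
    return s.total === 0 ? 1 : s.good / s.total;
  }

  function meetsSlo(slo: Slo, sample: SliSample): boolean {
    return sliValue(sample) >= slo.target;
  }

  const availability: Slo = { name: "availability", target: 0.999 };
  console.log(meetsSlo(availability, { good: 99_950, total: 100_000 })); // true  (99.95%)
  console.log(meetsSlo(availability, { good: 99_800, total: 100_000 })); // false (99.80%)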

EnterpriseOps is one approach that lays out ways of improving operational efficiency, customer happiness, and overall organizational performance. It does not disregard built-in quality, performance, security, or any other aspect that may have been treated as less important from a developer-centric standpoint. In a DevOps culture, quality assurance should be integrated into every stage of the software delivery pipeline, from development through deployment and operations. EnterpriseOps allows for this every step of the way.

💡 What do you think of the EnterpriseOps term – Is this something you can use for your organization? 

A Developer’s Take on QA https://www.testim.io/blog/a-developers-take-on-qa/ Mon, 24 Jul 2023 15:56:11 +0000 https://www.testim.io/?p=7529

Testim by Tricentis, originally named Testim, was created in 2014 to help anyone author automated tests. At the time, Selenium was king and tools to help developers automate their tests (e.g. Cypress, Puppeteer, and Playwright) were science fiction.

The beginning

In the beginning, Testim’s primary audience was QA engineers who wanted to avoid all the hassle in automation—grids, schedulers, edge cases in browsers like IE or Safari. As Testim evolved, so too did the audience using it. Our next big constituent? Developers.

Yes, the same developer who wears shorts, rocks a cool t-shirt from their last conference, and commands a long beard (probably). However, getting a developer out of their comfort zone can be hard, if not an outright fairy tale. To a developer, the idea of expressing a test in anything other than code is frightening. Let’s call these developers the “dev persona.”

How we approached our new audience

To better engage the dev persona, features like custom actions and command-line interface (CLI) steps were created to address use cases Testim typically wouldn’t handle. Even after shipping these cool features, one major pain point for developers remained: debugging failed tests. But before we dig into how failed tests are debugged, how do we get developers comfortable with writing tests?

Developer and QA – part two

As you may remember, in my earlier blog post I talked about how developers shouldn’t fear QA. That approach calls for involving developers throughout the QA process, starting with writing the software test plan (STP) for a feature and extending to automating the test:

  • First, the developer must talk with a QA engineer to learn how to think like one. For example, if we have an API that receives a number in the payload, the obvious thought is “OK, let’s just check that the number is not empty and finish.” But the payload could just as easily carry strings or objects, or a number outside the range of the constraint (see the sketch after this list).
  • Second, once we understand how a QA engineer thinks, we as developers can adapt our approach to writing a feature, effectively improving the feature itself and how we build in the future.
  • Third—and this might be the most important—we learn how others think and get to know people across the company, which can be just as important as delivering a high-end feature.
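Here is the sketch promised above: a hypothetical quantity field validated the way a QA engineer would attack it, covering wrong types and out-of-range values rather than just presence:

  // Hypothetical constraint: quantity must be an integer between 1 and 100.
  function validateQuantity(payload: unknown): string | null {
    if (typeof payload !== "object" || payload === null) return "payload must be an object";
    const value = (payload as Record<string, unknown>).quantity;
    if (typeof value !== "number" || Number.isNaN(value)) return "quantity must be a number";
    if (!Number.isInteger(value)) return "quantity must be an integer";
    if (value < 1 || value > 100) return "quantity must be between 1 and 100";
    return null; // valid
  }

  // Cases a developer might skip, but a QA engineer will try first:
  console.log(validateQuantity({ quantity: 5 }));      // null (valid)
  console.log(validateQuantity({ quantity: "5" }));    // "quantity must be a number"
  console.log(validateQuantity({ quantity: 2.5 }));    // "quantity must be an integer"
  console.log(validateQuantity({ quantity: 10_000 })); // "quantity must be between 1 and 100"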

Meet Testim’s new features

I hope I can get my fellow developers excited about writing STPs. After all, we share the same mission—taking our organization (or groundbreaking startup) to new heights to disrupt the industries we work in. Let’s now talk about some of the new tools you’re adding to your toolbox with Testim:

Debugging 2.0

This new set of debugging features will take you to the moon and back—and dazzle like a new clip from Lady Gaga. First, the navigation between steps is greatly improved, visually appealing, and easy to use, just like your familiar debugger:

Viewing the different steps and finding the one of interest is much easier while actively debugging. In addition, you can drag and scroll across the steps in the navigator editor while debugging, without any risk of interfering with the running test.

And here’s the cherry on top—when debugging a custom step, the test will pause, highlight how to open Chrome DevTools, and show how to debug the code, just like any other code debugger. That’s right! No more transposing the code elsewhere. No more switching away from the test and the page being executed.

Logs, logs, logs

Everyone does it—we put console logs in our front-end application to see what’s going on with our custom code, and while developing a feature we might temporarily drop a few console logs into the code base (pro tip: don’t forget to remove them before merging). Before, we could only view the logs after the test concluded. Now, as the test progresses, the logs actively appear in the shiny console at the bottom.

Scope

We know, we know. The step parameters were not the best representation of the data we had. Even I, Roy, a developer who uses Testim on a daily basis, thought it was horrific. Now, the step params have a cool JSON representation, exactly like any debugger. This JSON tree view makes it easy to drill down into a long object or simply view it colorized.

Breakpoint managers

When working with a large, complex test, you might add multiple breakpoints and forget about them. As the test executes, it will pause (maybe several times) when it arrives at a breakpoint you no longer need.

Now, all the breakpoints you add will appear in the breakpoint manager:

When hovering over an entry in the breakpoint manager, you can do two things—go to the breakpoint or delete it. Simple and elegant.

Wrapping up

I hope you enjoyed this blog post. Our goal—to help people write better tests more easily—is the same today as it was when I joined. We’re taking Testim to where no other E2E platform has ever been.

Need more guidance, want to see the features in action?! Check out this video below👇🏻

 
