As artificial intelligence reshapes the software testing landscape, UK-based Nigerian tech leader Shirley Ugwa is at the forefront of this transformation. An independent researcher, published author, and quality assurance expert, she recently authored a groundbreaking research paper, “From Scripts to Intelligence: How AI is Reshaping the Future of Software Testing,” published in the World Journal of Advanced Engineering Technology and Sciences. In this exclusive conversation, she discusses the profound impact of AI on testing, the emergence of self-healing automation, the role of Large Language Models like ChatGPT in QA, and why human testers remain irreplaceable even as machines become more intelligent.
Your recent research paper explores AI’s transformational influence on software testing. What sparked your interest in this intersection of AI and quality assurance?
I’ve been working in quality assurance for years, and I’ve witnessed firsthand the evolution from manual testing to script-based automation. But what we’re experiencing now with AI is fundamentally different: it’s not just about faster execution or better automation; it’s about introducing intelligence and adaptability into the testing process itself.
Traditional testing methods simply cannot keep pace with the increased complexity of modern software systems and the demand for rapid, high-quality releases. I realized that AI wasn’t just improving what we already do; it was fundamentally changing how we think about quality assurance. This realization compelled me to investigate deeply and document how machine learning, natural language processing, and neural networks are transforming our profession from reactive bug-catching to predictive, strategic quality engineering.
In your paper, you emphasize that traditional testing approaches are reaching their limits. Can you elaborate on these shortcomings?
Absolutely. Manual testing, though valuable for exploratory work, is inherently slow, error-prone, and unscalable. Human fatigue and oversight lead to missed defects, and the process becomes prohibitively expensive as systems grow more complex.
Even automation scripts, which promised to solve these problems, have proven brittle and maintenance-intensive. A minor UI change, like renaming a button’s ID or repositioning an element, can break entire test suites. We’ve spent countless hours maintaining these fragile scripts rather than focusing on strategic quality improvements. This “test debt” accumulates over time, absorbing valuable resources without adding proportional value.
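To make that brittleness concrete, here is a minimal sketch, assuming a Python Selenium suite; the page URL and element ID are hypothetical:

```python
# A typical brittle Selenium test: pinned to a single, hardcoded locator.
# (Illustrative sketch; the page and element ID are hypothetical.)
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# If a developer renames this ID to "submit-login", the test fails with
# NoSuchElementException even though the button still works for users.
# Every such rename means a manual script repair.
driver.find_element(By.ID, "login-button").click()

driver.quit()
```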
Furthermore, traditional approaches are reactive rather than predictive. We find bugs after they’re introduced instead of predicting where defects are likely to occur. In today’s agile, DevOps-driven environment where releases happen continuously, this reactive approach simply doesn’t cut it anymore.
How exactly is AI addressing these fundamental limitations?
AI introduces adaptive intelligence that static scripts cannot provide. Through machine learning, testing systems can analyze historical data, such as test results, bug reports, and code changes, to identify patterns and predict where defects are most likely to occur. This enables proactive, risk-based testing where we focus resources on high-risk areas rather than blindly testing everything.
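As a rough illustration of the idea, here is a minimal sketch using scikit-learn; the per-module features and data are hypothetical, and a real pipeline would mine them from version control and bug trackers rather than hard-coding them:

```python
# Illustrative sketch of risk-based test prioritization: train a
# classifier on historical per-module metrics, then score current
# modules so the riskiest ones are tested first.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-module features: [recent commits, lines changed, past bug count]
X_history = np.array([
    [12, 450, 7],   # frequently churned, buggy in the past
    [2,  30,  0],   # stable module
    [8,  210, 3],
    [1,  15,  0],
])
y_history = np.array([1, 0, 1, 0])  # 1 = defect found after release

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

# Score current modules and prioritize testing accordingly.
current = {"checkout": [10, 380, 5], "profile": [1, 20, 0]}
for name, feats in current.items():
    risk = model.predict_proba([feats])[0][1]
    print(f"{name}: defect risk {risk:.2f}")
```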

Self-healing test scripts represent a particularly exciting breakthrough. When application elements change, AI algorithms can automatically identify the modified elements and update test scripts dynamically without human intervention. This dramatically reduces maintenance overhead and keeps automation suites resilient despite constant application changes.
Essentially, AI shifts testing from a manual, execution-focused activity to an intelligent, strategy-focused discipline where machines handle repetitive tasks and humans focus on judgment, creativity, and risk assessment.
You’ve explored generative AI extensively, particularly Large Language Models like ChatGPT. How are these technologies specifically transforming testing workflows?
Generative AI, especially LLMs like ChatGPT, represents a paradigm shift in how we create and manage tests. These models have been trained on massive text corpora and can understand context, generate coherent scenarios, and even produce automation code from natural language descriptions.
In my research, I demonstrated how ChatGPT can generate comprehensive test scenarios from simple user stories. For example, given a requirement like “users should be able to log in with their email and password,” ChatGPT instantly produces dozens of test cases covering positive flows, negative scenarios, edge cases, boundary conditions, and security considerations: work that would take human testers hours or days to complete manually.
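The flavor of that output looks something like the following: a hypothetical pytest suite of the kind an LLM typically drafts for that login story, with a toy `login` stub standing in for the real application so the cases run standalone:

```python
# Hypothetical pytest suite of the kind an LLM drafts from the story
# "users should be able to log in with their email and password".
import re

import pytest

VALID_ACCOUNTS = {"user@example.com": "Correct#Pass1"}

def login(email: str, password: str) -> str:
    """Toy stand-in for the system under test."""
    if not email or not password or len(password) > 1024:
        return "validation_error"
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return "validation_error"
    return "success" if VALID_ACCOUNTS.get(email) == password else "auth_error"

@pytest.mark.parametrize("email,password,expected", [
    ("user@example.com", "Correct#Pass1", "success"),        # happy path
    ("user@example.com", "wrong-password", "auth_error"),    # bad password
    ("missing@example.com", "Correct#Pass1", "auth_error"),  # unknown user
    ("", "Correct#Pass1", "validation_error"),               # empty email
    ("user@example.com", "", "validation_error"),            # empty password
    ("not-an-email", "Correct#Pass1", "validation_error"),   # malformed email
    ("user@example.com", "x" * 1025, "validation_error"),    # length boundary
    ("user@example.com", "' OR '1'='1", "auth_error"),       # injection probe
])
def test_login(email, password, expected):
    assert login(email, password) == expected
```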
Beyond test generation, LLMs assist with exploratory testing by suggesting scenarios testers might overlook, analyzing API documentation to create functional tests, automatically generating bug reports, and even producing automation scripts for platforms like Selenium or Cypress. This synergy between planning and execution makes development cycles significantly faster.
The concept of “self-healing tests” appears frequently in your work. Can you explain how this technology works and why it’s so significant?
Self-healing tests represent one of AI’s most practical applications in quality assurance. Historically, test maintenance has been extraordinarily expensive: every time developers update the UI or refactor code, someone must manually update the corresponding test scripts. This maintenance burden often consumes more resources than creating the tests initially.
Self-healing automation uses AI to detect when application elements have changed and automatically adapts test scripts accordingly. The system captures multiple attributes for each element: ID, name, CSS selector, XPath, text content, and relative positioning. When a test fails because an element can’t be found, the AI doesn’t immediately mark it as failed. Instead, it analyzes alternative attributes and positioning to locate the functionally equivalent element, then updates the script dynamically.
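A minimal sketch of that fallback strategy, assuming Python and Selenium; attribute lists, ordering, and scoring vary by tool, and commercial implementations add ML-based similarity matching on top of this bare idea:

```python
# Minimal sketch of self-healing element lookup: try the primary
# locator, then fall back through alternative attributes recorded
# when the test was authored, reporting any substitution so the
# stored locator set can be "healed" for future runs.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """locators: ordered list of (By.<strategy>, value) pairs
    captured for the same element (ID, CSS selector, XPath, ...)."""
    for strategy, value in locators:
        try:
            element = driver.find_element(strategy, value)
            if (strategy, value) != locators[0]:
                print(f"healed: now locating via {strategy}={value!r}")
            return element
        except NoSuchElementException:
            continue  # try the next recorded attribute
    raise NoSuchElementException(f"no locator matched: {locators}")

# Usage: attributes captured at authoring time for a login button.
# login_btn = find_with_healing(driver, [
#     (By.ID, "login-button"),
#     (By.CSS_SELECTOR, "form#signin button[type=submit]"),
#     (By.XPATH, "//button[normalize-space()='Log in']"),
# ])
```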
This capability is transformative because it dramatically reduces the maintenance burden that has plagued automation for decades. Tests remain resilient despite continuous application evolution, allowing QA teams to focus on strategic quality improvements rather than script repairs.
You argue strongly that AI enhances rather than replaces human testers. Why is this human element so crucial even as AI capabilities expand?
This is perhaps the most important message from my research: AI does not replace human testers; it augments them, freeing them to focus on activities that require uniquely human capabilities like judgment, empathy, creativity, and contextual understanding.
AI excels at pattern recognition, data processing, repetitive execution, and scalability. Humans excel at understanding user needs, assessing business risks, making ethical judgments, designing test strategies, and recognizing nuanced problems that don’t fit established patterns. These complementary strengths create a powerful synergy.
The future belongs to hybrid human-AI testing models where machines handle routine, data-intensive tasks (test execution, log analysis, defect clustering, script maintenance) while humans focus on strategic activities: defining quality standards, interpreting AI insights in business context, making risk-based decisions, mentoring junior team members, and ensuring that testing aligns with organizational goals.
Similarly, while AI can generate thousands of test scenarios, human testers curate which tests matter most, validate that generated tests make sense, and ensure comprehensive coverage of critical user journeys. This collaborative intelligence, in which humans train AI and AI expands human capacity, creates outcomes neither could achieve independently.
Any closing thoughts on what this AI revolution means for the future of software quality?
We’re witnessing a fundamental redefinition of what quality assurance means. We’re moving from QA as an end-of-cycle checkpoint to quality as a continuous, intelligent, predictive discipline embedded throughout the entire software lifecycle.
The organizations that succeed will be those that embrace hybrid human-AI models where technology amplifies human judgment rather than replacing it. They’ll invest in upskilling their QA teams, choose AI tools that enhance rather than supplant human expertise, and cultivate cultures where quality is everyone’s responsibility, not just QA’s.
For professionals in Nigeria and across Africa, this AI revolution presents extraordinary opportunities. We can leapfrog traditional testing maturity stages by adopting intelligent testing practices now. We don’t need decades of legacy infrastructure; we can build modern, AI-powered quality practices from the ground up. But this requires investment in training, infrastructure, and most importantly, mindset shifts about what quality assurance can be.
The future belongs to those who recognize that exceptional software isn’t merely built: it’s thoughtfully tested, carefully nurtured, and elevated by the powerful combination of human insight and artificial intelligence. That future is not distant; it’s here now, and it’s ours to shape.