

TestGuild Automation Podcast
Joe Colantonio
TestGuild Automation Podcast (formerly Test Talks) is a weekly podcast hosted by Joe Colantonio that geeks out on all things software test automation. TestGuild Automation covers news from the testing space, reviews books about automation, and features conversations with thought leaders in the test automation field. We aim to interview some of today's most successful and inspiring software engineers and test automation thought leaders.
Episodes
Mentioned books

Apr 28, 2024 • 41min
Mobile Mastery: Blending AI with App Testing with Dan Belcher
In this engaging discussion, Dan Belcher, co-founder of Mabl and ex-Google product manager, explores how AI is revolutionizing mobile app testing. He introduces the groundbreaking Mabl tool for iOS and Android validation, shifting focus to non-deterministic testing for AI models. The conversation dives into the evolution of testing methodologies, the rise of AI-driven frameworks, and the importance of user-centric approaches. Dan shares insights on the challenges of mobile environments and how AI's contextual understanding is reshaping automation for better user experiences.

Apr 21, 2024 • 32min
The Four Phases of Automation Testing Mastery with Jon Robinson
Welcome to the TestGuild Automation Podcast! I'm your host, Joe Colantonio, and I'm joined by Jon Robinson, chief storyteller at Nekst IT, ready to delve deep into the automation testing world. In today's episode, "The Four Phases of Automation Testing Mastery," we'll debunk the myth that automation is a cure-all solution and explore the intricacies and careful planning needed to succeed in test automation. Join us as we discuss practical strategic approaches, including the 4-phase model—Discovery, Design, Develop, Deliver—and the importance of a maturity model to ensure your automation aligns with CI/CD integration. We'll highlight how automation serves as the backbone for regression testing, providing substantial long-term benefits, and how introducing it earlier in the development process can minimize rework and costs. We'll tackle the challenges of test management in an agile world, and Jon will share his insights on the importance of storytelling in QA and how it can revolutionize how we test and communicate the value of our work. Expect actionable tips on avoiding common pitfalls and why focusing on real-world impacts and user perspectives can significantly improve your automation efforts. Prepare to elevate your testing strategy and learn why quality should be the focus rather than just hitting metrics. Tune in as we explore practical insights and real-life experiences that will empower you to enhance the success of your automation testing projects!

Apr 14, 2024 • 40min
Expert Take on Playwright and API Testing with Bas Dijkstra
In today's episode, we are excited to feature the incredible insights of Bas Dijkstra, an independent test automation consultant and trainer with a wealth of experience spanning 17 years in the field. Bas joins us to share his journey in developing RestAssured.Net, a much-needed library for HTTP API testing in C#, inspired by his fondness for RestAssured and its absence in the C# arena. We'll explore not just Bas's innovations but his comprehensive take on the evolution of API testing, spotlighting the persistent issue of superficial testing due to various industry pressures and the triumphs of more accessible tooling. We'll examine why Bas favors code-based solutions like RestAssured.Net over tools like Postman for API testing, thanks to their strengths in scaling and integration. Furthermore, Bas will shed light on the rising interest in Playwright – a modern automation tool he believes overcomes many of the limitations of Selenium through features like auto-waiting and synchronization. We'll delve into the importance of context when selecting testing tools and why Bas advocates for workshop-based learning as a practical and empowering approach to introduce teams to Playwright. So, whether you're a seasoned professional or new to the field, join us to delve into the ever-evolving landscape of testing automation. Discover practical skills and mindsets to elevate your testing strategies. This is the TestGuild Automation Podcast, and today's episode is a must-listen for all testing professionals!

Apr 8, 2024 • 36min
Leveraging AI for Robust Requirements Analysis and Test Generation
Scott Aziz, founder of AgileAI Labs and a BDD expert with over 36 years of experience, teams up with John Smart, the creator of the Serenity BDD framework. They discuss the innovative Spec2test AI tool designed for user story optimization and actionable test generation. The tool uses a unique 7-point framework to analyze ambiguities in user stories and suggest improvements. They highlight how AI enhances team collaboration, transforms user stories into functional and security tests, and boosts productivity in agile environments.

Mar 31, 2024 • 37min
Awesome Software Testing Strategies with Matthew Heusser and Michael Larsen
I'm thrilled to have two titans in software testing, Matt Heusser and Michael Larsen, with us today. These veterans, with their wealth of experience and knowledge, are here to discuss their latest contribution to the testing community, their new book, "Software Testing Strategies." In today's episode, we will unpack the inspiration behind "Software Testing Strategies," exploring the trio of testing essentials: skills, strategy, and the nuances of day-to-day operations, including the politics that intertwine with the testing process. The authors will discuss their approach to addressing the complexities of software testing, finding the most effective tests among endless possibilities, and how their book aims to guide you through these challenges. Matt and Michael will also share critical insights into organizational dynamics, the value of adaptability in the testing realm, and the ethical considerations professionals face in their careers. Plus, we'll touch on the difficult journey of updating outdated systems, navigating the minefield of communication, and why terms like "QA" may need a rethink. Listeners, you're in for a treat, with real-world stories, practical advice, and invaluable expertise that's just a discount code away – so stay tuned as we dive into the world of "Software Testing Strategies" on the TestGuild Automation Podcast.

Mar 24, 2024 • 30min
RoboCon Recap: Testing, Networking, and Building with Robot Framework with Tatu Aalto, Mark Moberts and Frank Van der Kuur
Today's special episode, "RoboCon Recap," is about the insights and highlights from RoboCon 2024. We are privileged to have Tatu Aalto, a renowned maintainer of the Browser library; Frank Van der Kuur, a distinguished Robot Framework trainer from BQA; and Mark Moberts, a well-known figure in the Robot Framework community, with us. In this episode, our guests will explore the enriching experiences of the conference—from the unveiling of the Market Square to the engaging sessions that sparked valuable discussions. We will explore the myriad contributions beyond programming, including documentation, testing, and being an active voice in the community through forums like the Robot Framework Slack channel. Throughout RoboCon, the spirit of collaboration and knowledge exchange was not just evident; it was the driving force, whether in addressing pitfalls in the framework, swapping tips on finding the right testing library, or discussing Frank's and Tatu's interactive sessions, which went beyond expectations with engaging questions and the impact they left on the audience. Get ready to be immersed in a conversation that not only recaps the energy and learning from RoboCon but also showcases how the Robot Framework community is driving the future of test automation. So plug in as we dive into everything Robot Framework with insights from the experts at the forefront of the automation world. Listen up!

Mar 17, 2024 • 35min
Netflix SafeTest with Moshe Kolodny
Today, we're diving deep into the world of automation testing, with a special focus on Netflix's innovative tool, SafeTest. Joining us is Moshe Kolodny, a senior full-stack engineer at Netflix and the engineer behind this exciting new tool, which bridges the gap between end-to-end and unit testing. SafeTest, a tool that's been making significant strides in the industry, has garnered widespread community support and impressive traction in a remarkably short time. As we delve into its capabilities, we'll discover how SafeTest seamlessly integrates with popular libraries like Playwright and Jest, offering robust testing capabilities without imposing intrusive dependencies. Moshe will delve into the philosophy behind SafeTest, underlining the importance of practical, iterative test writing and the pitfalls of over-engineering. We'll explore SafeTest's adaptability, which ensures test consistency across environments with Docker mode, and the bidirectional communication it enables between browser and Node.js, enhancing the overall testing experience. Our conversation will shed light on the exciting future of SafeTest, from potential additions to the test runner to the introduction of custom reporting features. Moshe will also underscore the tool's commitment to developer experience, exemplified by SafeTest's debugging aids like videos and trace viewers. It's no secret that SafeTest reflects Netflix's robust approach to quality assurance. It aligns closely with the day-to-day experiences of UI engineers and addresses the intricate challenges of complex user interactions and service integrations. Stay tuned as we unpack the story of SafeTest's inception, core features, practical applications, and why Moshe believes it's a versatile choice for most testing scenarios.

Mar 11, 2024 • 30min
Agile, Automated, Advanced: The New Age of Performance Testing with Dylan van Iersel
In this conversation, Dylan van Iersel, an experienced IT consultant and co-founder of Perfana, discusses the critical role of performance engineering in modern software development. He highlights how performance directly links to business success and customer satisfaction. Dylan explains the importance of integrating performance testing early in the CI/CD pipeline and shares insights on scaling testing within agile teams. The discussion also touches on how Perfana leverages machine learning for anomaly detection, making performance testing more accessible and efficient for developers.

Mar 3, 2024 • 26min
Redefining Test Automation with Dave Piacente
In this episode, Dave Piacente, a senior community manager in developer relations and community expert at Applitools, joins us to talk about redefining test automation. There is a common set of techniques that seasoned test automation practitioners know to be the pillars of any successful automated testing practice, one that can fit into most (if not all) team contexts. But some things have either been left out or are described in ways that are disadvantageous for the industry, simply because we don't talk about them from the perspective of the fundamentals used to craft them. By reasoning from first principles, we can unearth a more impactful definition of test automation that can act as a compass and help supercharge everyone's test automation practice, while also helping us navigate the uncharted waters of new technologies that challenge existing paradigms.

Feb 26, 2024 • 23min
How to Create Testing Magic with Zero-code with Shubham Jain
In this episode of the TestGuild Automation Podcast, host Joe Colantonio speaks with Shubham Jain, the maintainer of Keploy, an API testing platform. Shubham shares insights into creating testing magic with zero code using Keploy, a tool that captures live traffic and generates test cases and data mocks for efficient API testing. He walks through Keploy's integration with the software development life cycle and its potential use cases. Shubham also delves into the open-source community around Keploy, offering tips for contributors and details on upcoming events for those interested in learning more. Listen in for valuable advice on improving your mocking and automation testing efforts, with an emphasis on realistic mocks that capture real behavior.


