In the evolving landscape of Generative AI and Software Testing, the right prompt can mean the difference between valuable insights and surface-level answers. Inspired by Terry J. Fadem’s The Art of Asking: Ask Better Questions, Get Better Answers, this article explores how the principles of asking the right questions underpin the effectiveness of prompt engineering for Software Testers and Quality Engineers (QEs).
Why should Testers and QEs first master The Art of Asking Questions before blindly crafting Prompts?
Generative AI tools, like ChatGPT, are reshaping Software Testing by enhancing Test Workflows and empowering Testers to work smarter. But you know what? The real value comes from understanding how to craft effective prompts, something that is an art as much as a skill. Fadem’s insights on effective questioning apply directly to Prompt Engineering, turning it into a powerful tool for generating meaningful AI responses for Testers.
In this article, I delve into eight key insights from this compelling book, The Art of Asking, showing how they are essential for Prompt Engineering in the context of Software Testing. These best practices serve as a guide for Testers to elevate their use of AI, transforming it into a truly impactful tool as they work to deliver world-class Quality.
Questioning before Prompting (Asking)
Every prompt starts with a clear purpose. Without a defined goal, prompts lack direction, and their effectiveness suffers. In Software Testing, it is crucial to determine what behavior or condition you want to validate.
What are you Asking for? – Are you focused on identifying security vulnerabilities, validating a specific feature, or exploring edge cases? A clear purpose allows you to craft prompts that yield valuable and targeted responses. Question yourself thoroughly before writing a wishful Prompt. Do not let the tools make too many assumptions about what you really want!
Are You Being Specific Enough?
Ambiguity is the enemy of productivity, really. Specific prompts deliver actionable outputs, whereas vague prompts yield generic responses. Yet some testers get carried away and blame the GenAI tools without first questioning their own prompts.
Example in Testing/QA/QE – Instead of saying, “Test the payment process,” a more precise prompt would be, “Generate test scenarios for a credit card payment involving expired cards, insufficient funds, and network failures.” This level of specificity drives more insightful and relevant AI responses, regardless of the tool and its model.
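The contrast above can be sketched in plain Python (a minimal illustration with no particular AI SDK assumed; the edge-case values are the ones from the example). Building the prompt from an explicit list keeps the specificity visible and easy to extend:

```python
# A vague prompt vs. a specific one assembled from explicit edge cases.
# Plain string handling only; no AI client is assumed here.

vague_prompt = "Test the payment process."

edge_cases = ["expired cards", "insufficient funds", "network failures"]
specific_prompt = (
    "Generate test scenarios for a credit card payment involving "
    + ", ".join(edge_cases)
    + "."
)

print(specific_prompt)
```

Adding a new edge case is then a one-line change, and the list itself documents exactly what coverage the prompt requests.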
Are You Challenging Your Own Assumptions?
Hidden assumptions often limit our scope in testing. To truly explore the potential risks, we must question those assumptions.
Example in Testing/QA/QE – For instance, instead of just saying “Generate test cases for file upload”, ask, “What scenarios could occur if a user attempts to upload files exceeding the allowed size limit or unsupported formats?” Challenging these assumptions broadens your coverage and helps uncover issues that are often missed in standard test planning.
Are Your Questions Open-Ended?
Open-ended Prompts encourage creativity, providing broader insights. Closed Questions limit the possibilities.
Example in Testing/QA/QE – Instead of asking, “Will the system always accept valid credentials?” ask, “What are the possible scenarios where valid credentials might fail due to other factors, internal and external?” This way, the AI tool can think beyond basic validation and explore the layers behind the scenes.
Are You Involving Others?
Collaboration is a powerful tool. Engaging other Testers/QEs/DevOps helps in refining prompts and uncovering angles we may not have considered.
Example in Testing/QA/QE – Team-based Prompt Refinement generates diverse perspectives, leading to better coverage and richer AI responses. In fact, that is what QE is all about, right? Show your Prompt to a colleague or a senior and ask “Do you think this is a smart Prompt?”.
Are You Avoiding Leading Questions?
Leading questions introduce bias, which wrongly influences results. Neutrality is crucial in Prompt Engineering to maintain objectivity.
Example in Testing/QA/QE – Instead of asking, “Why does this input type [some wrong value] make the system vulnerable?” a neutral prompt would be, “What vulnerabilities could arise from different input types such as [some wrong value]?” This allows the AI to explore possibilities without bias and broaden your Test Coverage with ease.
How Do You Break Down Your Questions for Better Clarity?
Complex prompts can be misunderstood by AI models. Breaking down complex questions into smaller parts leads to better comprehension and more relevant responses.
Example in Testing/QA/QE – Rather than asking a very generic, “What errors can occur during User Registration?” break it into parts like, “How do old Cookies from a previous version of the User Registration module impact new users trying to register with a different ID?” or “How do backend failures impact User Registration?”. It all starts with your thoughts first.
Do you remember what “Garbage In Garbage Out” means? 🙂
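One way to keep this decomposition explicit is to hold the narrower sub-questions in a list and send each one as its own prompt. A minimal sketch in plain Python, using the example questions from above (the `print` stands in for a call to whatever GenAI tool you use):

```python
# Break one broad question into narrower sub-prompts, sent separately.
# Each narrow prompt gets the model's full attention instead of
# competing inside one broad, generic question.

broad_prompt = "What errors can occur during User Registration?"

sub_prompts = [
    "How do old cookies from a previous version of the User Registration "
    "module impact new users trying to register with a different ID?",
    "How do backend failures impact User Registration?",
]

for prompt in sub_prompts:
    print(prompt)  # replace with a call to your GenAI tool of choice
```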
Are You Iterating and Improving?
Quality in Prompt Engineering, as most of us know, is an ongoing pursuit. Refine prompts based on the Quality of their output to enhance their effectiveness instead of laughing at the GenAI tools.
Continuous Refinement – Evaluate initial AI responses and adjust prompts as needed: reword them, sharpen the context, or break the prompts down further for better results. The rewards are rarely a first-time one-shot; you must refine and iterate your Prompts like a Ping-Pong game.
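That Ping-Pong loop can be sketched as code. In this minimal, hypothetical example, `ask` is a stub standing in for your GenAI tool's API (so the sketch runs offline), and `refine` adds the context the first answer showed was missing:

```python
# Sketch of the refine-and-iterate loop. `ask` is a stub, NOT a real SDK
# call: it answers generically until the prompt mentions edge cases,
# imitating a tool that needs a sharper prompt.

def ask(prompt: str) -> str:
    if "edge cases" in prompt:
        return "Scenario list: expired card, insufficient funds, timeout."
    return "The payment process should be tested thoroughly."

def refine(prompt: str) -> str:
    # Add the context the previous answer revealed was missing.
    return prompt + " Focus on edge cases such as expired cards and timeouts."

prompt = "Generate test scenarios for credit card payments."
for _ in range(3):  # cap the ping-pong at a few rounds
    answer = ask(prompt)
    if "Scenario list" in answer:  # good enough? stop iterating
        break
    prompt = refine(prompt)

print(answer)
```

The point is the shape of the loop, evaluate, refine, re-ask, not the stub's canned strings: with a real tool, the "good enough?" check is your own judgment of the response Quality.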
Happy "Asking", Dear Testers and QEs!
No. This post/article is not about the Best Practices of Prompt Engineering. It is about the foundational skill all of us can revisit. Mastering “The Art of Asking” is fundamental to becoming proficient in Prompt Engineering. By adapting Fadem’s principles to Software Testing and Quality Engineering as a whole, you can develop Prompts that extract deeper insights, streamline testing efforts, and fully harness the potential of Generative AI in delivering Quality with Agility!
Affiliate Disclosure: This page contains affiliate links, which means I may earn a commission if you make a purchase, at no extra cost to you.
Interested in diving deeper?
Consider purchasing the book – The Art of Asking: Ask Better Questions, Get Better Answers by Terry J. Fadem.
2 Responses
Hi Sir,
Asking better questions not only helps leaders obtain better answers but also enhances collaboration, innovation, and trust in professional and personal interactions.
Below are some key takeaways I learned from your article and the book as well.
The Power of Questions: Asking the right questions is a critical skill for leaders and managers. Good questions help clarify goals, uncover opportunities, and more.
Types of Questions: He categorizes questions into types, such as probing and clarifying. He also discusses their purposes, from gathering basic information to challenging assumptions and encouraging critical thinking.
Listening Actively: Asking questions is only part of the process; listening carefully to the responses is also a crucial aspect. Active listening helps build trust, reveals deeper insights, and ensures the questioner fully understands the answer.
Timing and Framing: The effectiveness of a question often depends on how and when it’s asked.
Building Trust: Thoughtful questioning shows others that you’re genuinely interested in their perspectives, which helps build stronger, trust-based relationships.
Good one, Kishore. I realized that I had not framed my points to cover the entire target audience. Questioning is an essential skill at the day-to-day Test Engineer and Quality Engineer level, not only for managers and leaders, obviously! That said, I will update my article soon. Thanks for highlighting your key takeaways.