Friday, 8 August 2025

QA Terminology

Below are the most commonly used QA terms that every QA professional ought to know. Make sure you're familiar with all of them:

  1. Test Case: A set of actions used to determine if a system behaves as expected in a given scenario.
  2. Bug/Defect: A problem or error that hinders a software program from functioning as expected.
  3. Smoke Testing: Basic tests to check if a new software version is stable enough for more in-depth testing.
  4. Regression Testing: Verifying if previously working functionality still works after new changes.
  5. Unit Testing: Testing the smallest testable parts of an application in isolation (e.g., a button, link, or dropdown). If an application were a physical machine, this would be like testing the quality of the nuts, bolts, and transistors before they are assembled into a more recognizable part of the machine, such as a control panel. This testing is commonly done by developers.
  6. Integration Testing: Testing in which individual components or units are combined one by one and tested progressively until the whole group works together successfully. If an application were a physical machine, integration testing would be like checking how the previously tested nuts and bolts work together once assembled. For example, a control panel, which is only one part of a machine, is made of many components that must be assembled and tested to verify that the panel itself works as it should.
  7. System Testing: Testing the fully integrated parts together to ensure the system meets the requirements. If an application were a machine, then in addition to the control panel there would be other machine parts similarly assembled and tested; system testing is like checking whether the entire machine works as expected.
  8. UAT (User Acceptance Testing): The final testing phase where actual users try the software to make sure it works in real-life scenarios. The users can be individual people or a company for which the product/software was created.
  9. Test Suite: A collection of test cases that have been grouped for a specific purpose.
  10. Sanity Testing: Testing that uses a subset of regression tests to quickly check that a new software build/version works as expected. Some people consider this the same as smoke testing.
  11. Black Box Testing: Testing software based on output, without knowing its internal workings. For example, when an end-user interacts with the website’s UI without having access to the code.
  12. White Box Testing: Testing software with knowledge of its internal workings.
  13. Test Plan: A detailed document outlining the testing strategy, objectives, resources, schedule, and deliverables.
  14. Test Script: Step-by-step instructions for a particular test.
  15. Test Scenario: A high-level idea of what to test. It can have multiple test cases.
  16. Exploratory Testing: Testing the software without a set plan, exploring and learning the application.
  17. Boundary Testing: Testing the limits (edges or boundaries) of the software input.
  18. Functional Testing: Testing that software features work as expected. This is an umbrella term under which several of the other types of testing already defined are grouped (e.g., Sanity, Smoke, Regression, UAT).
  19. Non-functional Testing: Testing the non-functional aspects of software, like performance, usability, or security.
  20. Test Environment: A controlled setting where testing is conducted.
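A few of these terms are easiest to see in code. The Python sketch below shows a unit test and boundary testing for a hypothetical apply_discount function (the function and its rules are invented purely for illustration):

```python
def apply_discount(price, percent):
    """Return the price after a percentage discount; percent must be 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit test: verify the smallest testable behaviour in isolation.
assert apply_discount(100.0, 25) == 75.0

# Boundary testing: exercise the edges of the valid input range.
assert apply_discount(100.0, 0) == 100.0     # lower boundary
assert apply_discount(100.0, 100) == 0.0     # upper boundary
try:
    apply_discount(100.0, 101)               # just past the boundary
    raise AssertionError("expected a ValueError")
except ValueError:
    pass                                     # rejected, as expected
```

In practice such checks would live in a test framework like pytest; the asserts above simply keep the sketch self-contained.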

Thursday, 7 August 2025

What is Tosca Copilot?

Tosca Copilot is a generative AI-powered assistant integrated into Tricentis Tosca, designed to enhance user productivity in testing. It utilizes advanced large language models (LLMs) to help users quickly find, understand, and optimize test assets. Tosca Copilot facilitates tasks like explaining test cases, converting natural language to Tosca Query Language (TQL), and summarizing test execution results, ultimately aiming to boost efficiency and quality across the testing lifecycle. 

Tosca Copilot is part of the broader Tricentis Copilot program, which includes AI-powered assistants for other Tricentis products like Testim and qTest. This program aims to accelerate test creation, improve test quality, and simplify testing across the entire software development lifecycle.

Here's how Tosca Copilot can help you:
  • Test Case Generation: Automatically create test cases from natural language descriptions or user stories, including edge cases and variations, accelerating the test design process.
  • Test Optimization: Streamline your test suite by identifying and suggesting removal of unused, duplicate, or unlinked test cases, or by recommending optimizations to existing ones for better coverage and efficiency.
  • Test Result Insights: Understand failed tests faster with actionable insights generated by Tosca Copilot, aiding in quick troubleshooting and issue resolution.
  • Maintenance and Refactoring: Perform maintenance tasks like renaming test steps, cleaning up labels, or searching for specific test artifacts using chat commands.
  • Onboarding and Learning: New team members can quickly grasp complex test cases and Tosca functionalities through explanations provided by the Copilot, according to Tricentis. 
How it works:
Tosca Copilot leverages advanced Large Language Models (LLMs) to understand natural language requests and interact with Tosca, performing actions like: 
  • Converting natural language instructions into Tosca Query Language (TQL) queries for finding and managing test assets.
  • Generating test case descriptions, steps, and data from user stories or specifications.
  • Analyzing execution logs and providing insights into test failures in easily understandable language. 
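The natural-language-to-TQL step can be pictured with a short sketch. This is not Tosca Copilot's actual implementation: ask_llm is a stand-in for any LLM client, and the TQL string returned by the stub is only an example query, not guaranteed syntax.

```python
def nl_to_tql(request, ask_llm):
    """Wrap a user request in a conversion prompt and send it to an LLM."""
    prompt = (
        "Convert the following request into a Tosca Query Language (TQL) "
        "query. Return only the query.\n\nRequest: " + request
    )
    return ask_llm(prompt)

def stub_llm(prompt):
    # Canned response so the sketch runs without any external service;
    # the query syntax here is illustrative only.
    return '=>SUBPARTS:TestCase[Name=?"Login*"]'

print(nl_to_tql("Find all test cases whose name starts with Login", stub_llm))
```

The point is the shape of the workflow, not the model: the assistant turns a plain-English request into a query string that Tosca can then execute against the test repository.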
Benefits:
  • Time Savings: Reduced manual effort for creating tests, generating data, and analyzing results translates to faster testing cycles and quicker time-to-market.
  • Increased Productivity: Testers can focus on more strategic tasks, and new team members can get up to speed faster with the Copilot's assistance.
  • Cost Savings: By optimizing test suites, reducing redundant efforts, and minimizing manual maintenance, Tosca Copilot helps lower overall testing costs.
  • Improved Software Quality: Automated test generation and defect analysis lead to higher test coverage, early bug detection, and ultimately, better quality software. 

In conclusion, Tosca Copilot helps streamline test automation by assisting with various tasks throughout the testing lifecycle. It utilizes generative AI to enhance productivity, accelerate learning, reduce costs, and ultimately deliver higher quality software.
Tosca Copilot's capabilities:
Natural Language Interaction:
Tosca Copilot allows users to interact with Tosca using natural language, making it easier to query and understand test assets. 
Test Case Explanation:
It can explain the functionality of a test case in plain language, providing insights into its purpose and steps. 
TQL Query Generation:
Users can convert their natural language queries into Tosca Query Language (TQL) to search and filter test assets effectively. 
Test Optimization:
Tosca Copilot helps in identifying unused or unlinked test assets, duplicates, and other inefficiencies, enabling users to streamline their test libraries. 
Troubleshooting:
It aids in understanding failed test executions by summarizing the results and providing insights into potential causes. 
Integration with Microsoft Azure OpenAI Service:
Tosca Copilot leverages the power of Microsoft Azure OpenAI Service, ensuring enterprise-level data privacy and security compliance. 

Monday, 4 August 2025

How to View Regression Status in 1 Click?

Create a BAT file, then double-click it to check the Pass/Fail status in just one click, with no Commander needed.

Creating the BAT file:

🔧 Step 1 – Create a TCS file: GetResults.TCS

get NumberOfTestCasesPassed
get NumberOfTestCasesFailed

🔧 Step 2 – Create another file: RunTQLToGetTheStatus.TCS

This one jumps to Execution List and calls the result fetcher:

JumpToNode "/Execution/E2E Execution/ExecutionLists"
For =>SUBPARTS:ExecutionList CallOnEach "D:\Tosca\TcShell\GetPassFailResult\GetResults.tcs"
Exit

🔧 Step 3 – Create ScheduleBatchFile.Bat

This will open the workspace, authenticate, and call the above script:

"C:\Program Files (x86)\TRICENTIS\Tosca Testsuite\ToscaCommander\TCShell.exe" -workspace "{WorkspacePath}" -login "{UserName}" "{Password}" "{RunTQLToGetTheStatus.TCS File Path}"

🔧 Step 4 – Create one final BAT file to store the output in a .txt file

"{ScheduleBatchFile.Bat file path}" > "{YourResultFolderPath\TestResult.txt}"

✅ End Result?

You get a TestResult.txt file with Pass/Fail counts ✔️
→ Quick
→ Clean
→ Commander-free
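Once TestResult.txt exists, a short script can total the counts across execution lists. A hedged Python sketch, assuming each relevant output line contains a labelled count such as "NumberOfTestCasesPassed: 12" (the real TCShell output may be formatted differently, so adjust the pattern to match your file):

```python
import re

def summarize(lines):
    """Sum pass/fail counts from TCShell output lines.

    Assumes lines such as 'NumberOfTestCasesPassed: 12'; adjust the
    regular expression if your TestResult.txt looks different.
    """
    totals = {"Passed": 0, "Failed": 0}
    for line in lines:
        match = re.search(r"NumberOfTestCases(Passed|Failed)\D*(\d+)", line)
        if match:
            totals[match.group(1)] += int(match.group(2))
    return totals

sample = [
    "NumberOfTestCasesPassed: 12",
    "NumberOfTestCasesFailed: 3",
    "NumberOfTestCasesPassed: 5",
]
print(summarize(sample))  # {'Passed': 17, 'Failed': 3}
```

Point the same function at open("TestResult.txt") to get a single Pass/Fail summary for the whole regression run.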

🔁 Bonus Tip: Schedule this BAT file with Windows Task Scheduler and get your report ready before your coffee brews ☕


REM Example BAT file content
cd "C:\Program Files\TRICENTIS\Tosca Commander"
TCShell.exe -f "C:\Path\To\Your\Script.tcs" -w "C:\Path\To\Your\Workspace.tws" -u "YourUsername" -p "YourPassword"

Sunday, 3 August 2025

What is testRigor?

testRigor is an AI agent that allows anyone to create end-to-end tests from an end user's perspective using plain English, thereby eliminating excessive test maintenance caused by locator changes. testRigor supports testing on the following platforms:

  • Web testing (Windows, macOS, Ubuntu) and Mobile Web testing on iOS and Android
  • Native and Hybrid Mobile App testing for iOS and Android
  • Native Desktop applications testing
  • Mainframe application testing
With testRigor, you can perform various types of testing, including:
  • Acceptance testing
  • Smoke testing
  • Regression testing
  • System (end-to-end) testing
  • API testing
  • Visual testing
  • SMS and phone call testing
  • 2FA and Captcha testing
To create your end-to-end tests, you have several options:
  • Leverage testRigor's Generative AI to create tests based on descriptions
  • Write tests from scratch using plain English commands (see testRigor's documentation for help)
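To give a feel for the plain-English style, here is a small illustrative login test (the field labels and messages are invented, and exact command phrasing may vary by testRigor version):

```
enter "user@example.com" into "Email"
enter "mypassword" into "Password"
click "Sign In"
check that page contains "Welcome"
```

Because steps refer to what a user sees on screen rather than to locators, the test keeps working even when the underlying element IDs change.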


What is Execution Recording in Tosca?

When you run a test in Tosca, Execution Recording automatically keeps track of:
  • What steps were performed
  • What data was used
  • What passed or failed
Benefit: This helps in finding issues, showing proof of testing, and creating reports.

🛠️ How Does It Work?

1. Turn on Execution Recording
  • You can switch it on in the Tosca settings.
  • Path: Project → Settings → Execution → ExecutionLogEnabled = True

2. Run Your Test
  • Run your test from Tosca Commander or an ExecutionList.
  • Tosca will record:
      - Every step clicked or typed
      - Input/output data
      - Results of each step (Pass/Fail)
      - Screenshots (if enabled)

3. See the Results
  • Go to the ExecutionList → ActualLog.
  • There you can see:
      - Which steps ran
      - Which ones passed or failed
      - Any error messages
      - Screenshots (if set up)

4. Detailed View
  • Click on any test step to check:
      - What was expected
      - What was actually done
      - What data was used

5. Export or Share
  • You can save the results as PDF, Excel, or HTML.
  • You can send them to tools like qTest, JIRA, or Jenkins.

Wednesday, 18 June 2025

LLMs

LLMs: Large language models (LLMs) are computer programs designed to understand and generate human language.

Think of them as incredibly advanced statistical models, trained on massive amounts of text data. 

They work by learning the patterns and relationships between words, phrases, and sentences. 

When you give ChatGPT a prompt, it uses its learned patterns to predict the most likely next word, then the next, and so on, effectively generating a coherent response. 
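That predict-the-next-word loop can be illustrated with a toy model. The Python sketch below uses a hand-made table of bigram counts; real LLMs perform the same kind of step, but with a neural network over tokens and billions of learned parameters rather than a lookup table:

```python
# Toy next-word predictor: always picks the most frequent follower of
# the previous word. The counts are invented for illustration.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 2},
}

def generate(start, steps):
    words = [start]
    for _ in range(steps):
        followers = bigram_counts.get(words[-1])
        if not followers:
            break  # no known continuation, stop generating
        words.append(max(followers, key=followers.get))
    return " ".join(words)

print(generate("the", 3))  # the cat sat down
```

An LLM like ChatGPT repeats essentially this loop, except that its "table" is a learned probability distribution over its entire vocabulary at every step.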

Prompt Engineering

Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT.

Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. 

Prompts are also a form of programming that can customize the outputs and interactions with an LLM. 

How do prompt patterns enhance LLM interactions?

Prompt patterns enhance LLM interactions by providing structured, reusable solutions to common problems, enabling more efficient, accurate, and tailored outputs. Here’s how they improve interactions:

  1. Customization of Outputs: Patterns like Output Customization (e.g., Persona, Template) allow users to tailor the format, structure, or role of the LLM's output, ensuring it meets specific needs or goals.

  2. Error Identification and Resolution: Patterns such as Fact Check List and Reflection help identify inaccuracies in the LLM's output and provide explanations or reasoning, improving reliability and trustworthiness.

  3. Improved Input Quality: Patterns like Question Refinement and Alternative Approaches guide users to ask better questions or explore multiple solutions, reducing trial-and-error and enhancing the quality of interactions.

  4. Enhanced Interaction Flow: Patterns like Flipped Interaction and Game Play shift control to the LLM, enabling it to ask questions or guide users through tasks, making interactions more dynamic and goal-oriented.

  5. Automation of Tasks: Patterns like Output Automater generate scripts or automation artifacts, reducing manual effort and streamlining workflows.

  6. Context Management: The Context Manager pattern allows users to specify or remove context, ensuring the LLM focuses on relevant topics and avoids distractions.

  7. Visualization Support: The Visualization Generator pattern enables LLMs to produce text-based inputs for visualization tools, making complex concepts easier to understand.

  8. Adaptability Across Domains: Prompt patterns are generalizable, allowing users to apply them in diverse fields, from software development to education and entertainment.

By codifying these approaches, prompt patterns improve the efficiency, accuracy, and creativity of LLM interactions, enabling users to achieve their goals more effectively.
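As a concrete illustration of point 1, the Persona and Template patterns amount to structuring the prompt text itself. A minimal Python sketch (the wording and the QA-review example are invented for illustration, not a canonical form of the patterns):

```python
def persona_template_prompt(persona, task, template):
    """Combine the Persona and Template prompt patterns into one prompt."""
    return (
        f"Act as {persona}.\n"
        f"{task}\n"
        f"Format your answer using this template:\n{template}"
    )

prompt = persona_template_prompt(
    persona="a senior QA engineer",
    task="Review the following test plan for gaps.",
    template="Risk: <risk>\nSeverity: <severity>\nSuggested test: <test>",
)
print(prompt)
```

The same helper could be reused with a different persona or template, which is exactly what makes the patterns reusable across tasks.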

What are the key benefits of using prompt patterns?

The key benefits of using prompt patterns are:

  1. Reusable Solutions: Prompt patterns provide structured, reusable approaches to solve common problems, reducing the need for users to design prompts from scratch.

  2. Enhanced Output Quality: Patterns like Output Customization and Reflection ensure outputs are tailored, accurate, and aligned with user goals.

  3. Error Identification: Patterns such as Fact Check List help users identify inaccuracies or assumptions in LLM outputs, improving reliability.

  4. Improved Interaction Flow: Patterns like Flipped Interaction and Game Play make interactions more dynamic, engaging, and goal-oriented.

  5. Automation of Tasks: Patterns like Output Automater generate scripts or automation artifacts, reducing manual effort and streamlining workflows.

  6. Context Control: The Context Manager pattern allows users to specify or remove context, ensuring focused and relevant responses.

  7. Exploration of Alternatives: Patterns like Alternative Approaches encourage users to explore multiple solutions, reducing cognitive biases and improving decision-making.

  8. Adaptability Across Domains: Prompt patterns are generalizable, enabling their application in diverse fields, such as software development, education, and entertainment.

  9. Visualization Support: The Visualization Generator pattern enables the creation of text-based inputs for visualization tools, making complex concepts easier to understand.

  10. Scalability and Efficiency: Patterns like Infinite Generation allow repetitive tasks to be automated, saving time and effort.

By leveraging these benefits, prompt patterns enhance the effectiveness, creativity, and reliability of interactions with large language models (LLMs).

Can prompt patterns be adapted for different domains?

Yes, prompt patterns can be adapted for different domains. While many patterns are discussed in the context of software development, they are generalizable and applicable across various fields. Here’s how they can be adapted:

  1. Domain-Specific Customization: Patterns like Persona and Template can be tailored to specific roles or formats relevant to a domain, such as acting as a medical expert, legal advisor, or educator.

  2. Error Identification: Patterns like Fact Check List can be used to flag inaccuracies in fields like healthcare, law, or finance, ensuring outputs are reliable and domain-specific.

  3. Visualization: The Visualization Generator pattern can create text-based inputs for tools to visualize concepts in fields like education, engineering, or data analysis.

  4. Exploration of Alternatives: Patterns like Alternative Approaches can suggest multiple solutions tailored to domain-specific constraints, such as deployment strategies in cloud computing or treatment options in medicine.

  5. Interactive Learning: Patterns like Game Play can be adapted to create educational games for students in various subjects, such as history, science, or language learning.

  6. Context Control: The Context Manager pattern can focus LLM outputs on specific topics within a domain, such as security aspects in software or ethical considerations in law.

  7. Automation: Patterns like Output Automater can generate scripts or workflows for tasks in fields like business operations, research, or creative writing.

  8. Infinite Generation: This pattern can be used to generate repetitive outputs, such as practice questions for exams, story ideas, or product descriptions.

The adaptability of prompt patterns makes them valuable tools for enhancing LLM interactions across diverse domains, enabling users to achieve domain-specific goals effectively.

What are examples of domain-specific prompt patterns?

Examples of domain-specific prompt patterns include:

1. Healthcare

  • Fact Check List: Generate a list of medical facts or assumptions in a diagnosis or treatment plan for verification.
  • Persona: Act as a medical expert to provide advice on symptoms or treatment options.
  • Recipe: Provide step-by-step instructions for medical procedures or patient care plans.

2. Education

  • Game Play: Create educational games or quizzes for topics like history, math, or science.
  • Reflection: Explain the reasoning behind answers to help students understand concepts better.
  • Infinite Generation: Generate practice questions or exercises for students indefinitely.

3. Law

  • Persona: Act as a legal advisor to analyze contracts or provide legal interpretations.
  • Fact Check List: Highlight legal precedents or assumptions in case analysis for verification.
  • Template: Format legal documents, such as contracts or affidavits, using placeholders for specific details.

4. Finance

  • Alternative Approaches: Suggest different investment strategies or budgeting methods.
  • Reflection: Explain the rationale behind financial recommendations or calculations.
  • Context Manager: Focus on specific financial aspects, such as risk analysis or tax implications.

5. Software Development

  • Output Automater: Generate scripts to automate tasks like deployment or testing.
  • Visualization Generator: Create diagrams for system architecture or workflows using tools like Graphviz.
  • Question Refinement: Improve questions about code security or optimization.

6. Creative Writing

  • Infinite Generation: Generate story ideas or character profiles continuously.
  • Template: Format stories or poems using predefined structures.
  • Persona: Act as a famous author or poet to emulate their writing style.

7. Data Analysis

  • Visualization Generator: Create inputs for tools to generate charts, graphs, or data visualizations.
  • Recipe: Provide step-by-step instructions for cleaning, analyzing, and visualizing data.
  • Context Manager: Focus on specific aspects of data, such as trends or anomalies.

These examples demonstrate how prompt patterns can be tailored to meet the unique needs of various domains, enhancing the effectiveness of interactions with large language models (LLMs).