# Architecture
Support for additional technologies, e.g. Playwright or Elasticsearch, can be added by subclassing these classes and adding technology-specific steps, setup/teardown, and configuration. This allows reusing the basic configuration, reporting, logging, and retrying mechanisms. In turn, application-level tests, steps, and configurations achieve reuse by subclassing the technology layers.
```mermaid
flowchart TD
    A[Tests: Define BDD scenarios as series of steps, also define specific setup and teardown] --> |contains| B[Steps: encapsulate UI or API operations and verifications, and may be composed of other steps]
    B --> |contains| C[Configurations: can be per environment, such as dev, qa, staging, and contain URLs, users, authentication schemes, encryption, etc.]
    B --> |uses| D[Matchers: Hamcrest matchers for single objects or for iterables]
    A --> |contains| C
    B --> |uses| E[Models: domain objects]
    subgraph Inheritance
        A1[GenericTests] -.-> |inherits| A2[Tests]
        B1[GenericSteps] -.-> |inherits| B2[Steps]
        C1[AbstractConfiguration] -.-> |inherits| C2[Configuration]
    end
```
## Extending the Framework
To add support for a new technology (e.g., messaging, database), create:

- `MyTechConfiguration(BaseConfiguration)`
- `MyTechSteps(GenericSteps[MyTechConfiguration])`
- `MyTechTests(AbstractTestsBase[MyTechSteps, MyTechConfiguration])`

This pattern ensures you reuse the core BDD, configuration, and reporting mechanisms.
## Generic Type Parameters (Keep It Minimal)
Core inheritance rules:
- Steps always derive from GenericSteps.
- Tests always derive from AbstractTestsBase (the generic tests base).
- Configurations always derive from BaseConfiguration.
- Infrastructure steps/tests/configs must be extensible by design.
Use type parameters only when the domain requires them:
- Messaging (Kafka, RabbitMQ): use `K` (message key/index) and `V` (message payload), because handlers and matchers are keyed and payload-aware.
- Non-messaging domains (REST, UI): avoid `K`/`V`. Use only the configuration type parameter (e.g., `TConfiguration`) and, if needed, a steps type parameter.
This keeps the architecture generic while making it obvious when and why K/V appear.
Generic templates (PEP 695):

- Steps (non-messaging): `class XSteps[TConfiguration: BaseConfiguration](GenericSteps[TConfiguration])`
- Tests (non-messaging): `class XTests[TSteps: XSteps[Any], TConfiguration: BaseConfiguration](AbstractTestsBase[TSteps, TConfiguration])`
- Steps (messaging): `class XSteps[K, V, TConfiguration: BaseConfiguration](GenericSteps[TConfiguration])`
- Tests (messaging): `class XTests[K, V, TSteps: XSteps[Any, Any, Any], TConfiguration: BaseConfiguration](AbstractTestsBase[TSteps, TConfiguration])`
Guideline: do not hardcode configuration types in infrastructure classes; use a `TConfiguration` type parameter bounded by `BaseConfiguration` (or a domain base configuration) so users can extend configuration with extra properties.
```mermaid
classDiagram
    %% Core Abstractions
    class AbstractTestsBase {
        <<abstract>>
        +steps
        +_configuration
        +setup_method()
        +teardown_method()
    }
    class GenericSteps {
        <<abstract>>
        +given
        +when
        +then
        +and_
        +with_
        +retrying()
        +eventually_assert_that()
    }
    class BaseConfiguration {
        <<abstract>>
        +parser
    }
    %% UI Protocols
    class UiElement {
        <<protocol>>
        +click()
        +type()
        +clear()
        +text
    }
    class UiContext {
        <<protocol>>
        +find_element()
        +find_elements()
        +get()
    }
    %% Backend-Agnostic UI Layer
    class UiConfiguration
    class UiSteps {
        +ui_context()
        +at()
        +clicking()
        +typing()
        +the_element()
    }
    %% Technology-Specific Extensions
    class RestTests
    class RestSteps
    class RestConfiguration
    class SeleniumTests
    class SeleniumSteps
    class SeleniumUiElement
    class SeleniumUiContext
    class PlaywrightTests
    class PlaywrightSteps
    class PlaywrightUiElement
    class PlaywrightUiContext
    %% Example: Custom Extension
    class TerminalXTests
    class TerminalXSteps
    class TerminalXConfiguration
    %% Core Relationships
    AbstractTestsBase <|-- RestTests
    AbstractTestsBase <|-- SeleniumTests
    AbstractTestsBase <|-- PlaywrightTests
    SeleniumTests <|-- TerminalXTests
    GenericSteps <|-- RestSteps
    GenericSteps <|-- UiSteps
    UiSteps <|-- SeleniumSteps
    UiSteps <|-- PlaywrightSteps
    SeleniumSteps <|-- TerminalXSteps
    BaseConfiguration <|-- RestConfiguration
    BaseConfiguration <|-- UiConfiguration
    UiConfiguration <|-- TerminalXConfiguration
    %% Protocol Implementations
    UiElement <|.. SeleniumUiElement : implements
    UiElement <|.. PlaywrightUiElement : implements
    UiContext <|.. SeleniumUiContext : implements
    UiContext <|.. PlaywrightUiContext : implements
    %% Usage Relationships
    RestTests o-- RestSteps : uses
    RestTests o-- RestConfiguration : configures
    SeleniumTests o-- SeleniumSteps : uses
    SeleniumTests o-- UiConfiguration : configures
    SeleniumTests o-- SeleniumUiContext : creates
    PlaywrightTests o-- PlaywrightSteps : uses
    PlaywrightTests o-- UiConfiguration : configures
    PlaywrightTests o-- PlaywrightUiContext : creates
    TerminalXTests o-- TerminalXSteps : uses
    TerminalXTests o-- TerminalXConfiguration : configures
    UiSteps o-- UiContext : uses
    SeleniumUiContext o-- SeleniumUiElement : returns
    PlaywrightUiContext o-- PlaywrightUiElement : returns
    %% Example extension note
    %% You can add new technologies by subclassing the three core abstractions:
    %% AbstractTestsBase, GenericSteps, and BaseConfiguration.
```
## Key Classes
| Class | Description |
|---|---|
| `AbstractTestsBase` | Base for all test scenarios; holds steps and config |
| `GenericSteps` | Base for all step implementations; provides BDD keywords |
| `BaseConfiguration` | Base for all configuration objects |
| `RestTests` | REST-specific test base |
| `RestSteps` | REST-specific steps |
| `RestConfiguration` | REST-specific configuration |
| `SeleniumTests` | Selenium-specific test base |
| `SeleniumSteps` | Selenium-specific steps |
| `PlaywrightTests` | Playwright-specific test base |
| `PlaywrightSteps` | Playwright-specific steps |
| `UiConfiguration` | Shared UI configuration for both Selenium and Playwright |
| `TerminalXConfiguration` | Example: custom UI configuration |
## Usage Examples

### Gherkin to Fluent API Mapping

BDD scenarios written in Gherkin map directly to fluent API method calls:
Gherkin scenario:

```gherkin
Scenario: Publish and consume message
  Given a queue handler
  When publishing message "test_queue"
  And consuming
  Then the received messages contain a message "test_queue"
```
Python implementation (from `RabbitMqSelfTests::should_publish_and_consume`):

```python
def should_publish_and_consume(self) -> None:
    (self.steps
        .given.a_queue_handler(self._queue_handler)
        .when.publishing([Message("test_queue")])
        .and_.consuming()
        .then.the_received_messages(yields_item(
            tracing(is_(Message("test_queue"))))))
```
Key points:

- `.given` → Given steps (setup/preconditions)
- `.when` → When steps (actions)
- `.and_` → And steps (additional actions/verifications)
- `.then` → Then steps (verifications using Hamcrest matchers)
- Method chaining enables a readable, sequential flow
- Type-safe throughout (generics for domain objects)
Stateful Scenarios:
Integration scenarios are inherently stateful: the test class is responsible for managing the lifecycle of core resources (such as connections, clients, or handlers) and creates resource-specific handler objects (e.g., queue handler, topic handler, client). The steps class receives the handler via a dedicated method (e.g., a_queue_handler, a_topic_handler) and provides a fluent BDD API for all subsequent operations. This enables step chaining and ensures that asynchronous/background operations (like message consumption) are coordinated through the handler. Cleanup and teardown are managed by the test class, which ensures proper resource disposal even in partial failure states. This pattern applies to messaging (RabbitMQ, Kafka), REST sessions, browser contexts, and similar integration domains.
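The lifecycle split described above can be sketched as follows. All names here (`FakeQueueHandler`, `a_queue_handler`, the plain `Steps`/`Tests` classes) are illustrative stand-ins, not framework APIs; in the framework, `setup_method`/`teardown_method` are invoked by pytest.

```python
class FakeQueueHandler:
    """Stand-in for a messaging handler whose lifecycle the test class owns."""

    def __init__(self) -> None:
        self.messages: list[str] = []
        self.closed = False

    def publish(self, message: str) -> None:
        self.messages.append(message)

    def close(self) -> None:
        self.closed = True


class Steps:
    def a_queue_handler(self, handler: FakeQueueHandler) -> "Steps":
        self._handler = handler  # state shared by all subsequent steps
        return self

    def publishing(self, message: str) -> "Steps":
        self._handler.publish(message)
        return self


class Tests:
    def setup_method(self) -> None:
        self._handler = FakeQueueHandler()  # resource created by the test class

    def teardown_method(self) -> None:
        self._handler.close()  # disposed even if the scenario partially failed

    def should_publish(self) -> None:
        Steps().a_queue_handler(self._handler).publishing("hello")


test = Tests()
test.setup_method()
try:
    test.should_publish()
finally:
    test.teardown_method()
print(test._handler.messages, test._handler.closed)  # → ['hello'] True
```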
### TerminalX Tests
```python
@pytest.mark.external
@pytest.mark.ui
class TerminalXTests(
        SeleniumTests[TerminalXSteps[TerminalXConfiguration],
                      TerminalXConfiguration]):
    _steps_type = TerminalXSteps
    _configuration = TerminalXConfiguration()

    # NOTE: sections may be further collected in superclasses and reused across tests
    def login_section(self, user: TerminalXUser) -> Self:
        (self.steps
            .given.terminalx(self.ui_context)
            .when.logging_in_with(user.credentials)
            .then.the_user_logged_in(is_(user.name)))
        return self

    def should_login(self):
        self.login_section(self.configuration.random_user)

    def should_find(self):
        (self.login_section(self.configuration.random_user)
            .steps.when.clicking_search())
        for word in ["hello", "kitty"]:
            (self.steps
                .when.searching_for(word)
                .then.the_search_hints(yields_item(tracing(
                    contains_string_ignoring_case(word)))))

    @override
    def setup_method(self) -> None:
        from selenium.webdriver import Firefox
        from selenium.webdriver.firefox.options import Options as FirefoxOptions
        from selenium.webdriver.firefox.service import Service as FirefoxService
        from webdriver_manager.firefox import GeckoDriverManager

        if self._configuration.parser.has_option("selenium", "browser_type") \
                and self._configuration.parser["selenium"]["browser_type"] == "firefox":
            options = FirefoxOptions()
            service = FirefoxService(GeckoDriverManager().install())
            self._web_driver = Firefox(options=options, service=service)
            self._web_driver.set_window_size(1920, 1080)  # type: ignore
        else:
            super().setup_method()
```
#### Browser Setup

For custom browser configuration (a different browser, custom options), override `setup_method()` in your test class.
The base classes (`SeleniumTests`, `PlaywrightTests`) provide sensible Chrome/Chromium defaults.
#### The Configuration

The configuration is loaded from two sources, in this example:

- The `TerminalXConfiguration` class looks for a matching `terminalx_configuration.ini` file under `configurations/`.
- pytest can be launched with a `--config` parameter to override or add properties:

```shell
pytest --config selenium:browser_type=firefox qa-pytest-examples/tests/terminalx_tests.py::TerminalXTests
```

Any subclass of `BaseConfiguration` looks for a matching ini file; this way, multiple configurations can be used.
If a `TEST_ENVIRONMENT` environment variable is set, its value is chained into the path of the ini file, so the configuration set to use can be selected at runtime.
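For illustration, a minimal `terminalx_configuration.ini` might look like the sketch below. Only the `[selenium]` section's `browser_type` key appears in this document's examples; every other section and key here is hypothetical.

```ini
; Hypothetical configuration sketch; only [selenium] browser_type is
; confirmed by the examples in this document.
[selenium]
browser_type = firefox

[ui]
; illustrative keys, not confirmed by the framework
landing_url = https://www.terminalx.com
```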
### Swagger Petstore Tests
```python
@pytest.mark.external
class SwaggerPetstoreTests(
        RestTests[SwaggerPetstoreSteps[SwaggerPetstoreConfiguration],
                  SwaggerPetstoreConfiguration]):
    _steps_type = SwaggerPetstoreSteps
    _configuration = SwaggerPetstoreConfiguration()

    @pytest.mark.parametrize("pet", SwaggerPetstorePet.random(range(4)))
    def should_add(self, pet: SwaggerPetstorePet):
        (self.steps
            .given.swagger_petstore(self.rest_session)
            .when.adding(pet)
            .then.the_available_pets(yields_item(tracing(is_(pet)))))
```
### Combined Tests
```python
@pytest.mark.external
@pytest.mark.ui
class CombinedTests(
        RestTests[CombinedSteps, CombinedConfiguration],
        SeleniumTests[CombinedSteps, CombinedConfiguration]):
    _steps_type = CombinedSteps
    _configuration = CombinedConfiguration()

    def should_run_combined_tests(self):
        random_pet = next(SwaggerPetstorePet.random())
        random_user = random.choice(self.configuration.users)
        (self.steps
            .given.swagger_petstore(self.rest_session)
            .when.adding(random_pet)
            .then.the_available_pets(yields_item(tracing(is_(random_pet))))
            .given.terminalx(self.ui_context)
            .when.logging_in_with(random_user.credentials)
            .then.the_user_logged_in(is_(random_user.name)))
```
### RabbitMQ Self Tests
```python
class RabbitMqSelfTests(
        RabbitMqTests[int, str,
                      RabbitMqSteps[int, str, RabbitMqSelfConfiguration],
                      RabbitMqSelfConfiguration]):
    _queue_handler: QueueHandler[int, str]
    _steps_type = RabbitMqSteps
    _configuration = RabbitMqSelfConfiguration()

    def should_publish_and_consume(self) -> None:
        (self.steps
            .given.a_queue_handler(self._queue_handler)
            .when.publishing([Message("test_queue")])
            .and_.consuming()
            .then.the_received_messages(yields_item(
                tracing(is_(Message("test_queue"))))))

    @override
    def setup_method(self) -> None:
        super().setup_method()
        self._queue_handler = QueueHandler(
            channel := self._connection.channel(),
            queue_name=require_not_none(
                channel.queue_declare(
                    queue=EMPTY_STRING, exclusive=True).method.queue),
            indexing_by=lambda message: hash(message.content),
            consuming_by=lambda bytes: bytes.decode(),
            publishing_by=lambda string: string.encode())

    @override
    def teardown_method(self) -> None:
        try:
            self._queue_handler.close()
        finally:
            super().teardown_method()
```
## qa_testing_utils.pytest_plugin

The QA Testing Utils pytest plugin provides shared testing infrastructure for Python monorepos and standalone projects using the qa-testing-utils module.
### Features

- Per-module logging configuration:
    - During test session startup, the plugin searches for a `logging.ini` file:
        - First under `tests/**/logging.ini`
        - Then under `src/**/logging.ini`
        - Falls back to `qa-testing-utils/src/qa_testing_utils/logging.ini`
    - This enables consistent, per-module logging without requiring repeated boilerplate.
- Source code inclusion in test reports:
    - Adds a `body` section to each test report with the source code of the test function (via `inspect.getsource()`), useful for HTML/Allure/custom reporting.
- Command-line config overrides (parsed but not yet consumed):
    - Adds a `--config` option that accepts `section:key=value,...` strings.
    - Intended for runtime configuration injection (e.g., overriding .ini files or test settings).
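To make the `section:key=value,...` format concrete, here is a hedged sketch of how such a string could be parsed; this is an illustration of the format only, not the plugin's actual implementation.

```python
# Parses "section:key1=val1,key2=val2" into nested dicts keyed by section.
# Illustrative sketch; the real plugin's parsing may differ.


def parse_config_overrides(spec: str) -> dict[str, dict[str, str]]:
    section, _, pairs = spec.partition(":")
    overrides: dict[str, dict[str, str]] = {section: {}}
    for pair in pairs.split(","):
        key, _, value = pair.partition("=")
        overrides[section][key] = value
    return overrides


print(parse_config_overrides("selenium:browser_type=firefox"))
# → {'selenium': {'browser_type': 'firefox'}}
```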
### Usage

1. Declare the plugin in your module's `pytest_plugins` (if not auto-loaded via PDM entry point):

    ```python
    pytest_plugins = ["qa_testing_utils.pytest_plugin"]
    ```

2. Place a `logging.ini` file under your module's `tests/` or `src/` directory. If none is found, the fallback from qa-testing-utils will be used.

3. Run your tests, optionally with runtime overrides:

    ```shell
    pytest --config my_section:key1=val1,key2=val2
    ```
Notes: This plugin is designed to be generic and reusable across any module consuming qa-testing-utils.
It is compatible with VSCode gutter test launching and monorepo test execution.
#### get_config_overrides()
Returns parsed --config overrides passed to pytest.
Safe to call from anywhere (e.g., BaseConfiguration).
Source code in qa-testing-utils/src/qa_testing_utils/pytest_plugin.py
#### pytest_addoption(parser)
Adds the --config command-line option for runtime config overrides.
Source code in qa-testing-utils/src/qa_testing_utils/pytest_plugin.py
#### pytest_configure(config)
Configures the pytest session, loading logging.ini and parsing config overrides.
Source code in qa-testing-utils/src/qa_testing_utils/pytest_plugin.py
#### pytest_runtest_makereport(item, call)
Generates a test report with the source code of the test function.
Source code in qa-testing-utils/src/qa_testing_utils/pytest_plugin.py
## Configuration File Discovery Pattern

All subclasses of `BaseConfiguration` automatically infer their configuration file location based on the module in which the configuration class is defined, not the test file location.
Pattern:

- The configuration file is expected at: `<module_dir>/configurations/${TEST_ENVIRONMENT}/<module_name>.ini`
- `<module_dir>`: the directory where the configuration class's module is located
- `${TEST_ENVIRONMENT}`: optional environment subdirectory (e.g., dev, ci, prod)
- `<module_name>.ini`: the stem of the configuration class's module file (e.g., `KafkaConfiguration` → `kafka_configuration.ini`)
Example:

If you use `KafkaConfiguration` from the qa-pytest-kafka module, the expected configuration file is:

`qa-pytest-kafka/src/qa_pytest_kafka/configurations/kafka_configuration.ini`

or, if using environments:

`qa-pytest-kafka/src/qa_pytest_kafka/configurations/dev/kafka_configuration.ini`
Note:

- The configuration file is not inferred from the test file location.
- This ensures that configuration is always colocated with the implementation module, supporting reuse and clarity across test modules.
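The discovery rule can be sketched as a small path-building function; this mirrors the documented pattern only and is not the framework's own code (the function name and signature are assumptions).

```python
# Derives the expected ini path from a configuration module's file path and
# an optional TEST_ENVIRONMENT value, per the documented discovery pattern.
from pathlib import PurePosixPath


def expected_ini_path(module_file: str, test_environment: str = "") -> str:
    module = PurePosixPath(module_file)
    parts = [str(module.parent), "configurations"]
    if test_environment:  # optional environment subdirectory
        parts.append(test_environment)
    parts.append(module.stem + ".ini")  # stem of the module file, not the class name
    return str(PurePosixPath(*parts))


print(expected_ini_path(
    "qa-pytest-kafka/src/qa_pytest_kafka/kafka_configuration.py", "dev"))
# → qa-pytest-kafka/src/qa_pytest_kafka/configurations/dev/kafka_configuration.ini
```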
## Kafka Configuration URL Format
Kafka configuration is now defined as a single URL string in `kafka_configuration.ini`:

```
kafka://<bootstrap_servers>/<topic>?group_id=<group_id>
```
Rules:

- The URL is required and validated on load.
- `scheme`, `netloc`, and `path` must be present; otherwise a `ValueError` is raised.
- `group_id` is required in the query string; missing values raise a `ValueError`.
- Legacy `bootstrap_servers`, `topic`, and `group_id` fields are no longer used.
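These rules can be applied with the standard library's `urllib.parse`; the sketch below is an illustration of the documented validation, not the Kafka module's actual implementation (the function name is an assumption).

```python
# Validates a kafka:// configuration URL per the documented rules and
# extracts its components. Illustrative sketch only.
from urllib.parse import parse_qs, urlparse


def parse_kafka_url(url: str) -> tuple[str, str, str]:
    """Returns (bootstrap_servers, topic, group_id), raising ValueError on violations."""
    parsed = urlparse(url)
    if not (parsed.scheme and parsed.netloc and parsed.path):
        raise ValueError("scheme, netloc, and path are required")
    query = parse_qs(parsed.query)
    group_ids = query.get("group_id")
    if not group_ids or not group_ids[0]:
        raise ValueError("group_id is required in the query string")
    return parsed.netloc, parsed.path.lstrip("/"), group_ids[0]


print(parse_kafka_url("kafka://localhost:9092/orders?group_id=qa"))
# → ('localhost:9092', 'orders', 'qa')
```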
## Kafka Module Alignment with Core Architecture
Kafka now mirrors the RabbitMQ structure:

- `KafkaTests` derives from `AbstractTestsBase` and uses `_steps_type` and `_configuration`.
- `KafkaSteps` accepts a `TConfiguration: KafkaConfiguration` type parameter to support derived configurations.
- Type parameter syntax uses Python 3.13 (PEP 695) across Kafka tests/steps.
## Error and Edge Case Handling (All Integration Modules)
For all integration modules (Kafka, RabbitMQ, REST, etc.), the following principles apply to error and edge case handling:
- Propagate API exceptions or return values: If an operation fails due to a test design error, invalid input, or system state (e.g., non-existent resource, invalid partition, serialization error, duplicate key), the module should propagate the underlying API exception or return value. This ensures failures are visible and actionable.
- Fail fast: Do not attempt to recover from permanent errors (e.g., configuration mistakes, resource not found, message too large). Surface the error immediately to the test.
- No extra retrying at the BDD layer: Do not wrap operations in additional retry logic unless the error is known to be transient and the underlying client does not already handle resilience. For most messaging and database clients, resilience is built-in; retrying at the BDD layer only delays test failure.
- Descriptive errors: Where possible, ensure that errors surfaced to the test are descriptive and actionable, aiding in rapid diagnosis.
- Consistent pattern: This approach ensures that all modules behave consistently, and test failures are clear and deterministic.