API & UI Tests
Documentation for the API & UI (also referred to as End-to-End) tests of the CIVITAS/CORE platform.
The tests are located in the `tests` folder of the core repository.
The tests are designed to validate that the platform components are correctly configured and integrated, across environments ranging from development to production.
Tests that are not safe to run in production (e.g. because they create or modify resources) can be excluded easily via command-line flags. Cleanup logic is implemented to ensure test isolation and prevent side effects in shared environments.
Tests are implemented using pytest and Playwright, providing a flexible and powerful foundation for API and UI testing. The chosen stack and its rationale are documented in this issue.
Requirements
- Language: Python >= 3.12
- Dependency Tracking: `uv`
- (Optional) Git Hooks: `pre-commit`
Setup
- Change into the tests folder: `cd tests`
- Create a virtualenv with `uv sync`
- Activate the venv: `source .venv/bin/activate`
- Install the Playwright requirements: `playwright install`
- (Optional but strongly recommended) Install the pre-commit hooks: `pre-commit install`
  - They ensure that all changed files are always formatted
  - This avoids changes caused merely by different IDE or system settings
- Create a file `tests/.env` based on the file `tests/.example.env` and adjust the values (see the illustrative excerpt below)
- (Optional) For platforms in which the GeoServer is not already configured, e.g. in a CI/CD pipeline or a newly installed platform:
  - If you want to create dataspaces and a test db automatically, set all values for `GEOSTACK_...` and `CREATE_...`
  - If you want to create a test schema in a database, do a port forward to the database
  - Run `pytest --only-geoserver-setup tests/e2e_tests`
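The authoritative list of keys lives in `tests/.example.env`; the excerpt below is purely illustrative, and every variable name in it is hypothetical apart from the `GEOSTACK_...`/`CREATE_...` prefixes mentioned above:

```
# Illustrative excerpt only; the real keys are listed in tests/.example.env.
# All names here are hypothetical placeholders.
PLATFORM_BASE_URL=https://platform.example.org
TEST_USER=e2e-test-user
TEST_USER_PASSWORD=change-me

# Only needed for the optional GeoServer setup (prefixes per the list above):
GEOSTACK_WORKSPACE=test-workspace
CREATE_TEST_DB=true
```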
If you don't want to use `uv`, or have difficulties with it, you can install the required packages from `requirements.txt`.
The file can be updated with `uv export --format requirements-txt -o requirements.txt`.
Note: GeoServer Configuration
The tests also contain functions to configure the GeoServer in `tests/e2e_tests/test_geoserver_setup.py`.
They leverage existing utilities from the testing framework and are kept here temporarily until they are integrated into the Ansible playbook (see: GitLab Issue).
They are only executed when run with `--only-geoserver-setup` and are idempotent.
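The actual setup lives in the file above; the following is only a minimal sketch of the idempotency pattern it relies on, using `requests` against GeoServer's REST API (URL, workspace name, and credentials are placeholders):

```python
import requests

GEOSERVER = "https://example.org/geoserver"   # placeholder URL
AUTH = ("admin", "change-me")                 # placeholder credentials

def ensure_workspace(name: str) -> None:
    """Create the workspace only if it does not exist yet (idempotent)."""
    r = requests.get(f"{GEOSERVER}/rest/workspaces/{name}.json", auth=AUTH)
    if r.status_code == 200:
        return  # already configured, nothing to do
    r = requests.post(
        f"{GEOSERVER}/rest/workspaces",
        json={"workspace": {"name": name}},
        auth=AUTH,
    )
    r.raise_for_status()
```

Because each step first checks the current state, re-running the setup against an already configured platform is a no-op.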
Running Tests
End-to-End Tests
- Change into the e2e test folder: `cd e2e_tests`
- Run all tests: `pytest .`
- Run tests for a component: `pytest --component apisix .`
  - Runs all tests in which the specified component(s) are included/used in some way
- Run tests for multiple components: `pytest --component apisix --component keycloak .`
- Run tests which are safe to run in production environments: `pytest --prod-safe .`
- Run only API/UI tests: `pytest --test-type api` or `pytest --test-type ui`
- Update testcases (automatically included in pre-commit hook): `pytest --update-testcases .`
Pytest CLI Options
The following command-line options are available for test selection:
- `--update-testcases`: Regenerate the documentation file based on test metadata and docstrings
- `--prod-safe`: Run only tests marked as `prod_safe`
- `--component <component>`: Filter tests by component (can be specified multiple times)
- `--test-type <type>`: Filter tests by test type (e.g., `api`, `ui`; can be specified multiple times)
- `--only-geoserver-setup`: Run only the setup logic for GeoServer-related tests
- `--headed`: Run Playwright UI tests with a visible browser window. Helps debugging.
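For orientation, this is roughly how such options are wired up in pytest. A simplified sketch, not the suite's actual `conftest.py`; `cc_meta` is a hypothetical attribute standing in for whatever `@cc_test` attaches to each test function:

```python
# conftest.py (simplified sketch)
import pytest

def pytest_addoption(parser):
    parser.addoption("--prod-safe", action="store_true", default=False,
                     help="Run only tests marked as prod_safe")
    parser.addoption("--component", action="append", default=[],
                     help="Filter tests by component (repeatable)")

def pytest_collection_modifyitems(config, items):
    wanted = set(config.getoption("--component"))
    for item in items:
        # "cc_meta" is hypothetical; the real suite reads the @cc_test metadata.
        meta = getattr(item.function, "cc_meta", None)
        if meta is None:
            continue
        if config.getoption("--prod-safe") and not meta["prod_safe"]:
            item.add_marker(pytest.mark.skip(reason="not prod_safe"))
        if wanted and wanted.isdisjoint(meta["components"]):
            item.add_marker(pytest.mark.skip(reason="component filter"))
```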
Test Documentation
All implemented testcases are listed in `tests/e2e_tests/testcases.md`.
This file is automatically updated when running `pytest --update-testcases .` or when using pre-commit.
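Conceptually, the generator just renders the metadata collected from `@cc_test` into markdown. A rough sketch under the assumption of a simple metadata record and a table layout (the actual generator and file format may differ):

```python
from dataclasses import dataclass

@dataclass
class TestMeta:  # hypothetical record mirroring @cc_test's arguments
    test_id: str
    title: str
    components: list[str]
    test_type: str
    prod_safe: bool

def render_testcases(metas: list[TestMeta]) -> str:
    """Render all collected test metadata as a markdown table."""
    rows = ["| ID | Title | Components | Type | Prod safe |",
            "| --- | --- | --- | --- | --- |"]
    for m in sorted(metas, key=lambda m: m.test_id):
        rows.append(f"| {m.test_id} | {m.title} | {', '.join(m.components)} "
                    f"| {m.test_type} | {'yes' if m.prod_safe else 'no'} |")
    return "\n".join(rows) + "\n"
```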
Writing Tests
End-to-End Tests
End-to-end tests validate the integration of system components under real-world conditions. The test suite is structured to ensure maintainability, parallelizability, and traceability.
File Organization
- Each component must have its own test file:
  - Test cases: `test_<component>.py`
  - Fixtures: `fixtures/fixtures_<component>.py`
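Applied to the `apisix` component, for example, this convention yields the following layout (illustrative):

```
e2e_tests/
├── test_apisix.py
└── fixtures/
    └── fixtures_apisix.py
```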
Test Annotation
Use the `@cc_test` decorator to annotate tests with metadata:
- `test_id`: Unique identifier (e.g., `GM-01`)
- `title`: Short description of the test
- `components`: List of components involved (from the `Component` enum)
- `test_type`: Type of test (from the `TestType` enum, e.g., `UI`, `API`)
- `prod_safe`: Boolean flag indicating if the test can run in production environments (e.g., platforms which are actually used and not just temporary)
This metadata enables:
- Filtering test runs using pytest CLI options
- Automatic generation of documentation of all testcases
Example:
```python
@cc_test(
    test_id="GM-01",
    title="User can login to grafana monitoring",
    components=[Component.KEYCLOAK, Component.GRAFANA_MONITORING],
    test_type=TestType.UI,
    prod_safe=True,
)
def test_grafana_monitoring_login(...):
    """
    given: A test user with grafanaAdmin role
    when: User logs in to Grafana via OAuth
    then: The Kubernetes / API server dashboard should be visible
    """
```
Test Design Guidelines
- Setup and Teardown: Use `contextlib.ExitStack` for complex resource cleanup
- Production Safety: Mark tests as `prod_safe=True` only if they are non-destructive and the platform state is unchanged. Temporary data can be created during the test, but must be cleaned up afterward.
- Idempotency: Ensure tests can be run multiple times without side effects
- Parallelization: Design tests to be safely executable in parallel test runs (e.g., isolated data and resources)
Example fixture using `ExitStack`:
```python
import pytest
from contextlib import ExitStack

@pytest.fixture
def stellio_api_factory(...):
    with ExitStack() as stack:
        def inner(user: str, scope: str):
            ...  # create a request_context for the given user and scope (elided)
            # Register the cleanup so every created context is disposed
            # when the fixture is torn down:
            stack.callback(request_context.dispose)
            return request_context

        yield inner
```
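A test can then request the factory and create as many contexts as it needs; disposal happens automatically when the fixture goes out of scope. An illustrative usage, where user, scope, and endpoint are placeholders:

```python
def test_stellio_api_access(stellio_api_factory):
    api = stellio_api_factory(user="test-user", scope="read")  # placeholder values
    response = api.get("/ngsi-ld/v1/types")  # placeholder endpoint
    assert response.ok
```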