Upgrade Guide
This guide provides step-by-step instructions to help you transition your Ansible files from earlier versions. Each section highlights specific configuration changes and the necessary actions to update your files manually.
Follow each section carefully to ensure your configuration reflects the latest structure and standards. Where applicable, this guide also includes example values and references to additional documentation to assist you with specific settings.
Updating Inventory File from CIVITAS/CORE v1.2.0 to v1.3
This guide provides step-by-step instructions to help you transition your CIVITAS/CORE configuration from version v1.2.0 to v1.3. It highlights important changes in the inventory file and deployment procedure, along with the necessary manual actions to update your setup. Please follow each section carefully to ensure your configuration reflects the latest structure and features.
Note: This guide addresses changes in the `default_inventory.yaml` file. All settings in this file are default settings and can be overridden by user settings in your own `inventory.yaml` (or similar) file. Thus, not all of the changes described in the following sections will necessarily affect your configuration.
APISIX Upgrade Incompatibility: The API gateway (Apache APISIX) Helm chart has undergone major changes and is not directly upgrade-compatible. The Ansible upgrade will fail with errors if the existing APISIX release is not handled. You must either disable APISIX in your upgrade inventory or remove the existing APISIX Helm release before running the upgrade. Backup any custom APISIX routes you have created (e.g. via Dashboard or Admin API) before removal. These routes can be exported as JSON and later re-imported via the APISIX Admin API after the upgrade.
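A hedged sketch of that backup-and-remove step follows; the Admin API port, API key, service, release, and namespace names are assumptions, so adjust them to your deployment:

```bash
# Reach the APISIX Admin API locally (service name and port are assumptions)
kubectl port-forward -n production-access-stack svc/apisix-admin 9180:9180 &

# Export all routes as JSON so they can be re-imported after the upgrade
curl -s http://127.0.0.1:9180/apisix/admin/routes \
  -H "X-API-KEY: <your-admin-api-key>" > apisix-routes-backup.json

# Either disable APISIX in the inventory, or remove the existing release
helm uninstall apisix -n production-access-stack
```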
Keycloak Roles Notice: During the upgrade, default Keycloak roles for the admin user (e.g. `realm-management` roles) will be re-added if missing. If you intentionally removed any default roles from the Keycloak admin user, be prepared to remove them again after the upgrade is complete.
Each of the following sections outlines a specific change to the inventory file or deployment procedure, followed by actions required to adapt your configuration.
1. Email Server StartTLS Setting
Change: The SMTP email configuration now includes a `starttls` flag to explicitly enable StartTLS. In previous versions, SSL/TLS was implied by the port. Now, port 587 with `starttls: true` is the default for secure SMTP (instead of port 465).
Action: Add or update the `starttls` setting under your email server configuration. For example, if you use the default email settings:
```yaml
email:
  server: "smtp.yourprovider.com"
  port: 587
  user: "noreply@{{ DOMAIN }}"
  password: "YOUR_EMAIL_PASSWORD"
  email_from: "noreply@{{ DOMAIN }}"
  starttls: true
```
If your SMTP provider requires a different setup (for instance, implicit SSL/TLS on port 465), adjust `port` and `starttls` accordingly (set `starttls: false` for implicit SSL/TLS). This ensures the platform can send emails (e.g. Keycloak notifications) using the correct TLS settings.
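To confirm that your provider actually offers StartTLS on the configured port, a quick check with openssl can help (the hostname is a placeholder):

```bash
# Performs the SMTP STARTTLS handshake and prints brief connection details
openssl s_client -connect smtp.yourprovider.com:587 -starttls smtp -brief
```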
2. Kubernetes Pod Security Admission
Change: You can now optionally enforce Kubernetes Pod Security Standards (podSecurityAdmission) for each namespace via the inventory. This introduces a new toggle under `inv_k8s` to enable Pod Security Admission checks.
Action: If you want the cluster to enforce Pod Security Admission levels (e.g. baseline/restricted) on CIVITAS/CORE namespaces, add or update the following in your inventory:
```yaml
inv_k8s:
  podSecurityAdmission:
    enable: false # set to true to enforce namespace-level Pod Security standards
```
By default this is `false` (no automatic enforcement). If set to `true`, CIVITAS/CORE will label namespaces to adhere to Kubernetes Pod Security Standards. Ensure your workloads comply with the chosen Pod Security level before enabling.
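Pod Security Admission works through standard Kubernetes namespace labels, so you can preview the effect before enabling the toggle. A small sketch (namespace name and level are examples; the labels actually applied depend on the playbook):

```bash
# Server-side dry run: reports existing pods that would violate the chosen level
kubectl label --dry-run=server --overwrite namespace production-context-management \
  pod-security.kubernetes.io/enforce=baseline

# After enabling, inspect which pod-security labels a namespace carries
kubectl get namespace production-context-management --show-labels | grep pod-security
```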
3. Cert-Manager Issuer Configuration
Change: The cert-manager configuration now includes a flag to automatically create the Let’s Encrypt ClusterIssuer. A new field `create_letsencrypt_issuer: true` is added, and the issuer names for staging/production have been renamed or restructured.
Action: Update the cert-manager settings in your inventory as follows:
```yaml
inv_k8s:
  cert_manager:
    le_email: "admin@{{ DOMAIN }}"
    issuer_name: "letsencrypt-prod"
    create_letsencrypt_issuer: true # new flag to auto-create the issuer
```
This ensures that on deployment, a ClusterIssuer for Let’s Encrypt (production) will be created automatically (using the provided email). If you use custom issuers or certificates, you can set this to `false` and manage issuers manually.
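After deployment you can quickly confirm the issuer was created and is ready (assuming the default `issuer_name` shown above):

```bash
# READY should become True once the ACME account is registered
kubectl get clusterissuer letsencrypt-prod
```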
4. Cluster Policy Controller (Kyverno)
Change: A Kyverno operator has been introduced as an optional component for policy enforcement. Kyverno can validate and mutate Kubernetes resources based on policies, adding an extra layer of security and governance.
Action: If you wish to use Kyverno for custom policies, enable it in the Operation Stack section of the inventory:
```yaml
inv_op_stack:
  kyverno_operator:
    enable: false # set true to deploy Kyverno
    ns_name: "{{ ENVIRONMENT }}-operation-stack"
```
By default, this is disabled (`false`). To activate Kyverno, set `enable: true`. The namespace can remain as the default (it will install into the operation stack namespace). Ensure that your cluster has sufficient resources and that you have any Kyverno policies ready to apply (policy creation is outside the scope of the platform deployment).
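As a starting point for your own policies, here is a minimal, hypothetical Kyverno ClusterPolicy in audit mode; the policy name and rule are illustrative only and are not shipped with the platform:

```bash
# Apply a simple audit-mode policy that reports pods missing a common label
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label # example policy, not part of CIVITAS/CORE
spec:
  validationFailureAction: Audit
  rules:
    - name: check-app-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods should carry an app.kubernetes.io/name label."
        pattern:
          metadata:
            labels:
              app.kubernetes.io/name: "?*"
EOF
```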
5. PostgreSQL Operator – Spilo Image Configuration
Change: The PostgreSQL Operator now allows specifying a custom Spilo image for the database pods. In v1.3, the default has been updated to use the Spilo 15 image (for PostgreSQL 15). A new `spilo_postgres_image` section is added to the inventory to control the registry, repository, and tag of the Spilo image.
Action: Add the new Spilo image configuration under `inv_central_db.spilo_postgres_image`:
```yaml
inv_central_db:
  spilo_postgres_image:
    image_registry: "ghcr.io"
    image_repository: "zalando/spilo-15"
    image_tag: "3.2-p1"
```
These default values point to Zalando’s Spilo image for PostgreSQL 15. If you use the built-in Postgres Operator, ensure you update this to a PostgreSQL 15-compatible image (as shown above). If you had overridden the Postgres image previously or are using a central external database, adjust accordingly. This change accompanies the move from PostgreSQL 14 to 15 (introduced in v1.1) and ensures the operator uses the correct image.
6. Keycloak Configuration Restructuring
Change: The inventory structure for Keycloak (the Identity Management component) has been reworked to support multi-tenant scenarios. Some keys have moved or changed:
- `realm_name`, `scope`, and the `enable_events` flags, which were previously under `inv_access.keycloak`, are now at the top level of `inv_access` (or under a new `tenant` context, explained below).
- `k8s_secret_name` (for the Keycloak admin credentials secret) has moved from the `keycloak` section into the `platform` section.
- A new field `hostname` is added under `inv_access.platform` to explicitly set the Keycloak base URL (e.g. `"https://idm.{{ DOMAIN }}"` by default).
- The Keycloak theme configuration has been simplified (the `theme_url` option is no longer in the default inventory, and the theme is set to `"keycloak"` by default).
Action: Modify your inventory as follows to match the new structure:
- Under your Access Management stack (`inv_access`): ensure the `platform` section contains the Keycloak admin secret name and hostname:

  ```yaml
  inv_access:
    platform:
      admin_email: "admin@{{ DOMAIN }}"
      master_username: "admin@{{ DOMAIN }}"
      master_password: "<your-admin-password>"
      k8s_secret_name: "{{ ENVIRONMENT }}-keycloak-admin" # moved here
      hostname: "https://idm.{{ DOMAIN }}" # new field for Keycloak URL
  ```

- Move the realm and event settings out of the old `keycloak` section. For example:

  ```yaml
  realm_name: "{{ ENVIRONMENT }}"
  scope: "openid"
  enable_events: true
  enable_adminEvents: true
  ```

  These should be directly under `inv_access` (at the same level as `platform` and `keycloak` in the YAML hierarchy).

- In the `keycloak` sub-section (now primarily for Keycloak deployment settings), keep `enable: true` (if you deploy Keycloak as part of the platform), and update any other settings that remain (like `log_level`, `replicas`, etc.). The `theme` can remain `"keycloak"` unless you use a custom theme (if you had set `theme_url` previously, note that this option is no longer in the default schema; custom theme deployment may require a different approach or manual step).
After these changes, your `inv_access` section will look roughly like:
```yaml
inv_access:
  platform:
    admin_email: "admin@{{ DOMAIN }}"
    master_username: "admin@{{ DOMAIN }}"
    master_password: "..."
    k8s_secret_name: "{{ ENVIRONMENT }}-keycloak-admin"
    hostname: "https://idm.{{ DOMAIN }}"
  tenant:
    # ... (if multi-tenant, see next section)
  realm_name: "{{ ENVIRONMENT }}"
  scope: "openid"
  enable_events: true
  enable_adminEvents: true
  keycloak:
    enable: true
    log_level: "INFO"
    replicas: 1
    enable_logical_backup: false
    theme: "keycloak"
```
Make sure to remove any deprecated keys (for example, `inv_access.base_domain` was removed in an earlier version; ensure you’re not using it). The above structure ensures your Keycloak admin credentials and realm configuration align with v1.3 expectations.
7. (Optional) Multi-Tenant Platform Support
Change: CIVITAS/CORE v1.3 introduces basic multi-tenant management. You can now install a shared “base” platform and then onboard multiple tenants on the same cluster. Each tenant corresponds to a separate Keycloak realm and its own set of services, while core components are shared. This is a significant new feature but optional – if you continue with a single-tenant deployment, you can ignore the multi-tenant setup (just ensure the inventory changes from the previous section are applied).
For those who plan to use multi-tenancy, the deployment process and inventory are structured with new Ansible tags:
- The `base` installation (shared components: Keycloak, PostgreSQL Operator, central database, Kyverno, Keel, Monitoring, etc.).
- One or more `tenant` installations (each deploying tenant-specific components and creating a realm in the shared Keycloak).
Action: If you intend to leverage multi-tenancy, follow these steps:
- Split your inventory into a base inventory and one (or more) tenant inventory files:
  - The base inventory should have `inv_access.keycloak.enable: true` (to deploy the shared Keycloak) and common settings. Omit any tenant-specific overrides here.
  - Each tenant inventory should set `inv_access.keycloak.enable: false` (to avoid redeploying Keycloak for each tenant) and include tenant-specific values (see below).
- Add the new `tenant` section in each tenant’s inventory. This section defines the initial admin user for that tenant’s realm and whether to use an external IDM:

  ```yaml
  inv_access:
    tenant:
      configure_external_idm: false # false = use the platform’s Keycloak
      tenant_first_name: "Admin"
      tenant_surname: "Tenant"
      tenant_email: "tenantadmin@{{ DOMAIN }}"
      tenant_username: "tenantadmin@{{ DOMAIN }}"
      tenant_password: "CHANGE_ME"
    realm_name: "{{ ENVIRONMENT }}"
    scope: "openid"
    enable_events: true
    enable_adminEvents: true
  ```

  Set `configure_external_idm: false` if the tenant will use the shared Keycloak (most cases). Only set it to `true` if you have a special scenario with an external identity provider. The other fields define the tenant realm’s admin user credentials (this user will be created in Keycloak for that realm).
- In tenant inventories, ensure `realm_name` is set to a unique realm name for that tenant (by default it can still use `{{ ENVIRONMENT }}`, but if you run multiple tenants in one cluster, they must each have a distinct realm name).
- Run the Ansible playbook in two stages:
  - Base installation: `ansible-playbook -i base_inventory.yml -l localhost core_platform/playbook.yml -t base`. This installs the shared base components (it will set up Keycloak, database operator, etc., but skip tenant-specific ones).
  - Tenant installation: `ansible-playbook -i tenant1_inventory.yml -l localhost core_platform/playbook.yml -t tenant` (repeat for each tenant inventory, e.g., tenant2, tenant3, etc.). This will install the resources for that tenant (e.g., create the tenant’s realm in Keycloak, deploy any dedicated services for the tenant).

Each tenant will use the base installation’s Keycloak (as a new realm) and shared infrastructure. This approach significantly reduces resource overhead for multiple tenants. If you do not need multi-tenancy, you can continue with the standard single-tenant deployment (do not use the `base` or `tenant` tags). In that case, just ensure the inventory has `tenant.configure_external_idm: false` and includes the tenant admin fields (they will be ignored in single-tenant mode but satisfy the schema).
8. APISIX Data Plane Mode Configuration
Change: The APISIX API Gateway can now be deployed in a hybrid mode separating the control plane and data plane. In the inventory, a new flag `inv_access.apisix.data_plane.enable` under the Access stack’s database section controls whether an APISIX data-plane instance is deployed. In a single-tenant deployment, this remains `true` (the default) so that APISIX is installed normally. In a multi-tenant scenario, you might deploy a central APISIX control plane (Admin API and etcd) in the base installation and then run only data-plane instances per tenant.
Action: By default, the inventory sets `inv_access.apisix.data_plane.enable: true` for the Access stack’s gateway database component. This ensures APISIX is deployed as usual. If you are splitting base and tenants:
- In the base inventory, you may set `inv_access.apisix.data_plane.enable: false` (to deploy APISIX’s etcd and control components without a proxy instance).
- In each tenant inventory, ensure `inv_access.apisix.data_plane.enable: true` so that an APISIX proxy instance (data plane) is deployed for that tenant.
The relevant section in the inventory is under `inv_access` (commonly labeled `database` in the default inventory, referring to the API gateway/management database and components):
```yaml
inv_access:
  apisix:
    data_plane:
      enable: true
```
For a standard single deployment, leave this as `true`. If modifying, adjust the values in base/tenant as described. Note: the internal naming of this section as "database" is historical (covering API gateway resources including its etcd storage). The key `inv_access.apisix.data_plane.enable` simply gives more flexibility in multi-instance deployments.
9. APISIX Dashboard (GUI) Optional Deployment
Change: CIVITAS/CORE now allows enabling or disabling the APISIX Dashboard, which is a web UI for managing the API gateway. Previously, if bundled, it might have been enabled by default; in v1.3 it is disabled by default for security (the Dashboard exposes control of the gateway). You can enable it through inventory settings and provide credentials.
The APISIX Dashboard is no longer maintained. It does not support all features of APISIX and is not recommended for production use.
Action: If you want to use the APISIX Dashboard UI to manage routes and plugins in APISIX via a web interface, update the inventory to enable it and set a secure password:
```yaml
inv_access:
  database:
    # ... other settings ...
    dashboard:
      enable: true
      jwt_secret: "<some_long_random_string>"
      admin:
        username: "admin" # Dashboard login username
        password: "CHANGE_ME_SECURELY" # Dashboard login password
```
Make sure to replace `jwt_secret` with a randomly generated secret (the JWT token signing key for the dashboard) and `username`/`password` with the desired admin credentials for the Dashboard. If you do not need the APISIX Dashboard, keep `enable: false` (or simply leave it at the default). When disabled, the Dashboard will not be deployed, reducing the attack surface of your platform.
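One simple way to generate a suitable random value for `jwt_secret` (any sufficiently long random string works):

```bash
# Produces 64 hex characters; paste the output into the jwt_secret field
openssl rand -hex 32
```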
10. OIDC Authority URL Configuration
Change: The OpenID Connect authority URL (used for OIDC login flows, e.g., in the Service Portal or other OIDC-enabled components) is now derived from the new `inv_access.platform.hostname` variable. In v1.2, it was often hard-coded as `"https://idm.{{ DOMAIN }}/auth/realms/{{ ENVIRONMENT }}"`. In v1.3, the inventory uses a template that references `platform.hostname` for flexibility (especially useful if using an external IDM or a non-default host).
Action: No action is required if you use the default naming (idm subdomain) and have updated `platform.hostname` as described in section 6. The OIDC endpoints will automatically use `"{{ inv_access.platform.hostname }}/auth/realms/{{ ENVIRONMENT }}"`. However, if you had a custom OIDC authority configured, ensure it aligns with the new pattern:
```yaml
oidc_authority: "{{ inv_access.platform.hostname | default(inv_access_platform_hostname) }}/auth/realms/{{ ENVIRONMENT }}"
```
In simpler terms, just verify that your `inv_access.platform.hostname` is set to the correct Keycloak base URL (commonly `https://idm.your-domain`). The playbook will use that to form OIDC URLs for components like the Service Portal.
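A quick way to verify the resulting authority URL is to fetch the realm’s OIDC discovery document (hostname and realm below are placeholders):

```bash
# Should return a JSON document whose "issuer" matches the authority URL
curl -s https://idm.example.com/auth/realms/production/.well-known/openid-configuration
```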
11. MasterPortal and Map Components Version Update
Change: The Geoportal (MasterPortal) front-end and related components have new versions. In the default inventory, the image tags for these components have been updated from `v1.2.0` to the latest development tag (currently `"main"` for the devel branch, which corresponds to v1.3). This affects:
- `masterportal_1` (the main geo-portal UI)
- `masterportal_2` (if a second instance is defined)
- `mapfish` (MapFish print service)
- `portal_backend` (the backend service supporting authentication for MasterPortal)
Action: Update the image tags for the above components in your inventory to match the v1.3 release. For example:
```yaml
inv_cm:
  masterportal_1:
    image_tag: "v1.3.0" # or "main" if you use the development tag
  masterportal_2:
    image_tag: "v1.3.0"
  mapfish:
    image_tag: "v1.3.0"
  portal_backend:
    image_tag: "v1.3.0"
```
Using the official release tag (e.g., `v1.3.0` once available) is recommended for stability. If you leave these at `v1.2.0`, you will be running an older version of the geo-portal components, which may not be fully compatible with the upgraded platform. Thus, ensure these are bumped to the version corresponding to the platform upgrade.
12. GIS Database – Multiple Databases Support
Change: The GIS database configuration (`inv_cm.gisdb`) now supports defining additional databases. In v1.2, typically one PostGIS database was used for all geo-data. In v1.3, an `additionalDatabases` list can be specified, allowing the playbook to create multiple databases in the PostGIS cluster. This is useful for separating data contexts or tenants (if needed) at the database level.
Action: This change is optional. If you require multiple PostGIS databases, add them in the inventory:
```yaml
inv_cm:
  gisdb:
    enable: true
    # ... other settings (host, storageSize, etc.) ...
    additionalDatabases:
      - "extra_db1"
      - "extra_db2"
```
Each name in the list will result in a new database (with that name) being created in the GIS Postgres cluster. If you do not need additional databases, you can leave `additionalDatabases: []` or omit it (the default behavior remains a single database as before).
Remember that if you add databases here and you use GeoServer, you might need to configure GeoServer to connect to them (see JNDI connections below).
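After deployment, you can confirm the extra databases exist by listing databases inside the GIS Postgres pod; pod name, namespace, and user in this sketch are assumptions, so substitute your own:

```bash
# The names from additionalDatabases should appear in the listing
kubectl exec -n production-context-management gisdb-0 -- psql -U postgres -c '\l'
```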
13. GeoServer Configuration Enhancements
Change: The GeoServer component has a number of new configuration options in v1.3 to improve flexibility:
- Memory settings: You can now adjust the Java memory parameters for GeoServer via `memoryInitial` and `memoryMaximum` (e.g., allocate more RAM if needed for large datasets).
- JNDI database connections: There is a toggle `enableJNDI` (default `false`). When enabled, GeoServer can use JNDI to connect to external databases. You can list such data sources in `additionalJNDIConnections` (each entry likely containing the JNDI name and DB details, as per GeoServer configs).
- Custom CA Certificate: A new `customCACert` field allows specifying a path to a custom CA certificate to be imported into GeoServer's trust store. This is useful if GeoServer needs to connect to services (or S3 endpoints) with self-signed or private CA certs.
- Selective Extension Installation: The inventory now provides lists for stable and community extensions to install. For example, in `stableExtensions` and `communityExtensions`, you can list desired GeoServer plugins. By default, some common ones are included (such as `importer-plugin`, `csw-iso-plugin`, `web-resource-plugin`, `metadata-plugin` in stable; and `cog-s3-plugin`, `sec-keycloak-plugin` in community). Notably, the Keycloak security plugin (`sec-keycloak-plugin`) is now available to integrate GeoServer with Keycloak auth if needed.
- S3 BlobStore settings: Under `s3access`, you can configure GeoServer to use an S3 bucket for data storage. You can set the S3 `region`, `endpoint`, and credentials (`user`, `password`) if using this feature.
- migrateDataDir: A migration of the data directory is necessary because data storage within the Kubernetes volume has been adjusted, which means the previous data and configurations are located in the wrong folder. If this option is enabled (the default), the data is migrated automatically and nothing needs to be adjusted.
Action: Review your `inv_cm.geoserver` section and update or add these new options as needed:
```yaml
inv_cm:
  geoserver:
    enable: true
    # ... existing settings like host, geoserverUser/Password ...
    dataDirSize: 25Gi
    cacheDirSize: 25Gi
    memoryInitial: 4G # Set initial JVM heap (e.g., 4G)
    memoryMaximum: 8G # Set max JVM heap (e.g., 8G)
    enableJNDI: false
    additionalJNDIConnections: [] # e.g., list your JNDI DB names if needed
    customCACert: "" # e.g., "/path/to/ca.crt" if needed
    stableExtensions:
      - importer-plugin
      - csw-iso-plugin
      - web-resource-plugin
      - metadata-plugin
    communityExtensions:
      - cog-s3-plugin
      - sec-keycloak-plugin
    s3access:
      region: "" # fill in if using S3 for data
      user: "" # optional, leave blank if using instance roles
      password: "" # optional
      endpoint: "" # e.g., "https://s3.amazonaws.com" or a custom endpoint
    migrateDataDir: true
```
For most upgrades, if you weren’t using these features, you can accept the defaults (which effectively keep behavior the same as in v1.2). If you plan to use the Keycloak plugin for GeoServer, ensure `enableJNDI: true` and the Keycloak extension in `communityExtensions`, and follow the GeoServer documentation for configuring the JNDI connection to the Keycloak database or other needed settings. If using S3 for data, fill in the `s3access` details and ensure the `cog-s3-plugin` extension is listed (as in the default). Adjust the memory settings if your GeoServer usage requires more memory than the default.
14. Harbor Image Registry Prefill Updates
Change: The optional Harbor prefill configuration (for replicating images to a private Harbor registry) has two new settings in v1.3:
- A `cron_schedule` for controlling the frequency of image synchronization.
- A `project_name` to specify which Harbor project to use for the images.
- Additionally, the `base_url` now expects the full URL including the `/api/v2.0` path of your Harbor instance.
Action: If you are using a custom Harbor registry to pre-populate images, update the `inv_infra.harbor` section:
```yaml
inv_infra:
  harbor:
    enable: true # or false if not using Harbor
    base_url: "https://<your-harbor-domain>/api/v2.0"
    username: "<harbor_username>"
    password: "<harbor_password>"
    project_name: "civitas_core" # Harbor project where images will be pushed
    cron_schedule: "0 0 0 * * */2"
    debug: false
```
The above `cron_schedule` is the default (it runs the prefill every 2 days; note the six-field cron format). Adjust the schedule as needed or set `enable: false` to disable automatic image replication. Ensure the `project_name` exists in your Harbor or will be created, and that the credentials have permission to push to it. The `base_url` must include the `api/v2.0` endpoint as shown (the default was adjusted accordingly).
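Before running the upgrade you can sanity-check the credentials and `base_url` against the Harbor API (host and project name are placeholders):

```bash
# Returns the project as JSON if it exists and the credentials are valid
curl -s -u '<harbor_username>:<harbor_password>' \
  'https://<your-harbor-domain>/api/v2.0/projects?name=civitas_core'
```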
15. New Feature: MQTT over TLS for FROST (SensorThings)
Change: The platform now supports secure MQTT and WebSocket access to the FROST (SensorThings API) service through APISIX.
Action: To enable MQTT and WebSocket support, configure the following in your inventory:
```yaml
inv_access:
  apisix:
    ingress_controller:
      enable: true
    data_plane:
      enable: true
      service_type: "LoadBalancer"
      mqtt_service_port: 8883
      mqtt_websocket_port: 9876

inv_cm:
  frost:
    mqtt:
      enable: true
```
The section `inv_access.apisix.stream` can be removed from your inventory, if it exists.
To disable either port, set its value to `0`.
Note: Full details are available in the MQTT/WSS documentation.
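As a quick smoke test after enabling MQTT over TLS, you can subscribe with mosquitto_sub; the hostname, CA file, and topic here are assumptions (the SensorThings topic namespace depends on your FROST configuration):

```bash
# Subscribe over TLS on the configured MQTT port (8883)
mosquitto_sub -h frost.example.com -p 8883 \
  --cafile /etc/ssl/certs/ca-certificates.crt \
  -t 'v1.1/Observations' -v
```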
By following the above changes and actions, you should be able to successfully migrate your CIVITAS/CORE platform from v1.2.0 to v1.3. Always review these changes in a staging environment if possible, and consult the official documentation for any clarifications. After upgrading, test the key functionalities (especially authentication flows, data ingestion, and any custom routes or integrations like MQTT) to ensure everything works with the new version. If you encounter issues or missing steps, please reach out to the CIVITAS/CORE community for support.
Updating Inventory File from CIVITAS/CORE v1.1 to v1.2
This section will guide you through the necessary updates for transitioning your Ansible inventory file from version v1.1 to v1.2. Each section outlines specific changes along with instructions for manually updating your inventory file.
Unfortunately, APISIX is not upgrade-compatible with the current Helm chart used. If you run Ansible anyway, the upgrade will fail with the message `Forbidden: updates to statefulset spec for fields ...`. You can either disable APISIX in your upgrade inventory, or remove the Helm release from your cluster.
Please BACK UP any manually created routes before running the upgrade.
Important Note on PV/PVC Cleanup: When removing the APISIX Helm release, the Persistent Volume (PV) and Persistent Volume Claim (PVC) will remain in the cluster and contain old data. During the upgrade, the old PV gets recycled with existing data, which prevents APISIX routes defined in the playbook from being properly recreated. To ensure routes are correctly updated with changes from version 1.1 to 1.2, you must not only delete the APISIX Helm release but also delete the associated PV and PVC afterwards.
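A hedged sketch of the full cleanup; release name, namespace, and PVC name are assumptions, so list the resources first and double-check what you delete:

```bash
# Remove the APISIX Helm release
helm uninstall apisix -n production-access-stack

# List the leftover claims, then delete the APISIX/etcd ones explicitly
kubectl get pvc -n production-access-stack
kubectl delete pvc data-apisix-etcd-0 -n production-access-stack

# Finally, remove any released PVs that belonged to those claims
kubectl get pv | grep apisix
```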
Default Keycloak roles for the admin user will be re-added if no longer present. If you removed these roles on purpose, please make sure you remove them again after upgrading.
1. Configurable Database Ports
Change: All database ports can now be configured individually if required.
Action: Add or update the following section in your inventory (optional, the default is `5432`):
```yaml
inv_mngd_db:
  keycloak_postgres:
    port: "5432" # Specify the desired port for Keycloak PostgreSQL
```
2. Private Registry Configuration
Change: Support for configuring a private registry for all platform container images has been added. If all platform container images are available in a private registry, it can be configured here. The naming concept resembles the Harbor configuration, i.e. `<project_name>/<original_docker_io_path>`.
Action: Add or update the following configuration:
```yaml
inv_op_stack:
  private_registry:
    enable: false # Enable if using a private registry
    # registry_username: ""
    # registry_password: ""
    # registry_url: ""
    # registry_full_url: ""
    # registry_user_email: ""
```
3. Default Service Configuration
Change: All services are now disabled by default (`enable` is almost always set to `false`).
Action: Review and adjust service enablement settings as needed.
4. Dataspace Configuration
Change: Dataspaces can now be configured/created via the inventory file. `additional_dataspaces` now supports creating multiple dataspaces. It is a list of dataspace names to be added to FROST and Stellio. All of these dataspaces will be added as private spaces.
Action: Add or update the following sections:
```yaml
inv_access:
  open_data_dataspace: ds_open_data # Default name of the open data space (public access)
  additional_dataspaces:
    - "additional_data_space1"
    - "additional_data_space2"
```
5. Removal of inv_access.group
Change: The `inv_access.group` key has been removed. Use `inv_access.open_data_dataspace` instead.
Action: Remove `inv_access.group` and ensure `open_data_dataspace` is configured appropriately.
6. Service Portal Configuration
Change: A service portal can now be configured as the entry page for the platform.
Action: Add the following configuration:
```yaml
inv_access:
  service_portal:
    enable: false # Enable the service portal
    ns_create: true
    ns_name: "{{ ENVIRONMENT }}-access-stack"
    ns_kubeconfig: "{{ kubeconfig_file }}"
    replicas: 1
    # Set the paths to the certificate, the CRL and the CRL in PEM format.
    # Only required for self-signed certificates (e.g. for local development).
    # If set, they will be offered for download on the portal page.
    certs:
      enable: false
      civitas_crt: "/workspaces/civitas-core/local_deployment/.ssl/civitas.crt"
      civitas_crl: "/workspaces/civitas-core/local_deployment/.ssl/civitas.crl"
      civitas_crl_pem: "/workspaces/civitas-core/local_deployment/.ssl/civitas.crl.pem"
    oidc:
      # If enabled, the user is logged in when accessing the portal
      enable: false
      oidc_authority: "https://idm.{{ DOMAIN }}/auth/realms/{{ ENVIRONMENT }}"
      oidc_client_id: "api-access"
      oidc_admin_role: ""
      oidc_user_role: ""
    apps:
      portainer: false # Enable if you installed a Portainer instance, e.g. via local deployment
```
7. Frost MQTT Session Affinity
Change: Support for configuring session affinity for FROST MQTT has been added. If you use WSL on Windows, set this to `None`.
Action: Add or update the following configuration:
```yaml
inv_cm:
  frost:
    mqtt:
      session_affinity: "ClientIP" # Set to None for local deployment on WSL2
```
8. Custom Harbor Instance Configuration
Change: Support for configuring a custom Harbor instance for container image replication has been added. With this configuration, the platform images are replicated to your Harbor instance.
Action: Add or update the following configuration:
```yaml
inv_infra:
  harbor:
    enable: false # Enable if using a custom Harbor instance
    username: ""
    password: ""
    debug: false
    base_url: ""
```
If you identify any additional necessary changes, please let us know.
Updating Inventory File from CIVITAS/CORE v1.0 to v1.1
This section will walk you through the necessary updates for transitioning your Ansible inventory file from version v1.0 to v1.1. Each section highlights specific changes along with guidance for manually updating your inventory file.
1. Ingress Configuration for local deployment and custom SSL certificates
Change: A new `ingress` section has been added to support local deployment of the platform. The `ca_path` value is set to the path of your local self-signed CA certificate (see the local deployment guide for details). The `http` value is set to `true` if you want to allow HTTP access (without SSL).
Action: Add or update the following sections:
```yaml
inv_k8s:
  ingress:
    ca_path: "" # Path to your local self-signed CA certificate
    http: false # Allow HTTP access (without SSL)
```
2. Prometheus and Grafana Storage Adjustments
Change: Prometheus can be disabled via the `inv_op_stack.prometheus` section.
Action: Add or update these values:
```yaml
inv_op_stack:
  prometheus:
    enable: false # Prometheus can be disabled
```
3. PostgreSQL Version Upgrade
Change: The PostgreSQL version is updated from `14` to `15`.
Action: For all PostgreSQL configurations, update `pg_version` to `15`:
```yaml
pg_version: "15" # PostgreSQL version
```
4. Inventory Access and IDM Configurations
Change: `inv_access.base_domain` has been removed.
Action: Remove `base_domain` from your inventory and update the remaining values accordingly.
5. APISIX and Mimir Enhancements
Change: Added an `ingress_controller` configuration within `apisix`. This allows enabling or disabling the Ingress controller. For a standard CIVITAS deployment, this can be set to `false`. This makes it possible to run multiple APISIX instances on the same cluster.
Action: Add the `ingress_controller` section within `apisix`:
```yaml
inv_access:
  apisix:
    ingress_controller:
      enable: false # Enable or disable the Ingress controller
```
6. Typo in Superset variable
Change: A typo was fixed in the variable `redis_auth_password` for Superset. In existing inventories this typo must be fixed, too.
Action: Correct the variable `redis_auth_pasword` in the `superset` section:
```yaml
inv_da:
  superset:
    redis_auth_password: "yoursecretgeneratedpassword"
```
7. APISIX Configuration
Change: Two new configuration options have been added to APISIX: `enableIPv6` and `volumePermissions`, providing more control over network configuration and permissions.
- enableIPv6: Controls whether APISIX listens on IPv6 addresses. Enabled (`true`) by default; it can be set to `false` if IPv6 support is not required.
- volumePermissions: Manages permissions for the etcd volume. If you encounter permission issues (e.g., on Minikube), set this to `true` to ensure correct permissions.
Action: Add or update the following configuration within your inventory:
```yaml
inv_access:
  apisix:
    enableIPv6: true # Enable or disable IPv6 for APISIX
    etcd:
      volumePermissions: false # Set to true if permission issues arise in your environment
```
8. Global Proxy Configuration
Change: Global proxy configuration support has been introduced to ensure that containers and pods can connect to external services using the appropriate proxy settings.
Action: Add the following proxy configuration to your inventory file under the general settings:
```yaml
# Proxy environment variables for accessing external resources
inv_proxy:
  http_proxy: "http://user:password@proxy:3128"
  https_proxy: "http://user:password@proxy:3128"
  no_proxy: "localhost,127.0.0.1,10.1.0.0/16,10.152.183.0/24"
```
9. Manual cleanup of redirect URLs in geostack client
Change: A new installation of the v1.1 platform has a reduced number of redirect URLs in the `geostack` client in Keycloak. All former URLs containing `.pre.{{DOMAIN}}` are no longer needed.
Action: To avoid unwanted deletion of URLs, we decided not to automate the removal. Please log in to Keycloak, go to the realm that was created for your platform, and open the `geostack` client. On the main page, the list of redirect URLs is shown. You can delete all URLs that contain `.pre.{{DOMAIN}}` in the name. Be careful if you entered additional URLs for other use cases: delete only the `pre` ones.
The standard platform only needs these:

```
"https://oauth.geoportal.{{ DOMAIN }}/*",
"https://geoportal.{{ DOMAIN }}/*"
```
If you find any additional necessary changes, please let us know.