Data Migration in ADF 1.8.2
20/04/2026
This section covers the procedure of migrating ADF data in ADF 1.8.2. If you are performing a fresh installation, refer to ADF Installation & Upgrade.
Before the data migration begins, ensure that both the Graph Modeling (formerly PoolParty) 10 and ADF Docker infrastructures are ready.
Install Graph Modeling via Docker. Refer to the official PoolParty 10 Installation Guide for the base environment setup.
Integrate the ADF service into your Graph Modeling deployment:
Use addons.yaml to define the ADF service. Refer to the Docker compose file for PoolParty for more information.
Before starting the services, copy the nginx configuration files from files/nginx/addons to files/nginx/includes/extra_includes. These files will expose the services through nginx on their respective context paths.
Verify that the ADF container can communicate with the new Graph Modeling instance.
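Tying the steps above together, a minimal addons.yaml service entry might look like the sketch below. The image reference, port, volume name, network name, and environment variable names are illustrative assumptions, not the shipped PoolParty defaults; take the actual values from the Docker compose file for PoolParty.

```yaml
# Hypothetical addons.yaml fragment -- all names and values here are
# assumptions for illustration, not the actual distribution values.
services:
  adf:
    image: example-registry/adf:1.8.2    # assumed image reference
    restart: unless-stopped
    environment:
      - ADF_DB_PATH=/var/lib/adf/adf.db  # assumed variable name
    volumes:
      - adf_data:/var/lib/adf
    networks:
      - poolparty                        # assumed network name

volumes:
  adf_data:
```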
If you used default variables in the older ADF version, they are likely already present in the main Graph Modeling .env file. To confirm this, review the environment section for the ADF service within the addons.yaml file. If you had custom settings, compare your new .env with the old adf.properties file.
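One way to spot custom settings that still need porting is to check every key from the old adf.properties against the new .env. The sketch below assumes the common convention that property keys map to upper-case, underscore-separated variable names; the sample keys are made up for illustration.

```shell
# Create sample files standing in for the real adf.properties and .env
cat > adf.properties <<'EOF'
adf.db.path=/var/lib/adf/adf.db
adf.custom.timeout=30
EOF
cat > .env <<'EOF'
ADF_DB_PATH=/var/lib/adf/adf.db
EOF

# Report every old property key that has no upper-cased counterpart in .env
cut -d= -f1 adf.properties | tr 'a-z.' 'A-Z_' | while read -r key; do
  grep -q "^${key}=" .env || echo "missing: ${key}"
done
```

For the sample data this prints `missing: ADF_CUSTOM_TIMEOUT`, flagging the custom setting that has to be carried over by hand.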
Now that the environment and the infrastructure have been set up, we can migrate the ADF data to the Dockerized environment. There are two options to do this:
Option 1: Bind-mount a host directory. This option allows the container to read the .db file directly from a specific folder on your host machine.
Move your original .db database file to a new host directory (for example, ./adf_data).
Rename the database file, for example to adf.db.
Update addons.yaml to map the volume:

volumes:
  - type: bind
    source: ./adf_data
    target: /var/lib/adf
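The host-side part of these steps can be scripted as follows. The directory and file names mirror the examples above; changing the owner to the container UID is left out here because it requires root (see the permissions note under Option 2).

```shell
# Prepare the host directory for the bind mount (path from the example above)
mkdir -p ./adf_data

# Stand-in for your original database file; use your real .db file instead
printf 'legacy data' > ./legacy.db

# Move and rename it as described in the steps above
mv ./legacy.db ./adf_data/adf.db

# Make sure the container user can read and write it
chmod u+rw,g+rw,o+r ./adf_data/adf.db
ls -la ./adf_data
```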
Option 2: Copy into a Docker-managed volume. Use this to "inject" your old database into a clean Docker-managed volume.
Stop and remove the existing ADF container:
docker compose -f docker-compose.yaml -f addons.yaml stop adf
docker compose -f docker-compose.yaml -f addons.yaml rm -f adf
Populate the volume using a temporary container (adjust paths as necessary):
docker run --rm -u 0 \
  -v pp10_adf_data:/to \
  -v /path/to/old/adf.db:/from_db:ro \
  alpine sh -c 'cp -a /from_db /to/adf.db; chown -R 1001:1001 /to; chmod -R u+rwX,g+rwX,o+rX /to'
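If you want to rehearse the copy logic before touching the real volume, the same cp/chmod sequence can be run locally. The chown step is omitted here because it needs root outside a container (inside the alpine container it runs as root via -u 0).

```shell
# Local dry run of the volume-population step: /to and /from_db from the
# docker run command above are replaced by paths in the current directory.
mkdir -p ./to
printf 'dummy db' > ./from_db
cp -a ./from_db ./to/adf.db
chmod -R u+rwX,g+rwX,o+rX ./to
ls -la ./to
```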
Verify the migration:
docker run --rm -v pp10_adf_data:/to alpine ls -la /to
Important
If you encounter write errors, ensure the container user matches the volume ownership. Running chown -R 1001:1001 (recursively changing the owner) generally resolves permission mismatches for the standard ADF image.
Start the service:
docker compose -f docker-compose.yaml -f addons.yaml up -d adf
Run the following to ensure the database is mounted correctly and permissions are applied:
docker compose -f docker-compose.yaml -f addons.yaml exec adf sh -lc 'id; ls -la /var/lib/adf'
Test your migration by navigating to http://<your-server>/ADF. You should be redirected to the Keycloak login page. Log in and verify that all legacy ADF data, projects, and settings have been migrated successfully.