I am trying to create a container image for executing CI jobs in Kubernetes with GitHub Actions Runner Controller (ARC).
However, I am running into issues when building an application that uses Testcontainers.
I have tried my best to put together a reproducible example.
Initial version: https://github.com/jntille/arc-runner/tree/main/images/ubuntu-podman
podman build -t arc-runner:0.1 -f ./Dockerfile
podman run -it arc-runner:0.1 /bin/bash
# launch podman service
./pre-start.sh
The CLI seems to work as expected. For example, docker login --verbose docker.io shows that podman login works and that the auth file is located at $XDG_RUNTIME_DIR/containers/auth.json.
Similarly, docker pull --log-level debug docker.io/mongo:5 and docker pull mongo:5 are able to pull images from the registry.
Communication with the podman service/API via the Unix socket podman.sock also appears to work. Let's try building a sample application using Testcontainers.
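To illustrate the socket check, roughly (a sketch, not part of the repo; assumes the service was started via pre-start.sh and that curl is available):

```shell
# Talk to the Podman service's Docker-compatible REST API over the Unix
# socket, the same way Testcontainers does. The path below matches the
# resolved dockerHost from the logs.
SOCK="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
if [ -S "$SOCK" ]; then
  # /_ping answers "OK" when the service is healthy
  curl -s --unix-socket "$SOCK" http://d/_ping
else
  echo "no podman socket at $SOCK"
fi
```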
Looking at the logs, we see that Testcontainers was able to discover the podman socket and recognizes the environment variables set in the Dockerfile.
11:07:45.081 [Test worker] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/1001/podman/podman.sock
11:07:45.084 [Test worker] INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost
11:07:45.314 [Test worker] INFO org.testcontainers.DockerClientFactory - Connected to docker:
Server Version: 4.9.3
API Version: 1.41
Operating System: ubuntu
Total Memory: 15620 MB
...
11:07:45.081 [Test worker] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/1001/podman/podman.sock
11:07:45.321 [Test worker] WARN org.testcontainers.utility.ResourceReaper - -Reference-Id=0xc000128000
********************************************************************************
Ryuk has been disabled. This can cause unexpected behavior in your environment.
********************************************************************************
11:07:45.328 [Test worker] INFO org.testcontainers.DockerClientFactory - Checking the system...
11:07:45.330 [Test worker] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0
Podman uses $XDG_RUNTIME_DIR/containers/auth.json for credentials, but Testcontainers looks in the Docker default location.
This warning can be silenced via ln -sf ${XDG_RUNTIME_DIR}/containers/auth.json ~/.docker/config.json.
org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: postgres:latest, configFile: /home/runner/.docker/config.json, configEnv: DOCKER_AUTH_CONFIG).
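The symlink workaround, spelled out (paths assume a rootless setup with XDG_RUNTIME_DIR set; the ~/.docker directory may not exist yet):

```shell
# Point the Docker CLI config location at podman's auth file so
# Testcontainers' RegistryAuthLocator finds the registry credentials.
: "${XDG_RUNTIME_DIR:=/run/user/$(id -u)}"
mkdir -p "$HOME/.docker"
ln -sf "$XDG_RUNTIME_DIR/containers/auth.json" "$HOME/.docker/config.json"
```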
Testcontainers appears to be able to pull the image defined in the Java application, as podman images shows it in the local image storage.
It looks like it was able to start the container, as we can see logs coming from Postgres.
But then something happens that causes the test to fail and the container to terminate.
Trying to pull docker.io/library/postgres:16-alpine...
Getting image source signatures
Copying blob 9824c27679d3 done |
...
11:08:13.220 [Test worker] ERROR tc.postgres:16-alpine - Could not start container
org.testcontainers.shaded.org.awaitility.core.ConditionTimeoutException: org.testcontainers.containers.GenericContainer expected the predicate to return <true> but it returned <false> for input of <InspectContainerResponse(
...
11:08:13.264 [Test worker] ERROR tc.postgres:16-alpine - Log output from the failed container:
The files belonging to this database system will be owned by user "postgres".
...
CustomerControllerTest > initializationError FAILED
org.testcontainers.containers.ContainerLaunchException: Container startup failed for image postgres:16-alpine
at app//org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:359)
at app//org.testcontainers.containers.GenericContainer.start(GenericContainer.java:330)
at app//com.testcontainers.demo.CustomerControllerTest.beforeAll(CustomerControllerTest.java:32)
Caused by:
org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
at app//org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
at app//org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344)
... 2 more
Caused by:
org.testcontainers.containers.ContainerLaunchException: Could not create/start container
at app//org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:563)
at app//org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354)
at app//org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 3 more
Caused by:
org.testcontainers.shaded.org.awaitility.core.ConditionTimeoutException: org.testcontainers.containers.GenericContainer expected the predicate to return <true> but it returned <false> for input of <InspectContainerResponse
Watching podman events confirms that the container is killed:
UTC image pull 3b527b2c105a6f2d89fcb81fd6c99a09df64651f99cb4a727282ee57dced5e5c docker.io/library/postgres:16-alpine
UTC volume create b849fa9d700db32863d5b9274009054230344a7cfa7c397b4f0e27b8c0fe7972
UTC container create a452371ad08fd77f2d88e9c6277b3d97a4c88fd8944bfe3db611a088541f2e14 (image=docker.io/library/postgres:16-alpine, name=gallant_varahamihira, org.testcontainers=true, org.testcontainers.lang=java, org.testcontainers.sessionId=95388cac-63dd-4156-a095-0ebe670f523c, org.testcontainers.version=1.19.8)
UTC container init a452371ad08fd77f2d88e9c6277b3d97a4c88fd8944bfe3db611a088541f2e14 (image=docker.io/library/postgres:16-alpine, name=gallant_varahamihira, org.testcontainers.version=1.19.8, org.testcontainers=true, org.testcontainers.lang=java, org.testcontainers.sessionId=95388cac-63dd-4156-a095-0ebe670f523c)
UTC container start a452371ad08fd77f2d88e9c6277b3d97a4c88fd8944bfe3db611a088541f2e14 (image=docker.io/library/postgres:16-alpine, name=gallant_varahamihira, org.testcontainers=true, org.testcontainers.lang=java, org.testcontainers.sessionId=95388cac-63dd-4156-a095-0ebe670f523c, org.testcontainers.version=1.19.8)
UTC container kill a452371ad08fd77f2d88e9c6277b3d97a4c88fd8944bfe3db611a088541f2e14 (image=docker.io/library/postgres:16-alpine, name=gallant_varahamihira, org.testcontainers.lang=java, org.testcontainers.sessionId=95388cac-63dd-4156-a095-0ebe670f523c, org.testcontainers.version=1.19.8, org.testcontainers=true)
UTC container died a452371ad08fd77f2d88e9c6277b3d97a4c88fd8944bfe3db611a088541f2e14 (image=docker.io/library/postgres:16-alpine, name=gallant_varahamihira, org.testcontainers.lang=java, org.testcontainers.sessionId=95388cac-63dd-4156-a095-0ebe670f523c, org.testcontainers.version=1.19.8, org.testcontainers=true)
UTC container cleanup a452371ad08fd77f2d88e9c6277b3d97a4c88fd8944bfe3db611a088541f2e14 (image=docker.io/library/postgres:16-alpine, name=gallant_varahamihira, org.testcontainers.lang=java, org.testcontainers.sessionId=95388cac-63dd-4156-a095-0ebe670f523c, org.testcontainers.version=1.19.8, org.testcontainers=true)
UTC container remove a452371ad08fd77f2d88e9c6277b3d97a4c88fd8944bfe3db611a088541f2e14 (image=docker.io/library/postgres:16-alpine, name=gallant_varahamihira, org.testcontainers=true, org.testcontainers.lang=java, org.testcontainers.sessionId=95388cac-63dd-4156-a095-0ebe670f523c, org.testcontainers.version=1.19.8)
UTC volume remove b849fa9d700db32863d5b9274009054230344a7cfa7c397b4f0e27b8c0fe7972
However, when piping the output of gradle somewhere else I also noticed an additional line:
Port mappings have been discarded as one of the Host, Container, Pod, and None network modes are in use
This appears to be similar to issue #24988.
I copied the containers.conf from https://github.com/containers/image_build/tree/main/podman.
My understanding is that host networking interferes with port mapping, and this difference from Docker causes Testcontainers to fail.
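For reference, my understanding of the relevant containers.conf settings is roughly this (a sketch based on the containers.conf(5) documentation; exact keys and defaults may differ between Podman versions):

```toml
# ~/.config/containers/containers.conf (or /etc/containers/containers.conf)
[containers]
# "host" shares the runner's network namespace and silently discards -p
# port mappings; "private" gives each container its own namespace, which
# Testcontainers needs to discover mapped ports.
netns = "private"

[network]
# Rootless user-mode networking backend; both options need /dev/net/tun.
default_rootless_network_cmd = "pasta"
```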
Changing netns inside /etc/containers/containers.conf to something other than host triggers a new error:
com.github.dockerjava.api.exception.InternalServerErrorException: Status 500: {"cause":"/usr/bin/slirp4netns failed: \"open(\\\"/dev/net/tun\\\"): No such file or directory
It looks like both pasta and slirp4netns need /dev/net/tun, but the unprivileged user is not able to create the device inside the container.
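One direction to explore (an untested sketch with hypothetical names): expose the node's /dev/net/tun to the runner pod so pasta/slirp4netns can open it. Depending on the container runtime, the pod may additionally need a device-cgroup allowance for the character device; a device plugin such as smarter-device-manager would be the cleaner alternative to a raw hostPath mount.

```yaml
# Fragment of an ARC runner pod template (names are illustrative):
spec:
  containers:
  - name: runner
    volumeMounts:
    - name: dev-net-tun
      mountPath: /dev/net/tun
  volumes:
  - name: dev-net-tun
    hostPath:
      path: /dev/net/tun   # requires the tun module to be loaded on the node
      type: CharDevice
```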
At this point, I'd like to stop guessing at the podman configuration without understanding it.
It would be great if someone had the time to explain the problem and how to fix it.
I would also welcome any other suggestions to improve the overall setup.
Thanks for your help!