So I've been wrapping a troublesome service in a Docker container today. This service makes all sorts of assumptions about "running the show" on the server, all of them wrong. So far it's been going pretty well. I'd already done most of the isolation work previously when I integrated it directly into a product, so it was mostly a matter of replicating that inside the Dockerfile. However, one of the things the service requires is access to a host "device" via an executable which is normally installed from a .deb that includes both a kernel module and a userspace client.
It seems the correct way to handle this would be to expose the device inodes inside the container: install the package on the host, then map the device nodes into the container. But since the container isn't running the same OS as the host (that being one of the points of using Docker), it's difficult to install the .deb inside the container (it can't pull kernel headers matching the running kernel, as they aren't available on the container's OS)... so I suppose I need to just manually copy the userspace binaries from the host into the container? Seems kinda grotty, but a bit of googling doesn't turn up another obvious solution.
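A rough sketch of what that ends up looking like, with the caveat that the device path (`/dev/acme0`), module name, binary name, and image name are all hypothetical stand-ins for whatever the actual .deb ships. The kernel module is loaded on the host, never in the container; the container only gets the userspace binary and the device node:

```dockerfile
# Dockerfile sketch: ship only the userspace client, lifted out of
# the host's installed .deb. The kernel module stays on the host.
FROM debian:bookworm-slim
COPY acme-client /usr/local/bin/acme-client
CMD ["acme-client"]
```

```shell
# On the host: load the module (creates /dev/acme0), then pass the
# device node through to the container with --device.
sudo modprobe acme                               # hypothetical module
docker build -t acme-svc .
docker run --device /dev/acme0:/dev/acme0 acme-svc
```

The `--device` flag handles the cgroup device permissions for you, so there's no need for the much blunter `--privileged`.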
After I deal with that I can look into whether the container is going to support multicast reasonably well. It seems I may need to use "pipework" to get around multicast limitations.
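For reference, pipework works by adding an extra interface inside the container, bridged straight onto a host NIC, which sidesteps the docker0 NAT that tends to break multicast. A hedged sketch of the usual invocation (interface name, address, and image name are placeholders):

```shell
# Start the container, then graft on a bridged interface with pipework.
CID=$(docker run -d acme-svc)          # hypothetical image name

# pipework <host-interface> <container> <ip>/<prefix>
# Adds an eth1 inside the container, bridged to the host's eth0.
sudo pipework eth0 "$CID" 192.168.1.10/24
```

The blunt alternative is `docker run --net=host`, which gives the container the host's network stack (multicast included) at the cost of losing the network isolation.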