Understanding Dependency Managers, Containers and Bundling
2025-11-06
Modern software systems rely on multiple tools to manage source code, dependencies, builds, and deployments. This post discusses how these fit together.
Dependency Manager
A dependency manager handles tracking, resolving, and fetching the libraries or modules that your project needs.
Examples include Pip or Poetry for Python, Conan for C++, and Cargo for Rust.
These tools typically use an artifact repository or binary cache (e.g., Artifactory, crates.io) to download prebuilt dependencies.
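To make "resolving" concrete, here is a toy sketch in Python (purely illustrative, not how any real resolver works internally) of picking the newest available version that satisfies a version constraint:

```python
# Toy illustration of dependency resolution: pick the newest available
# version satisfying ">= minimum, < below". Real resolvers (pip, Conan,
# Cargo) solve this across a whole dependency graph, not one package.

def parse(version):
    """Turn "1.7.2" into the comparable tuple (1, 7, 2)."""
    return tuple(int(part) for part in version.split("."))

def resolve(available, minimum, below):
    """Return the newest version v with minimum <= v < below, or None."""
    candidates = [v for v in available
                  if parse(minimum) <= parse(v) < parse(below)]
    return max(candidates, key=parse, default=None)

print(resolve(["1.6.0", "1.7.2", "2.0.0"], "1.7.0", "2.0.0"))  # 1.7.2
```

Real tools do this for every package in the graph simultaneously, while also honoring lockfiles and transitive constraints.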
Although one can technically use a Dockerfile for compiling and managing dependencies, it’s not recommended, as containers are better for packaging and deployment, not for dependency resolution during development.
# conanfile.py: a Conan template for a C++ project
from conan import ConanFile
from conan.tools.cmake import cmake_layout, CMakeToolchain, CMakeDeps, CMake
from conan.tools.files import copy, collect_libs
import os


class CPPTemplateConan(ConanFile):
    name = "cpp_template"
    version = "0.1.0"
    license = "Apache-2.0"
    author = ""
    url = ""
    description = "please enter description here"
    topics = ["dependency manager", "conan"]

    # Package type and settings
    package_type = "library"  # changed from application to library
    settings = "os", "compiler", "build_type", "arch"

    # Options: each option lists its allowed values
    options = {
        "shared": [True, False],
        "fPIC": [True, False]
    }
    default_options = {
        "shared": True,
        "fPIC": True
    }

    # Dependencies
    def requirements(self):
        self.requires("openssl/1.0.2u")
# pyproject.toml: the equivalent dependency declaration for a Python project
[project]
name = "demo"
version = "0.1.0"
dependencies = [
    "numpy (>=2.3.4,<3.0.0)",
    "scikit-learn (>=1.7.2,<2.0.0)"
]
# Cargo.toml: the equivalent for a Rust project
[package]
name = "demo"
version = "0.1.0"
edition = "2024"

[dependencies]
openssl = "0.10.75"
# NOTE: DO NOT USE THIS FILE. This is a sample Dockerfile demonstrating the
# problems that arise from using the container approach as a dependency
# manager; it is not meant for real use.
# If artifacts must be shared via containers, a better approach is to pull
# prebuilt binaries from an artifact repository and use the container only to
# create the deployable artifact.
FROM ubuntu:22.04 AS builder
ARG OPENSSL_TAG=OpenSSL_1_1_1-stable
ENV DEBIAN_FRONTEND=noninteractive
# Install the toolchain and build dependencies (here, gcc via build-essential).
# If binaries must target multiple architectures, a separate layer has to be
# maintained per cross compiler and the binaries rebuilt for each.
# Drawback: changing the version of a dependency such as OpenSSL forces the
# complete layer to be recompiled.
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    cmake \
    git \
    python3 \
    python3-pip \
    ca-certificates \
    perl \
    wget \
    pkg-config \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /work
COPY . /work
# Check out and compile OpenSSL. Default options are used for demonstration
# purposes only; by default it installs under /usr/local.
RUN git clone --depth 1 --branch "${OPENSSL_TAG}" https://github.com/openssl/openssl.git /work/openssl \
    && cd /work/openssl && ./config && make -j4 && make install
# Build the sample application with OpenSSL as a dependency, assuming /work is
# mounted or the sample app is checked out into /work/sampleapp.
WORKDIR /work/sampleapp
RUN mkdir -p /work/sampleapp/build && cd /work/sampleapp/build \
    && cmake .. -DCMAKE_BUILD_TYPE=Release -DOPENSSL_ROOT_DIR=/usr/local -DOPENSSL_LIBRARIES=/usr/local/lib \
    && cmake --build . --config Release -- -j"$(nproc)"

# Runtime stage: copy the built binary and its libraries out of the builder.
FROM scratch
COPY --from=builder /work/sampleapp/build/sample_app /usr/bin/sample_app
COPY --from=builder /usr/local/lib/ /usr/lib/
ENV LD_LIBRARY_PATH=/usr/lib
ENTRYPOINT [ "/usr/bin/sample_app" ]
As the Dockerfile example shows, handling dependencies manually inside a container build quickly becomes cumbersome and does not scale.
Package Manager
A package manager handles software lifecycle management, i.e., installation, upgrade, rollback, and removal of binary packages or libraries.
Examples:
System-level: rpm, deb, snap
Language-level: cargo, poetry, conan
Cross-language/functional: nix
Many modern tools (like Conan, Poetry, Cargo, and Nix) combine both dependency management and packaging functionality.
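As a rough sketch of the lifecycle a package manager implements, the hypothetical model below keeps every installed version per package and supports upgrade and rollback (loosely mirroring what rpm transactions, snap revisions, or Nix generations provide; the class and its methods are invented for illustration):

```python
# Toy model of package lifecycle management: install, upgrade, rollback.
# Purely illustrative; real package managers also manage files, scripts,
# signatures, and dependency closures.

class PackageManager:
    def __init__(self):
        self.history = {}  # package name -> list of installed versions

    def install(self, name, version):
        """Install (or upgrade to) a version, keeping prior ones on record."""
        self.history.setdefault(name, []).append(version)

    def current(self, name):
        versions = self.history.get(name, [])
        return versions[-1] if versions else None

    def rollback(self, name):
        """Drop the newest version and return to the previous one."""
        versions = self.history.get(name, [])
        if len(versions) > 1:
            versions.pop()
        return self.current(name)

pm = PackageManager()
pm.install("openssl", "1.1.1")
pm.install("openssl", "3.0.0")   # upgrade
print(pm.current("openssl"))     # 3.0.0
print(pm.rollback("openssl"))    # 1.1.1
```

The same history-of-versions idea is what makes Nix's atomic rollbacks (discussed below) possible: nothing is overwritten in place.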
Bundling and Build Flow
Bundling is the process of combining binaries, dependencies, and configuration into deployable artifacts: e.g., .deb or .rpm packages, container images, an archive of artifacts, or even a single static binary.
Static vs Shared Dependencies
Static binary: when building the application, all dependencies are compiled directly into a single binary, so no external runtime dependencies are required. The trade-off is that binary size grows manyfold. For example, if OpenSSL (~2 MB) is statically linked into 10 applications, roughly 20 MB of disk space is consumed in total.
Shared libraries: libraries are dynamically linked at runtime. This saves disk space but requires library versions to be consistent across the system. The same 10 applications using a single shared library need only ~2 MB.
Problems arise when different applications need different versions of a shared library, or different runtime environments altogether.
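The arithmetic behind the OpenSSL example above can be worked through in a few lines:

```python
# Disk-space trade-off from the text: ~2 MB library, 10 applications.
lib_mb = 2
apps = 10

static_total = lib_mb * apps   # every app embeds its own copy
shared_total = lib_mb          # one copy on disk, linked at runtime

print(static_total)  # 20 (MB when statically linked)
print(shared_total)  # 2  (MB with one shared library)
print(static_total - shared_total)  # 18 (MB saved by sharing)
```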
Containers and Container Management Tools
Container management tools such as Docker or Podman run applications in isolated environments, providing a cleaner environment, improved security, and easier lifecycle management of applications.
While containers simplify package management by encapsulating everything in one image, they also introduce overhead in resources and maintenance. For example, updating a single dependency often requires rebuilding the full image, base image versions must be managed carefully, and stripped-down images are preferred to minimize size and attack surface.
Nix Approach
Nix is a package manager that keeps every software package and its dependencies in a special folder called the Nix store (found at /nix/store). It makes software installation, configuration, and management reproducible and reliable across different systems. I have used Nix for many activities, ranging from building kernel images and QEMU images targeting different architectures to configuring development environments and building and packaging applications, and it is one of my preferred tools for development work. Thanks to the Haskell developers who suggested using Nix at one of the functional programming conferences around 2016 or 2017.
Some of the key features are:
1. Nix builds are reproducible and deterministic, i.e., for the same inputs, Nix always produces the same outputs.
2. Nix packages don’t interfere with each other; multiple versions of the same software can coexist.
3. Nix supports atomic upgrades and rollbacks: you can upgrade or go back to a previous version at any time.
4. The build, environment, and configuration can be written as a Nix expression or flake, making them easy to track and portable, since they can be pushed to a repository.
5. For untrusted code, or to replicate a full OS, NixOS containers (nixos-container) can be used, or a QEMU VM can be built and run separately (nixos-rebuild build-vm). Nix containers deserve a separate post, which I will write up later.
Nix also helps you track dependencies clearly.
Every binary is stored under /nix/store/
One cool feature I use in Nix is visualizing the dependency graph. For example, to view the dependency graph of Firefox, we can run the nix-store command below, which generates the dependency graph in .dot format; Graphviz then converts the .dot file to SVG.
$ nix-store --query --graph /nix/store/im4567qn005hc2vfkbxw7c31qfd1d0p2-firefox-143.0.1 > ~/firefox143.dot
$ dot -Tsvg ~/firefox143.dot -o firefox.svg
There are also third-party scripts like nix-visualize (https://github.com/craigmbooth/nix-visualize) which automate the same.
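Since the .dot output is plain text, it is also easy to post-process yourself. The sketch below is a toy that assumes the simple `"a" -> "b";` edge lines Graphviz uses (the sample graph and package names are made up) and counts direct dependencies per store path:

```python
# Count outgoing edges ("direct dependencies") in a Graphviz .dot file,
# such as the one produced by `nix-store --query --graph`.
import re
from collections import Counter

# Hypothetical sample standing in for the real nix-store output.
sample_dot = '''digraph G {
"firefox-143.0.1" -> "glibc-2.39";
"firefox-143.0.1" -> "nss-3.101";
"nss-3.101" -> "glibc-2.39";
}'''

edges = re.findall(r'"([^"]+)"\s*->\s*"([^"]+)"', sample_dot)
deps = Counter(src for src, _ in edges)

for pkg, count in deps.most_common():
    print(pkg, count)
# firefox-143.0.1 2
# nss-3.101 1
```

In practice you would read the file written by nix-store instead of the inline sample; the parsing stays the same.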
NixOS
NixOS is a Linux distribution built on the Nix package manager.
If you look at the contents of /bin or /usr/bin on NixOS, they contain references to the Nix store; the actual binaries live under /nix/store/.

I have been using NixOS primarily for my development activities (in a separate VM) for quite a few years. The good part is that I can reproduce my complete OS environment, including applications, by running a single command (nixos-rebuild switch) against my configuration.nix, and I have complete control over each and every piece of software installed on it.
There are also some downsides to using Nix; below are a few of them:
1. Nix does not follow the Filesystem Hierarchy Standard (FHS), i.e., files are not placed in conventional folders like /usr, /bin, /sbin, or /lib, which is problematic for some binary builds and applications that depend on the FHS, e.g., Yocto toolchains. Even though Nix addresses the issue with buildFHSUserEnv, it is not as straightforward as on standard distributions.
2. Boot entries accumulate with each new system configuration, and the garbage collector has to be called explicitly to remove old generations and unused dependencies.
3. There is more than one way to perform a workflow activity, which can be confusing: channel-based, overlay-based, or flake-based. Changes to the package structure in the Nix repository are not always consistent, and the workarounds required for FHS issues eat up time.
Summary
There is more than one approach to dependency management, packaging, and bundling, and some tools, like containers or Nix, can perform more than one of these activities. The right approach depends on the target environment and its constraints: whether the application runs in an isolated (third-party) environment, natively on an embedded device, or as a cloud deployment; whether the bundle has to be integrated into a larger toolchain; whether deployment size matters; and so on. It is always a good idea to maintain a separate artifact repository or binary cache for dependencies, along with a practical approach for bundling and distribution.
References
1. Eelco Dolstra, “The Purely Functional Software Deployment Model”
2. NixOS, https://nixos.org
3. nix-visualize, https://github.com/craigmbooth/nix-visualize