
Docker-MC-Proxy: Running a Minecraft Server Network

By ice (@ice)
TL;DR: Docker-mc-proxy runs Minecraft proxies like BungeeCord and Velocity in a container, letting you split players across multiple backend servers. Perfect for growing communities or high-traffic networks that need load balancing and redundancy.
GitHub · Minecraft community project

docker-mc-proxy (itzg/docker-mc-proxy)

Docker image that provides a choice of Minecraft proxies, such as BungeeCord and Velocity

Star on GitHub ↗
⭐ 320 stars · 💻 Shell · 📜 Apache-2.0

Managing multiple Minecraft servers feels like overkill until it's not. Maybe you've outgrown a single server, or you want to split players between a survival world and a creative realm without forcing everyone to choose. A proxy sits between players and your actual servers, handling the traffic and shuffling players where they need to go. Docker-mc-proxy does exactly that, and it's dead simple to set up if you know Docker basics.

What This Project Does

Docker-mc-proxy is a containerized image that runs a Minecraft proxy server. It supports BungeeCord (the most common choice), Velocity (the newer, faster alternative), and Waterfall (a community fork of BungeeCord with more features). The proxy acts as a middleman: players connect to the proxy's address, and it forwards them to whichever backend server you've configured.

The image comes with built-in health checks using mc-monitor, so Docker and your orchestration tools know immediately if something's wrong. You're not squinting at logs wondering if the process is still alive.

It's one of the itzg suite of Docker images (320 stars on GitHub), which tells you it's not some abandoned one-off project. The maintainer keeps it updated with Java 25 support and regular bug fixes.


Why You'd Use This

Single-server setups break down once you hit a player threshold. A 1.20 world with 50 players crawling around starts lagging. You could hunker down and optimize the heck out of your server config, sure. Or you could split: survival on one server, minigames on another, and let the proxy handle routing. Players don't even notice the seam.

There's also redundancy. If your main server crashes, you can spin up a backup instance and the proxy still works. You're not telling 100 players "the server's dead, see you later."

Hosting providers love Docker deployments because they're predictable and resource-efficient. If you're running this on shared infrastructure or a VPS, containerization means you're not accidentally hogging RAM that breaks someone else's app.


Installation and Basic Setup

The simplest way to start is with Docker Compose. Here's a working example:

```yaml
services:
  mc:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"
      ONLINE_MODE: "FALSE"
    volumes:
      - mc-data:/data
  proxy:
    image: itzg/mc-proxy
    environment:
      TYPE: BUNGEECORD
      CFG_MOTD: "Powered by Docker"
    ports:
      - "25565:25577"
    volumes:
      - ./config.yml:/config/config.yml
      - proxy-data:/server

volumes:
  mc-data:
  proxy-data:
```

The critical bit: `ONLINE_MODE=FALSE` on the backend server. Proxies need this because they're handling authentication, not the individual servers. Without it, players get kicked with a "not authenticated" error.

The proxy maps port 25565 (the standard Minecraft port) to its internal 25577. Players connect to your server IP on 25565, and the proxy does the rest.

If you want to customize the server MOTD (that description players see in their server list), use the `CFG_MOTD` environment variable, or better yet, grab our Minecraft MOTD Creator to design it visually and paste it in. You can even use color codes.

Now you need to configure where traffic goes. Create a `config.yml` file that tells the proxy which backend servers exist:

```yaml
servers:
  survival:
    address: mc:25565
    restricted: false
  creative:
    address: creative-server:25565
    restricted: false
listeners:
  - query_port: 25577
    motd: "My Minecraft Network"
    tab_list: GLOBAL_PING
    default_server: survival
```

Players land on survival by default. They can hop between worlds using in-game commands.


Key Features That Matter

Built-in health checks are legitimately impressive. Docker can see whether the proxy is responsive without you writing custom scripts. Run `docker ps` and you'll see a `(healthy)` status instead of just `(up)`. This matters if you're running Kubernetes or any orchestration platform where dead containers get replaced automatically.
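If you want other containers to wait on that health status, Compose can consume it directly. A minimal sketch, assuming the backend image's built-in health check is active (service names match the earlier example; `depends_on` with a condition needs a reasonably recent Compose):

```yaml
services:
  mc:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"
      ONLINE_MODE: "FALSE"
  proxy:
    image: itzg/mc-proxy
    environment:
      TYPE: BUNGEECORD
    # Start the proxy only once the backend reports (healthy)
    depends_on:
      mc:
        condition: service_healthy
```

This keeps the proxy from advertising a server that isn't actually ready to accept players yet.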

Memory management is more flexible than you'd expect. The default is 512m, but you can tune it or let the JVM auto-size based on your container memory limit. If you're squeezing this onto a tiny VPS, there are plenty of knobs to turn.
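As a sketch of what that tuning looks like in Compose (the `MEMORY` variable sets the proxy's heap; the exact values here are just illustrative for a small VPS):

```yaml
services:
  proxy:
    image: itzg/mc-proxy
    environment:
      TYPE: VELOCITY
      # Heap size for the proxy JVM; the image defaults to 512m
      MEMORY: "1G"
    # Hard cap on the container so the proxy can't eat the whole host;
    # leave MEMORY unset if you'd rather let the JVM size itself to this
    mem_limit: 1536m
```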

Support for custom proxy JAR files means you're not locked into the three bundled options. Set `TYPE=CUSTOM`, point `BUNGEE_JAR_URL` to wherever your JAR lives, and you're done. Some communities run forks with custom features; this accounts for that.
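A custom-JAR setup might look like this (the URL is a placeholder; point it at wherever your fork's build actually lives):

```yaml
services:
  proxy:
    image: itzg/mc-proxy
    environment:
      # Skip the bundled BungeeCord/Velocity/Waterfall downloads
      TYPE: CUSTOM
      # Placeholder URL for your fork's proxy JAR
      BUNGEE_JAR_URL: "https://example.com/downloads/my-proxy-fork.jar"
```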

The image syncs configuration from `/config` on startup. If you're updating your proxy config via Docker volumes (which you should be), it picks up changes without needing to rebuild the image. Respects your config over anything bundled.


Gotchas and Common Mistakes

The biggest one: forgetting `ONLINE_MODE=FALSE` on backend servers. You'll see "not authenticated" errors and think something's broken when really you've just missed one flag.

Port forwarding confusion comes up too. The container internally uses 25577, but you can map it to whatever external port you want. If you do `ports: "25565:25577"`, players connect to 25565, not 25577. Don't accidentally open 25577 on your firewall and wonder why nobody can join.

Some people spin up multiple backend servers but don't realize the proxy needs to reach them over the network. If your servers are in separate containers, use service names (like `mc:25565` in compose) rather than localhost. Docker's internal DNS handles routing.
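For example, the `creative-server:25565` address from the earlier `config.yml` only resolves if a Compose service with that name exists. A minimal second backend might look like this (the `MODE` variable is how the itzg server image sets the game mode):

```yaml
services:
  creative-server:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"
      ONLINE_MODE: "FALSE"   # the proxy handles authentication
      MODE: creative
```

Docker's internal DNS maps the service name to the container, so the proxy's config never needs a hardcoded IP.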

One more thing that's easy to forget: if you're upgrading the proxy JAR version, set `BUNGEE_JAR_REVISION` to a new value to force a re-download. Otherwise the image reuses the cached JAR and you'll wonder why your fixes didn't apply.
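In practice the bump is a one-line change (URL is a placeholder; any new string for the revision works, it just has to differ from the last one):

```yaml
services:
  proxy:
    image: itzg/mc-proxy
    environment:
      TYPE: CUSTOM
      BUNGEE_JAR_URL: "https://example.com/downloads/my-proxy-fork.jar"
      # Change this value whenever the JAR at the URL changes
      BUNGEE_JAR_REVISION: "2"
```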


When Proxies Aren't the Answer

If you've got under 30 concurrent players and no performance issues, a proxy is extra complexity you don't need. Paper and Purpur can handle decent player counts with solid optimization.

Proxies also add latency. It's usually negligible on a LAN or modern internet, but in extreme cases (very old hardware, wonky networking), you might notice tick delays. Velocity is better about this than BungeeCord, but it's worth benchmarking for your setup.

If you're running a competitive PvP server where every millisecond matters, you'll want to test whether the proxy's overhead is acceptable. Most players won't notice. Hardcore PvPers will.


Alternatives Worth Considering

Geyser (also itzg-maintained) is a different beast entirely - it translates Bedrock connections to Java Edition, not load-balancing. Not a replacement, but sometimes people confuse them.

Traefik or Nginx can technically reverse-proxy Minecraft if you're running everything on Kubernetes and want unified ingress, but they're overkill. Velocity is simpler and faster for this specific use case.

If you want something even lighter, there are single-purpose proxy projects floating around on GitHub, but they're usually less maintained and miss the operational niceties (health checks, easy config reloads) that make docker-mc-proxy reliable for production.


Running This in Production

Docker makes this part smooth. Set resource limits, use restart policies, and let Docker handle failures. A typical setup reserves 512m-1g for the proxy itself unless you're routing hundreds of concurrent players.
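A production-ish Compose fragment along those lines might look like this (the specific limits are illustrative, not recommendations):

```yaml
services:
  proxy:
    image: itzg/mc-proxy
    environment:
      TYPE: VELOCITY
    restart: unless-stopped   # come back automatically after crashes and reboots
    mem_limit: 1g             # hard cap so the proxy can't starve neighbors
    cpus: "1.0"               # keep CPU usage predictable on shared hosts
```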

Log aggregation is your friend. The proxy outputs a lot of useful diagnostics. If you're running on a host with multiple services, feed those logs somewhere central (syslog, ELK, CloudWatch, whatever) so you can actually debug issues when they surface.

For the MOTD, if you're rotating seasonal messages or want something dynamic, you can use environment variable substitution. Set `REPLACE_ENV_VARIABLES=true` and reference `${SOME_VAR}` in your config file.
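A sketch of how that wires together (`SEASON_MOTD` is a hypothetical variable name; anything you export into the container works):

```yaml
services:
  proxy:
    image: itzg/mc-proxy
    environment:
      TYPE: BUNGEECORD
      # Substitute ${...} placeholders in the synced config files
      REPLACE_ENV_VARIABLES: "true"
      # Referenced as ${SEASON_MOTD} inside config.yml (hypothetical name)
      SEASON_MOTD: "Winter event is live!"
```

Swap the variable's value and restart the container, and the MOTD updates without touching the config file itself.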

Also worth noting: if you need a custom server icon, the image can download and convert one automatically. Set the `ICON` environment variable to a URL and it'll handle the 64x64 PNG conversion for you. If you already have an icon in `/server`, use `OVERRIDE_ICON=true` to replace it. You can use our Minecraft Block Search to find specific block IDs if you're building block-art server icons in creative mode first (yes, people do this).
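Put together, the icon setup is two environment variables (the URL here is a placeholder for wherever your image is hosted):

```yaml
services:
  proxy:
    image: itzg/mc-proxy
    environment:
      TYPE: BUNGEECORD
      # Placeholder URL; the image downloads and converts it to 64x64 PNG
      ICON: "https://example.com/assets/network-icon.png"
      # Replace an icon that already exists in /server
      OVERRIDE_ICON: "TRUE"
```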

itzg/docker-mc-proxy - Apache-2.0, ★320

Frequently Asked Questions

What's the difference between Velocity and BungeeCord in docker-mc-proxy?
Velocity is newer and faster, designed from the ground up for modern Java. BungeeCord is more mature and widely supported. Waterfall sits in between as a BungeeCord fork with extra features. Choose Velocity for new projects, BungeeCord if you need maximum compatibility with existing plugins.
Do I need separate servers for the proxy and backends?
No, you can run them on the same Docker host. Use Docker networks or compose services to let them communicate. The proxy and backend servers just need to reach each other over the network, which Docker handles automatically when they're in the same compose file.
Can players move between servers on the proxy?
Yes. BungeeCord and Velocity support server switching via `/server` commands. You configure which backend servers exist in the proxy's config, and players can jump between them. The proxy keeps them logged in across servers (if they're in online-mode).
Is docker-mc-proxy free and open source?
Yes, it's Apache-2.0 licensed and maintained on GitHub. It's completely free, no premium or restricted features. The project is actively maintained with regular updates for Java versions and security patches.
What happens if a backend server goes offline?
Players on the proxy will get disconnected if their server crashes, unless you configure fallback behavior. With multiple servers, players on other servers continue fine. Set up monitoring and auto-restart policies in Docker to minimize downtime.