Spark performance profiler interface showing CPU usage breakdown and thread analysis for a Minecraft server

Spark: The Performance Profiler Your Minecraft Server Actually Needs

ice
@ice
Updated
109 views
TL;DR: Spark is a lightweight performance profiler for Minecraft servers, clients, and proxies that diagnoses lag, CPU usage, and memory issues in seconds. Install the plugin, run a quick profile, and get detailed insights into exactly what's slowing down your game.
🐙 Open-source Minecraft project

lucko/spark

A performance profiler for Minecraft clients, servers, and proxies.

⭐ 1,269 stars · 💻 Java · 📜 GPL-3.0

If your Minecraft server is lagging and you've no idea why, you're flying blind. Could be CPU, could be memory, could be a plugin running hot in the background. Spark cuts through the guesswork with real-time profiling data that shows you exactly where your performance problems live.

What Spark Does (In Plain Terms)

Spark is a performance profiler that works on Minecraft clients, servers, and proxies. Think of it like opening the hood on your car to see what parts are misfiring. Instead of guessing why your server drops to 10 TPS, you run Spark, wait 30 seconds, and get a detailed breakdown of what's eating your resources.

The tool has three main capabilities:

  • CPU profiling - shows which threads and code paths are hogging processor time
  • Memory inspection - lets you peek at heap usage, take snapshots, and monitor garbage collection
  • Server health metrics - tracks TPS, tick duration, CPU usage, memory, and disk space all in one place

Running it doesn't require configuration. Drop the mod or plugin, type a command, and you're collecting data.
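For example, a minimal first session looks like this, using spark's standard profiler commands (run them from the server console or as an operator in-game):

```
# Start sampling (spark begins collecting CPU data immediately)
/spark profiler start

# ...let the server run under normal load for 30 seconds to 2 minutes...

# Stop sampling; spark replies with a link to the interactive web viewer
/spark profiler stop
```

That link is where the actual analysis happens: the console only confirms the profile was uploaded.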


Why You'd Use This

There are a few scenarios where Spark becomes invaluable.

First: your server starts dropping ticks for no obvious reason. You've disabled plugins one by one, cleared chunk caches, restarted the server. Still laggy. Spark shows you in 30 seconds that your ItemStack handling is the bottleneck, or that one plugin's async task is blocking the main thread. No more wild guesses.

Second: you're running a larger server and want to be proactive. You can leave Spark monitoring in the background, checking metrics every few minutes, and it alerts you when something goes sideways before players notice.

Third: you're dealing with a client-side performance issue. Maybe your machine tanks to 30 FPS in certain areas. Spark on the client side shows you whether it's rendering overhead, entity processing, or world loading that's the culprit.

And if you work with server networks or Bungeecord proxies, Spark covers those too.


Installing Spark

Installation depends on your setup.


For Paper/Spigot servers:

1. Download the plugin from spark.lucko.me/download
2. Drop the JAR file into your plugins/ folder
3. Restart the server

The plugin registers commands automatically. No configuration file to touch, no dependencies to juggle.

For Fabric/Forge clients:

1. Grab the Fabric/Forge version from the downloads page
2. Move it to your mods/ folder
3. Launch the game
4. Run /spark commands from the in-game chat

For Bungeecord or Velocity proxies:

1. Place the JAR in your proxy's plugins folder
2. Restart the proxy
3. Spark is ready to profile the proxy process itself

That's it. If you've installed other plugins, you've already done harder things than setting up Spark.


Key Features That Matter

The CPU Profiler is where most people start. Run `/spark profiler start`, go about your business for 30 seconds to 2 minutes, then `/spark profiler stop`. Spark generates a link to an interactive viewer showing a call tree. You can see that 40% of your CPU time is in EntityAI pathfinding, or 25% is in lighting recalculation. The tree is readable and you can apply deobfuscation mappings if you want to dig into Minecraft's internals.
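If you want more control over a run, the profiler command accepts a few options. The flags below come from spark's documented option set; verify against your installed version if one is rejected:

```
# Sample for exactly 60 seconds, then stop and upload automatically
/spark profiler start --timeout 60

# Sample every thread, not just the main server thread
/spark profiler start --thread *

# Only record ticks that take longer than 100 ms (isolates lag spikes)
/spark profiler start --only-ticks-over 100
```

The `--only-ticks-over` mode is particularly handy for spiky lag, since it filters out the healthy ticks that would otherwise dilute the call tree.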

What makes this different from other profilers: it's lightweight enough to run on production servers without tanking performance, and it gives you results in seconds instead of minutes or hours of analysis.

Memory inspection comes in three flavors. Heap summary gives you a quick snapshot of what classes are taking up the most memory (like block entities, entity objects, or chunk data). Heap dump takes a full JVM snapshot you can analyze with conventional tools like Eclipse MAT or JProfiler if you really need to dig. GC monitoring shows you when garbage collection runs, how long it takes, and how much memory gets freed up, which is useful for tuning your JVM launch flags.

Most of the time you don't need heap dumps. The summary view answers 80% of memory questions.
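The corresponding commands, per spark's documentation:

```
# Quick breakdown of which classes hold the most memory
/spark heapsummary

# Full JVM heap snapshot for offline analysis (large file; use sparingly)
/spark heapdump

# Report garbage collection activity since the server started
/spark gc

# Toggle live reporting of GC pauses as they happen
/spark gcmonitor
```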

Health reporting is the boring but useful part. `/spark tps` gives you more accurate tick timing than vanilla `/tps`. You can see min/max/average tick durations, which tells you whether your lag is consistent (something is always slow) or spiky (something is intermittently freezing the server). Same with CPU and memory metrics.

If you're integrating Spark with monitoring systems, the health reports can feed into that pipeline.
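The two health commands you'll use most (both standard spark commands):

```
# Tick rate plus min/median/percentile tick durations
/spark tps

# One-shot report covering TPS, tick timing, CPU, memory, and disk usage
/spark health
```

A quick `/spark health` before and after a change is often enough to tell whether a tweak actually helped.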


Tips, Gotchas, and What Trips People Up

One mistake: running the profiler for too short a time. Thirty seconds is the minimum; a minute or two is better. If you profile for 5 seconds, you might catch a blip rather than the actual problem.


Another: assuming the profiler output is 100% accurate. Sampling-based profiling has statistical noise. If the top result shows 39.8% vs 39.7% CPU time, they're probably the same. Look for the big outliers, not the tiny differences.

Spark ships with two profiler engines. On Linux and macOS it can use the native async-profiler backend, which has lower overhead and more detail; on other platforms it falls back to a pure-Java sampler. Spark picks the right engine automatically, and both work fine; native is just cheaper to run.
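If you want to compare the two engines yourself, spark documents a flag to force the Java sampler even where the native engine is available (flag name taken from spark's docs; double-check it against your version):

```
# Default: spark picks the best available engine (async-profiler on Linux/macOS)
/spark profiler start

# Force the portable Java sampler instead of the native engine
/spark profiler start --force-java-sampler
```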

Also worth noting: Spark requires viewing the profiler output in a web browser. You can't read the output directly from the server console. The results are hosted temporarily on spark.lucko.me, so you need internet access to view them (or you can self-host the viewer, but that's overkill for most people).

If you're managing a network with dozens of servers, profiling each one manually gets tedious. Plan for that.


Related Tools and Alternatives

A few other projects do similar work, though Spark has become the default for most server admins.

WarmRoast was the original Java profiler for Minecraft, and Spark actually evolved from it. If you're on a super old server setup, WarmRoast might be your only option, but there's no good reason to use it over Spark these days.

Timings (from older Spigot versions) provided some of this info but was nowhere near as detailed or useful as Spark's profiling. If your server still has Timings built in, Spark is a massive upgrade.

JFR (Java Flight Recorder) is a built-in JVM profiler available in Java 11+. It's powerful but also overkill for most Minecraft scenarios and harder to interpret. Spark gives you the same info in a more useful format.

For client-side performance, there's no real equivalent to Spark's client profiler in the Minecraft ecosystem. Most people resort to F3 debugging or trial-and-error.

If you're building a server and want to think about performance from the design stage, tools like Minecraft's text generator can help you avoid chat spam (a subtle performance drain), and understanding your world structure with tools like the block search tool can help you identify resource-heavy regions before they become problems.


When to Just Run It

You don't need a special reason to profile your server. If you're curious about performance, run Spark. The worst case: you learn that your server is already running great. The best case: you find a plugin that's using 50% of your CPU and didn't realize it was installed.

Most server admins run Spark once when something breaks, fix the issue, and move on. Some proactive server operators profile monthly just to catch creep (that gradual performance degradation as more data accumulates).

Either way, it's free, it's lightweight, and it answers performance questions in minutes instead of hours of guessing.


Frequently Asked Questions

Is Spark free and open source?
Yes. Spark is licensed under GPL-3.0 and completely free to use. The source code is available on GitHub at lucko/spark. There are no premium features or paid tiers.
Can I use Spark on a client to find FPS issues?
Yes. Spark works on Fabric and Forge clients to profile your local game performance. Use /spark profiler to identify whether FPS drops are caused by rendering, entity processing, or chunk loading.
What's the difference between native and Java profiling engines?
The native engine (async-profiler) is lower overhead and more detailed, but only works on Linux and macOS. The Java engine works everywhere and gives similar results with slightly higher overhead. Most users won't notice the difference.
Do I need to configure Spark after installing it?
No. Drop the plugin into your plugins folder, restart, and start profiling. There's no configuration file to edit. All options are available via in-game commands like /spark tps or /spark profiler.
Can Spark monitor performance over time automatically?
Spark includes health monitoring features that can track TPS, CPU, and memory metrics. You can integrate these metrics into external monitoring systems, though continuous profiling requires manual check-ins for most setups.