_posts/2025-11-04-mmaler-blogpost-2-quarkus-runtime-and-framework-for-cloud-native-java.adoc
These features give developers structure, sensible defaults, and clear conventions.
=== Performance that matters
Teams optimize for different goals, such as startup latency, sustained throughput, memory footprint, elasticity, and cost.
Quarkus addresses these needs by shifting work from run time to build time, keeping one development model across JVM and native, and exposing production signals such as health checks, metrics, and tracing.
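One of those production signals, a liveness health check, can be sketched as follows, assuming the `quarkus-smallrye-health` extension is on the classpath (the class and check names below are illustrative, not from the original post):

[source,java]
----
import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Discovered automatically by Quarkus and served under /q/health/live
@Liveness
@ApplicationScoped
public class ServiceLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Report the service as up, with an illustrative data point
        return HealthCheckResponse.named("service-alive")
                .up()
                .withData("checkedAt", System.currentTimeMillis())
                .build();
    }
}
----

Kubernetes liveness and readiness probes can then point at the `/q/health/live` and `/q/health/ready` endpoints that Quarkus exposes.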
Start with **JVM mode** for most services.
It starts slower and uses more memory than native, but just-in-time compilation raises steady-state throughput, scales well across cores, and offers mature garbage collectors and tuning options.
Quarkus can often start faster on the JVM than typical Spring Boot setups under comparable conditions because more work happens at build time.
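As a sketch of the JVM-mode loop, assuming a standard Maven-based Quarkus project with the default fast-jar packaging:

[source,shell]
----
# Build the default fast-jar packaging
./mvnw package

# Run in JVM mode; target/quarkus-app holds the runnable layout
java -jar target/quarkus-app/quarkus-run.jar
----

The same project can later be rebuilt as a native executable without changing application code, which is the consistent-development-model point above.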
Use **native mode** when startup latency and memory footprint are strict constraints.
Native executables start in milliseconds and can use less memory, which supports scale-to-zero workflows and lowers idle costs on small instances.
Trade-offs include lower peak throughput, limited multi-core scaling, a single-threaded garbage collector in current native images, longer build times, and a slower inner loop for developers.
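A native build goes through GraalVM's native-image; a minimal sketch, again assuming the standard Maven setup (the runner file name depends on your artifact id and version):

[source,shell]
----
# Build a native executable with a locally installed GraalVM or Mandrel
./mvnw package -Dnative

# Or build inside a container, with no local GraalVM required
./mvnw package -Dnative -Dquarkus.native.container-build=true

# Run the produced executable (name varies per project)
./target/*-runner
----

The longer build times mentioned above are most visible here, which is why the native build is usually reserved for CI rather than the inner development loop.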
If your system needs both profiles, split by workload.
Run bursty or event-driven endpoints in native mode, and run long-lived, high-throughput services on the JVM.
**Observability note.**
JVM mode exposes richer diagnostics and metrics, including GC, heap, and thread telemetry, JFR, and profiler support, which makes issue triage and performance tuning easier.
Native mode still exports application-level metrics and traces with Micrometer and OpenTelemetry, but it offers fewer VM-level signals.
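As a sketch, the corresponding wiring in `application.properties` might look like the following, assuming the `quarkus-micrometer-registry-prometheus` and `quarkus-opentelemetry` extensions are present (the endpoint value is a placeholder):

[source,properties]
----
# Export Micrometer metrics in Prometheus format under /q/metrics
quarkus.micrometer.export.prometheus.enabled=true

# Send OpenTelemetry traces to a local OTLP collector (placeholder endpoint)
quarkus.otel.exporter.otlp.traces.endpoint=http://localhost:4317
----

This configuration works identically in JVM and native mode; only the VM-level signals mentioned above differ between the two.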
Always measure your workload:

* For native, reduce reflection and dynamic class loading, trim resources, and consider profile-guided optimizations (PGO) where supported. PGO is not available in Mandrel and currently requires an Oracle GraalVM distribution that provides it.
* For the JVM, choose a garbage collector that matches your latency and heap goals, budget for warmup, and test steady-state throughput under realistic load.
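The warmup budget in the JVM bullet can be made concrete with a small, framework-free timing harness; this is an illustrative sketch (the workload and batch sizes are arbitrary), not a rigorous benchmark like JMH:

[source,java]
----
public class WarmupDemo {

    // A small CPU-bound workload the JIT can optimize over time
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) i * i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        long checksum = 0;
        // Early batches run interpreted or lightly compiled;
        // later batches typically get faster as the JIT kicks in.
        for (int batch = 0; batch < 5; batch++) {
            long start = System.nanoTime();
            checksum += work(2_000_000);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("batch " + batch + ": " + ms + " ms");
        }
        System.out.println("checksum=" + checksum);
    }
}
----

Real services warm up on real traffic rather than a synthetic loop, so steady-state numbers should always come from load tests against production-like requests.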
Taken together, these production choices provide measurable wins: