QtGrpc - Tips, Tricks & Sweet Spots
June 30, 2025 by Dennis Oberst
Since Qt 6.8, the Qt GRPC and Qt Protobuf modules have officially moved out of technical preview and are now fully supported parts of the Qt framework. In this blog post, we’ll take a look at what’s changed, where things are headed, and what we’ve learned along the way—from performance benchmarks to hidden gems worth knowing about.
What happened?
After stabilizing the public APIs, both modules now offer the kind of inter-version compatibility you'd expect from a mature Qt module. Since then, we’ve continued to refine things further: fixing bugs, simplifying internal logic, and improving performance across the board in both Qt GRPC and Qt Protobuf. We also revisited the documentation and examples, giving them a complete refresh to make them clearer and more helpful.
If you're using RPC or serialization in your projects, Qt 6.9 brings a number of improvements that make working with these technologies easier and faster. In this post, we’ll go over some of the key changes.
Documentation & Learning
If you’re new to Qt GRPC or want to catch up on the latest best practices, the brand-new Qt GRPC Client Guide is the perfect place to start.
For those exploring more advanced use cases, we’ve reworked the Chat Example into a fully functional, cross-platform chat app. It includes practical tips, useful techniques, and deeper insights to help you get the most out of Qt GRPC.
Benchmarks & Performance
We invested quite a bit of time in understanding how we’re performing in the gRPC™ ecosystem. While we haven’t hit the peak just yet, we still want to share what we’ve learned and where we currently stand. Naturally, we compared our results against grpc-c++, the reference implementation, to provide a solid baseline.
All benchmarks were conducted using Qt version 6.9. For the transport layer, we selected Unix Domain Sockets (UDS) to eliminate the overhead of the TCP networking stack and focus purely on client-side performance characteristics. We measured the system's behavior across a wide range of payload sizes, from empty messages to the default gRPC maximum of 4 MiB.
In the following sections, we’ll examine how average latency, queries per second (QPS), and throughput change as message sizes grow. All clients communicated with the same grpc-c++ server implementation. You can find the benchmark code here.
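For orientation, here is a minimal sketch of the kind of client the benchmarks exercise. The EchoService proto, its field names, and the endpoint are hypothetical placeholders, and the example uses plain TCP loopback rather than the benchmarks' UDS transport; the benchmark repository contains the actual setup.

```cpp
// Minimal Qt GRPC unary-call sketch, in the spirit of the benchmark clients.
// The EchoService proto, its fields, and the endpoint are hypothetical
// placeholders. Assumes client code generated with qt_add_grpc().
#include <QtCore/QCoreApplication>
#include <QtCore/QDebug>
#include <QtCore/QUrl>
#include <QtGrpc/QGrpcCallReply>
#include <QtGrpc/QGrpcHttp2Channel>
#include "echoservice_client.grpc.qpb.h" // generated header (name assumed)

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    echo::EchoService::Client client;
    client.attachChannel(
        std::make_shared<QGrpcHttp2Channel>(QUrl("http://localhost:50051")));

    echo::EchoRequest request;
    request.setPayload(QByteArray(64 * 1024, 'x')); // 64 KiB payload

    auto *reply = client.Echo(request).release(); // keep alive until finished
    QObject::connect(reply, &QGrpcCallReply::finished, &app,
                     [reply, &app](const QGrpcStatus &status) {
                         if (status.code() == QtGrpc::StatusCode::Ok) {
                             if (const auto resp = reply->read<echo::EchoResponse>())
                                 qDebug() << "received" << resp->payload().size() << "bytes";
                         } else {
                             qWarning() << "call failed:" << status.message();
                         }
                         reply->deleteLater();
                         app.quit();
                     });
    return app.exec();
}
```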
Average Latency
Up to a payload size of around 64 KB per message, latency stays nearly flat. Beyond that, it begins to climb noticeably. Here’s a look at the data on a logarithmic scale:
We observe the expected exponential increase in latency, which appears linear on a logarithmic scale. Qt GRPC performs particularly well with small payloads, achieving impressively low latencies in streaming scenarios, even reaching the nanosecond range. As payload sizes grow, however, the results become less consistent.
Queries Per Second (QPS)
Looking at the number of queries processed confirms what we saw in the Average Latency results. QtGrpc delivers excellent performance with small payloads, reaching up to 1.2 million QPS in client-streaming scenarios. With larger payloads, it trails the grpc-c++ implementation, highlighting an opportunity for future optimization.
Throughput
Throughput measures how much data can be transferred per second for messages of a fixed size. This metric shows us the speed at which the system can process data streams. Currently, Qt GRPC performs well up to payloads around 32 KB but then slows down, while grpc-c++ continues increasing throughput, reaching up to 7 GB/s before tapering off around 1.5 MB payloads.
Key Takeaways
To start with, it’s clear that selecting the right payload size has a major impact on both latency and throughput, so finding the optimal size for your specific scenario is crucial. If you’re aiming for ultra-low latency, stick to small messages up to around 32 KB. For maximizing throughput, larger messages up to 1.5 MB tend to perform better, and we expect Qt GRPC’s performance for these payloads to keep improving.
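One practical way to stay in the low-latency range is to chunk large uploads. Here is a hedged sketch; the UploadService proto, the Chunk message, and its field are hypothetical placeholders:

```cpp
// Sketch: keeping messages in the payload range where latency stays flat by
// splitting a large upload into ~32 KiB chunks on a client stream. The
// UploadService proto and the Chunk message are hypothetical placeholders.
#include <QtCore/QDebug>
#include <QtGrpc/QGrpcClientStream>
#include <QtGrpc/QGrpcStatus>
#include "uploadservice_client.grpc.qpb.h" // generated header (name assumed)

static constexpr qsizetype kChunkSize = 32 * 1024;

void uploadInChunks(upload::UploadService::Client &client, const QByteArray &data)
{
    upload::Chunk first;
    first.setData(data.left(kChunkSize));

    // The generated method sends the first message when opening the stream.
    std::unique_ptr<QGrpcClientStream> stream = client.Upload(first);

    for (qsizetype offset = kChunkSize; offset < data.size(); offset += kChunkSize) {
        upload::Chunk chunk;
        chunk.setData(data.mid(offset, kChunkSize));
        stream->writeMessage(chunk);
    }
    stream->writesDone(); // half-close: no more client messages

    // Release ownership so the stream outlives this scope until it finishes.
    auto *streamPtr = stream.release();
    QObject::connect(streamPtr, &QGrpcClientStream::finished, streamPtr,
                     [streamPtr](const QGrpcStatus &status) {
                         if (status.code() != QtGrpc::StatusCode::Ok)
                             qWarning() << "upload failed:" << status.message();
                         streamPtr->deleteLater();
                     });
}
```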
Choosing the right RPC type for your workload can make a significant difference. For instance, if your use case mainly involves client-side streaming but you’re using bidirectional streaming to handle irregular responses from the server, splitting these into separate client and server streams might improve throughput. Thanks to HTTP/2 multiplexing, gRPC efficiently manages multiple streams, so don’t hesitate to separate concerns by opening multiple streams when it makes sense.
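As a rough illustration of that split, a sketch under assumed proto definitions: the TelemetryService, its Subscribe/Publish methods, and all message types are hypothetical placeholders.

```cpp
// Sketch: replacing one bidirectional stream with an independent server
// stream (updates) and client stream (samples) on the same channel, which
// HTTP/2 multiplexes over a single connection. The TelemetryService proto
// and all message types are hypothetical placeholders.
#include <QtCore/QDebug>
#include <QtGrpc/QGrpcClientStream>
#include <QtGrpc/QGrpcServerStream>
#include "telemetryservice_client.grpc.qpb.h" // generated header (name assumed)

struct SplitStreams
{
    std::unique_ptr<QGrpcServerStream> updates; // server -> client
    std::unique_ptr<QGrpcClientStream> samples; // client -> server
};

SplitStreams startSplitStreams(telemetry::TelemetryService::Client &client)
{
    SplitStreams s;
    s.updates = client.Subscribe(telemetry::SubscribeRequest());
    s.samples = client.Publish(telemetry::Sample());

    auto *updates = s.updates.get();
    QObject::connect(updates, &QGrpcServerStream::messageReceived, updates,
                     [updates] {
                         if (const auto update = updates->read<telemetry::Update>())
                             qDebug() << "update received:" << update->text();
                     });

    // Outgoing samples are written independently of incoming updates:
    //   s.samples->writeMessage(sample);
    return s; // the caller owns both streams and keeps them alive
}
```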
Unary calls might seem slower in long-running benchmarks, but they have their strengths. For infrequent requests, unary calls can achieve lower per-call latency since each request is sent immediately, whereas streaming calls often batch messages, which can introduce some delay.
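If you want to verify this for your own workload, timing a single unary call is straightforward. A minimal sketch, reusing the hypothetical EchoService from above:

```cpp
// Sketch: measuring the round-trip latency of one unary call with
// QElapsedTimer, against the hypothetical EchoService from the earlier sketch.
#include <QtCore/QDebug>
#include <QtCore/QElapsedTimer>
#include <QtGrpc/QGrpcCallReply>
#include "echoservice_client.grpc.qpb.h" // generated header (name assumed)

void timeUnaryCall(echo::EchoService::Client &client)
{
    QElapsedTimer timer;
    timer.start();

    auto *reply = client.Echo(echo::EchoRequest()).release();
    QObject::connect(reply, &QGrpcCallReply::finished, reply,
                     [reply, timer](const QGrpcStatus &) {
                         qDebug() << "unary round trip:"
                                  << timer.nsecsElapsed() / 1000 << "µs";
                         reply->deleteLater();
                     });
}
```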
If you want to dive deeper, check out my talk from the Qt World Summit 2025 on how to Speed up with QtGrpc & QtProtobuf.
What's Next?
We’re very pleased with the latency improvements we’ve achieved so far. Our next priority is to focus on throughput, where further optimization is needed. If improving throughput impacts latency, we will provide configuration options to balance these trade-offs. Ideally, our goal is to offer smart defaults that deliver both excellent latency and high throughput.
To support continuous progress, we have established a dedicated benchmarking environment that allows us to systematically evaluate performance and drive ongoing improvements.
Next steps include adding client-side interceptors and client-side service configuration. Server-side support is also planned, though it will require more time and careful design.
If there are features you rely on or ideas you'd like to see explored, feel free to share them in the comments.