OpenVPN Data Channel Offload (DCO): The Definitive Guide to the Performance Boost Making OpenVPN The Fastest VPN Protocol
By Adam Bullock
OpenVPN Data Channel Offload (DCO) is among the most impactful performance innovations in VPN history. It shifts OpenVPN's performance-critical data path into the Linux kernel for dramatic throughput and efficiency gains.
Built by OpenVPN engineer Antonio Quartulli and merged into Linux kernel 6.16, DCO redefines what software VPNs can achieve. Read on to learn what DCO is, why it matters, how to use it, real performance numbers from the creator's own tests (as of February 2026), benchmarks showing OpenVPN-DCO outpacing WireGuard, frequently asked questions, and more.
What is OpenVPN DCO and why it matters
Traditional OpenVPN runs both control and data planes in userspace. That means every data packet has to be copied from kernel space to userspace and back. This process limits throughput and increases CPU overhead.
DCO changes that in the following ways:
- Handles the data plane (payload encryption/decryption, encapsulation) entirely inside the kernel.
- Eliminates costly context switches and redundant packet copies.
- Significantly improves throughput, lowers CPU usage, and reduces latency.
Kernel vs user space: the core difference
| Before DCO | With DCO |
| --- | --- |
| 1. Packet arrives in kernel | All data packet work happens inside the kernel; no back-and-forth |
| 2. Passed up to userspace | Only control traffic stays in userspace |
| 3. Userspace encrypts/decrypts | Cryptographic work spreads across CPU cores better than a userspace process can |
| 4. Passed back down to kernel | |
| 5. Sent out | |
This is all for the same reason that high-performance networking technologies like XDP or AF_XDP exist: removing overhead gets you closer to line rate.
Real performance numbers from Antonio Quartulli and third-party companies using the OpenVPN protocol
Antonio Quartulli, OpenVPN Principal Engineer and lead DCO author, recently shared benchmark data from his AMD-based workstation (Ryzen 9 9950X with a 25 Gbps Mellanox NIC) comparing legacy OpenVPN with DCO-enabled OpenVPN.
UDP Benchmarks (iperf3):
- Legacy OpenVPN (userspace, `--disable-dco`): ~3.61 Gbps
- OpenVPN with DCO enabled: ~7.15 Gbps
Nearly 2× throughput improvement on the same hardware.
Antonio notes that results vary by hardware and workload, but the throughput boost is clear and consistent. He also highlights ongoing performance work for both UDP and TCP transports. He shared the full results in a LinkedIn post.
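For anyone who wants to reproduce this style of test, here is a minimal sketch of an iperf3 run through an OpenVPN tunnel. The tunnel address 10.8.0.1 and the filename server.conf are illustrative placeholders, not details from Antonio's setup:

```shell
# Sketch of an iperf3 tunnel benchmark; 10.8.0.1 and server.conf are
# placeholders, not details from the benchmark described above.
#
# On the server, inside the tunnel network:
#   iperf3 -s
# On the client, push UDP through the tunnel for 30 seconds
# (-b 0 lifts iperf3's default 1 Mbit/s UDP bitrate cap):
#   iperf3 -c 10.8.0.1 -u -b 0 -t 30
# For the legacy baseline, restart the server with DCO off and rerun:
#   openvpn --config server.conf --disable-dco
#
# Comparing the two averages reported above:
legacy_gbps=3.61
dco_gbps=7.15
speedup=$(awk -v d="$dco_gbps" -v l="$legacy_gbps" 'BEGIN { printf "%.2f", d/l }')
echo "DCO speedup: ${speedup}x"
```

UDP with an uncapped bitrate is the usual way to find a tunnel's throughput ceiling; TCP tests layer congestion-control effects on top of it.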
Companies that utilize the OpenVPN protocol for their solutions have also run tests and verified vastly improved performance.
- Norton VPN shows a 2× speed increase and a 15% decrease in latency (source)
- ExpressVPN recorded up to a 2000% increase in performance on UDP traffic (source)
- Windscribe showed nearly gigabit speeds in their tests (source)
Hardware manufacturers now highlight the speed increases as well. The Mudi 7, a 5G NR tri-band Wi-Fi 7 travel router from GL.iNet released in April 2026, clocks higher speeds with OpenVPN-DCO (700 Mbps) than with WireGuard® (600 Mbps). In addition, the Flint 3, their first tri-band Wi-Fi 7 home router, released in June 2025, shows identical speeds when testing OpenVPN-DCO and WireGuard®.
These results show that OpenVPN with DCO is no longer a niche optimization: it puts OpenVPN performance on par with, and often ahead of, other modern VPN approaches.
See the speed for yourself
Both the self-hosted Access Server and the cloud-delivered CloudConnexa service feature DCO and free trials. Run your own tests and benchmark how fast OpenVPN DCO is in minutes.
Why the Linux Kernel Merge Matters
In April 2025, OpenVPN DCO was merged into the Linux kernel mainline and shipped starting with Linux 6.16, a huge endorsement from the Linux community.
Benefits include:
- Automatic availability on major Linux distros shipping kernel 6.16+.
- Continued code quality improvements, security patching, and maintainability as part of the kernel tree.
- Broader ecosystem support without manual kernel modules.
This integration means DCO isn't an add-on: it's now part of the Linux networking future. And since servers, virtual machines, and cloud platforms across the world run on Linux, clients see the gains no matter their own OS or device.
How to Enable DCO in OpenVPN Access Server
Important note: while configuration is necessary for Access Server, the cloud-delivered CloudConnexa commercial product from OpenVPN supports DCO natively with no configuration needed.
When DCO is enabled in both places, server and client, you get the best performance.
OpenVPN Access Server provides first-class support for DCO. Here's how to turn it on:
- Prerequisites
- OpenVPN Access Server 2.12+
- Linux distro with a kernel supporting DCO (6.16+) or a DKMS module
- Install DCO Kernel Module
```shell
# Debian/Ubuntu
apt install openvpn-dco-dkms

# RHEL/CentOS
yum install kmod-ovpn-dco
```

- Reboot if using DKMS modules.
- Enable DCO via Admin UI
- Go to Configuration -> Advanced VPN
- Check "Prefer kernel OpenVPN data channel offloading (ovpn-dco)"
- Save and restart.
- Enable via CLI (optional)
```shell
sacli -k "vpn.server.daemon.ovpndco" -v "true" ConfigPut
sacli start
```
- Verify
- Admin UI shows kernel offload active
- `ip -details link show` lists ovpn-dco interfaces.
Note: Secure Boot may require signed modules.
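On the command line, the verification step boils down to a string match on the link details. A minimal sketch follows; the module and interface names vary by build, and the sample `ip` output below is an illustrative assumption, not captured from a live server:

```shell
# Sketch of a DCO verification check (module/interface names vary by build;
# treat the strings here as assumptions).
# On a live server you would run:
#   lsmod | grep -i ovpn          # is the kernel module loaded?
#   ip -details link show         # do tunnel links report the ovpn-dco type?
# The check itself is just a string test; here it runs against sample
# output so the logic is visible without a live tunnel:
sample='7: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500
    link/none
    ovpn-dco addrgenmode none'

if printf '%s\n' "$sample" | grep -q 'ovpn-dco'; then
  status="kernel offload active"
else
  status="userspace data channel"
fi
echo "$status"
```

If the match fails on a live server, the daemon has silently fallen back to the userspace data channel, which is worth checking after kernel upgrades.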
Typical performance gains (observed & reported)
Here's how DCO performance compares to legacy OpenVPN:
| Metric | Legacy OpenVPN | OpenVPN with DCO |
| --- | --- | --- |
| Throughput | ~3.5 Gbps (userspace UDP on high-end hardware) | ~7.1 Gbps+ (kernel DCO) |
| CPU Load | Higher | Lower |
| Latency | Higher | Lower |
| Efficiency | Moderate | High |
| Multithreading | Limited | Kernel-level scalability |
Independent industry benchmarks show 3x-10x gains on some hardware and use cases.
What this means in practice
Enterprise & Cloud VPNs
- Supports large-scale remote access with lower CPU costs.
- Great for cloud gateways with high throughput requirements.
High-Speed Networking
- DCO shines on 10 Gbps+ interfaces, where userspace overheads traditionally throttle performance.
Latency-Sensitive Use Cases
- Less overhead improves real-time applications (VoIP, gaming, streaming).
Router & Edge Devices
- Embedded Linux devices get better VPN speeds without specialized hardware.
Ecosystem & Adoption
Now that DCO is part of the Linux kernel, it's gaining broader adoption:
- Major distributions ship Linux 6.16+ with support.
- Community projects and router distros are integrating support.
- Some consumer VPN clients are experimenting with DCO acceleration (and seeing improved performance and speed as a result).
Summary
OpenVPN DCO is no longer a fringe enhancement; it’s the new performance baseline for all users of OpenVPN: the protocol, the community version, and the commercial options for business, Access Server and CloudConnexa.
- Integrated into the Linux kernel
- Dramatic throughput gains (2x+ in real tests)
- Lower CPU usage, reduced latency
- Easy to enable in Access Server (and on by default in CloudConnexa)
- Broad ecosystem support
Whether you’re evaluating high-performance VPNs, architecting cloud interconnects, or tuning remote access systems, DCO delivers real, measurable benefits.
Frequently Asked Questions
Does DCO change the security model of OpenVPN?
No, it does not change the core security model.
DCO does not alter how OpenVPN protocol authenticates peers or how TLS key exchanges occur; those control plane functions remain in userspace. The security posture, including certificate/TLS negotiation and encryption algorithms, is unchanged. DCO simply accelerates where encrypted packets are processed (in the kernel rather than userspace), preserving the same cryptographic protections used by OpenVPN.
Do both the client and server need to support DCO?
No, only one side needs DCO to see performance improvements.
A DCO-enabled server alone yields a boost because the kernel handles the server’s packet path more efficiently. However, enabling DCO on both server and client results in the maximum performance gain. Clients that support DCO include OpenVPN Connect 3.4+, OpenVPN 2.6.0+, and compatible open source clients.
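For A/B testing on the client side, the `--disable-dco` flag (the same one used for the legacy baseline above) forces the userspace data path. A sketch, where `client.ovpn` and the version banner are illustrative assumptions:

```shell
# Run the same client config with and without DCO and compare throughput.
# client.ovpn is a placeholder filename:
#   openvpn --config client.ovpn                  # DCO used when available
#   openvpn --config client.ovpn --disable-dco    # force userspace data path
# Builds with DCO support generally mention it in their version output;
# the banner below is an illustrative assumption, not captured output:
banner="OpenVPN 2.6.12 [SSL (OpenSSL)] [DCO]"
case "$banner" in
  *DCO*) client_dco="available" ;;
  *)     client_dco="not built in" ;;
esac
echo "client DCO: $client_dco"
```

Running the same speed test under both invocations is the simplest way to confirm the offload is actually engaging on a given client platform.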
What this means in plain language: no matter the platform or device, if the server is DCO-enabled, you’ll see performance enhancements. This is true for Windows, Mac, Android, iPhone, and everything in-between.
How does DCO compare to switching to a different VPN protocol?
DCO does not change the OpenVPN protocol; rather, it optimizes the implementation. Protocols like WireGuard are designed with kernel integration from day one, often yielding high performance with minimal configuration. OpenVPN with DCO delivers speeds at or beyond other protocols while retaining OpenVPN's highly regarded flexibility, ecosystem compatibility, and extensive feature set, such as advanced authentication.
In practice:
- WireGuard: minimal design, highly performant.
- OpenVPN with DCO: retains OpenVPN’s flexibility and richness while matching or surpassing the performance of other protocols.
So the choice between protocols may still depend on feature requirements; DCO removes performance as a key disadvantage for OpenVPN, making it competitive with alternatives.
Is DCO available in cloud marketplace deployments like the AWS and Azure Marketplaces?
DCO is available wherever the Linux kernel supports it, including cloud environments with the appropriate OS images, such as those on the AWS Marketplace, Azure Marketplace, and more.
What matters most is the underlying kernel, as DCO is a kernel-level feature once available in the distro image.
Specifically for Access Server, it ships with its own DCO add-on which can be enabled using the instructions mentioned above.
Have other companies incorporated DCO into their use of the OpenVPN protocol for their solutions?
Yes, Windscribe incorporated it into their service in March 2025. Norton VPN integrated it in September 2025 and proclaimed, “The results revealed a dramatic boost with DCO enabled: connection speeds more than doubled, and latency fell by 15%.” ExpressVPN implemented DCO in October 2025 and noted, “With DCO implemented on OpenVPN, we’ve seen significant improvements with internal tests recording up to a 2000% increase in performance on UDP traffic.”
How does DCO help with Zero Trust VPN and ZTNA?
DCO accelerates how fast and efficiently encrypted traffic flows without changing policy enforcement. In a Zero Trust or ZTNA architecture, policy checks, user/device context, and access decisions still occur at the control plane and application layer. DCO affects the data plane, making Zero Trust VPN tunnels faster and more scalable — essential when the VPN is part of a broader Zero Trust stack. Put simply:
- Policy = control plane (unchanged)
- Performance = data plane (improved)
Faster tunnels mean better responsiveness and capacity when enforcing Zero Trust access controls in high-traffic environments.