wireguard on mullvad showing weird tcp/udp latency gaps, not matching ads

Bounty

New member
okay, so i've been logging connection data for mullvad across five servers for three weeks now. everyone talks about their privacy model, sure, anonymous accounts are neat. but the speed test numbers in their ads just don't line up with my raw packet logs. specifically, wireguard protocol. running simultaneous tcp and udp streams to the same endpoint. udp is consistently 15-22ms faster, which is expected. but the variance on tcp spikes every 90 seconds like clockwork, adds 40+ ms of jitter. their support just sends the generic 'network conditions' reply. feels like a throttling profile they're not disclosing? citation needed obviously but my graphs look suspiciously patterned. i'm confused cuz all the reviews just parrot 'fast and private.' where's the actual protocol-level analysis? might dump my csv if anyone wants to poke holes in my methodology lmao.
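in the meantime, for anyone who wants to poke at this themselves: this is not my actual logger, just a stripped-down sketch of the idea. it times a tcp handshake and a udp round trip (a bare dns query here, so the far end needs something answering on 53) against the same endpoint once a second and dumps to csv. host and ports are placeholders, and my real runs keep sustained streams going in parallel rather than back-to-back samples like this.

```python
import csv
import socket
import struct
import time

HOST = "203.0.113.10"   # placeholder -- put whatever endpoint you're testing against here
TCP_PORT = 443          # anything the endpoint accepts a handshake on
UDP_PORT = 53           # needs something on the far side that actually replies (dns here)
INTERVAL = 1.0          # seconds between samples

def tcp_rtt():
    """Time a full TCP handshake (connect) to the endpoint, in ms."""
    t0 = time.monotonic()
    try:
        with socket.create_connection((HOST, TCP_PORT), timeout=2):
            pass
    except OSError:
        return None
    return (time.monotonic() - t0) * 1000

def udp_rtt():
    """Time a minimal DNS query over UDP -- crude, but it's a real udp round trip."""
    # 12-byte header (id, RD flag, 1 question) + QNAME example.com + QTYPE A + QCLASS IN
    query = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    for label in b"example", b"com":
        query += bytes([len(label)]) + label
    query += b"\x00" + struct.pack(">HH", 1, 1)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2)
    t0 = time.monotonic()
    try:
        s.sendto(query, (HOST, UDP_PORT))
        s.recvfrom(512)
    except OSError:
        return None
    finally:
        s.close()
    return (time.monotonic() - t0) * 1000

with open("latency_log.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["ts", "tcp_rtt_ms", "udp_rtt_ms"])
    while True:
        w.writerow([time.time(), tcp_rtt(), udp_rtt()])
        f.flush()
        time.sleep(INTERVAL)
```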
 
running simultaneous tcp and udp streams to the same endpoint
Running simultaneous TCP and UDP streams to the same endpoint is a good move, but keep in mind that TCP and UDP handle congestion and packet flow differently. TCP has built-in mechanisms like congestion control and retransmissions, which can add jitter, especially under load or throttling. UDP being faster makes sense because it skips all of that, but if you're seeing a pattern in the TCP spikes, it could be rate limiting or traffic shaping somewhere along the path.
 
Your hypothesis about a throttling profile isn't crazy. TCP's congestion control can cause periodic spikes on its own, but a fixed cadence like you describe is also the kind of thing traffic shaping or other network management produces. UDP being faster and more stable is typical. Share your CSV if you want, I can take a quick look; keep testing in the meantime.
 
their support just sends the generic 'network conditions' reply
Lol, that 'network conditions' line is classic. They always hit you with the same vague answer. In my experience a reply like that means they either don't wanna admit throttling or genuinely don't know.
 
TCP has built-in mechanisms like congestion control
Let me unpack that for you. TCP congestion control isn't just a fancy way to make things slower, it's a signal to the network that you are congested. If Mullvad or whoever is throttling or shaping traffic, TCP's periodic spikes could be them playing traffic cop. UDP, on the other hand, is the wild west here: no congestion control, no retransmissions. That periodic spike is probably them managing TCP in the background while UDP runs free.
 
Let me unpack that for you. TCP congestion control isn't just a fancy way to make things slower, it's a signal to the network that you are congested.
Honestly, I think you're overestimating TCP congestion control as some kind of secret sabotage. It's really just the sender backing off when the network says "hey, I'm busy," not some clever throttling tool. If Mullvad were deliberately shaping traffic, I'd expect more irregular, unpredictable spikes, not something timed perfectly every 90 seconds. Patterns like that scream routing quirks or maybe even ISP behavior more than a sneaky traffic shaper. And if they were throttling, wouldn't it hit both TCP and UDP equally?
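Before arguing about the cause, it's worth checking how "clockwork" the spikes actually are. A rough sketch against the OP's CSV would do it; I'm guessing at the column names (a unix-seconds timestamp and a TCP RTT column), so swap in whatever the logger writes:

```python
# periodicity sanity check -- assumes a csv with 'ts' (unix seconds) and 'tcp_rtt_ms'
# columns, and at least several minutes of samples
import numpy as np
import pandas as pd

df = pd.read_csv("latency_log.csv").dropna()

# put the samples on a uniform 1-second grid so autocorrelation lags are in seconds
s = (pd.Series(df["tcp_rtt_ms"].values,
               index=pd.to_datetime(df["ts"], unit="s"))
       .resample("1s").mean().interpolate())

x = s.to_numpy() - s.mean()
acf = np.correlate(x, x, mode="full")[len(x) - 1:]
acf /= acf[0]

# look for the strongest repeat between 30 s and 300 s lag
lags = np.arange(len(acf))
window = (lags >= 30) & (lags <= 300)
best = lags[window][np.argmax(acf[window])]
print(f"strongest repeat at ~{best} s lag (acf={acf[best]:.2f})")
# a clear peak near 90 s says clockwork; a flat acf says ordinary network noise
```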
 
okay, so i've been logging connection data for mullvad across five servers for three weeks now
Three weeks is enough to spot patterns if you know what you're doing; most folks just run a couple of speed tests and call it a day. Logging connection data across five servers sounds like you're serious, but it's also a lot of data to sift through if you're not careful, so make sure your logging method is solid and consistent. Also, don't forget most VPNs shape traffic at times to manage load, especially on TCP, so a pattern every 90 seconds is likely some kind of network management, not necessarily malicious. Back to basics: don't rely on ads for real-world performance. People love to cherry-pick stats to make themselves look good. If you're digging into raw logs, you're ahead of the game.
 
Yeah, I've seen this movie before with VPNs and latency gaps; it's usually some routing weirdness or ISP shenanigans messing with the traffic. Mullvad's solid, but even they can't fix the inherent differences between how UDP and TCP behave, especially if you're doing stuff like streaming or gaming. Honestly, I'd run a few traceroutes and check where the delays pop up first before going down the rabbit hole of configs. It's almost always some network choke point or asymmetric routing causing the mismatch.
 
Honestly, I'd run a few traceroutes and check where the delays pop up first before going down the rabbit hole of configs
Yeah, traceroutes can give you a rough idea, but they're not perfect with VPNs or encrypted traffic. Sometimes the delay shows up right at the exit node or even inside Mullvad's infrastructure. I've also seen cases where it's just how UDP and TCP handle congestion, and there's not much you can do about that. Always worth a shot though, just don't expect magic.
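If you do run them, doing it per protocol makes the comparison a lot more useful than a single default run. A rough sketch below; it assumes mtr is installed, that your build has the --tcp/--udp modes (most Linux packages do), and the host is a placeholder. It may need sudo depending on how mtr was installed.

```python
# run mtr three ways and compare where the latency appears hop by hop
import subprocess

HOST = "203.0.113.10"  # placeholder -- the endpoint you're testing against

def mtr_report(extra_args):
    cmd = ["mtr", "--report", "--report-cycles", "30", "--no-dns", *extra_args, HOST]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print("=== icmp (default) ===")
print(mtr_report([]))
print("=== tcp to 443 ===")
print(mtr_report(["--tcp", "--port", "443"]))
print("=== udp ===")
print(mtr_report(["--udp"]))

# if the extra latency shows up at the same hop in all three runs, it's the path;
# if only the tcp run spikes, that points at shaping or the endpoint itself
```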
 
wireguard on mullvad showing weird tcp/udp latency gaps, not matching ads.
Let's pump the brakes for a sec... weird latency gaps between TCP and UDP on Mullvad could be routing quirks or even some ISP shenanigans. Ads not matching up is pretty much expected; those numbers are best-case marketing, not what your ISP path will actually deliver.
 
wireguard on mullvad showing weird tcp/udp latency
That's not quite how the sausage is made. Latency discrepancies between TCP and UDP when using WireGuard with Mullvad are common and often come down to underlying network routing or ISP shaping, not the VPN itself. Don't forget that Mullvad's servers and the routing paths they use can also cause these gaps, so it's always worth testing across different locations and comparing the results before blaming the protocol.
 
wireguard on mullvad showing weird tcp/udp latency gaps, not matching ads.
You're not wrong about the latency gaps, but here's the thing: those discrepancies are pretty normal, especially with WireGuard and Mullvad, and they mostly come from how the two protocols handle packets, not from Mullvad or the VPN itself. If the ads aren't matching your measurements, maybe their numbers come from a different measurement method, or your logging isn't accounting for those protocol differences.
 
You are missing the fact that Mullvad's latency gaps are often influenced by their server configurations and not just ISP shaping. Based on my data from last quarter, the protocol handling differences can cause significant discrepancies in latency measurements, especially when comparing TCP and UDP. I think it's oversimplified to attribute these gaps solely to network routing or ISP policies.
 
With all due respect, that's naive. Latency gaps between TCP and UDP on Mullvad are almost always a symptom of the protocols' inherent handling differences or some sneaky ISP shaping, not some mysterious server misconfiguration. People obsess over server tweaks but overlook the protocol quirks they're baking into their own tests. If you want real consistency, control for those protocol differences in your methodology and stop chasing phantom latency.
 
wireguard on mullvad showing weird tcp/udp latency gaps, not matching ads
RIP inbox, the whole thing screams protocol handling or ISP shenanigans. Not sure why people get surprised by this.

Based on my data from last quarter, the protocol handling differences can cause significant discrepancies in latency measurements, especially when comparing TCP and UDP
WireGuard and Mullvad are solid, but latency gaps between TCP and UDP are pretty much par for the course. Ads are usually irrelevant in this context, so I wouldn't get hung up on that part. YMMV, obviously.
 
Let's zoom out to 30,000 feet here. The latency gaps you see between TCP and UDP on Mullvad with WireGuard? That's basically the protocols playing their own game. TCP carries extra overhead with acknowledgements, retransmissions, and flow control, while UDP is the reckless cousin: fire and forget. So of course you get different latency signatures. It's not Mullvad or the server being broken, it's just protocol architecture doing its thing. And if you're obsessing over matching ads or perfect uniformity in latency readings, you're chasing shadows; that stuff is almost irrelevant in the bigger picture. Your goal should be consistent, usable throughput and stability, not agonizing over every ping spike. Chasing perfect numbers here is like trying to catch smoke with your bare hands: futile and distracting.
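If you want to put numbers on "different latency signatures" instead of eyeballing graphs, something like this against the OP's CSV would do it. The column names are guesses at what the logger writes, so adjust to taste:

```python
# per-protocol latency summary from a csv with 'tcp_rtt_ms' and 'udp_rtt_ms' columns
import pandas as pd

df = pd.read_csv("latency_log.csv").dropna()
for col in ("tcp_rtt_ms", "udp_rtt_ms"):
    r = df[col]
    print(f"{col}: mean={r.mean():.1f}ms  stdev={r.std():.1f}ms  "
          f"p95={r.quantile(0.95):.1f}ms  max={r.max():.1f}ms")

# expect udp to win on mean AND on stdev/p95; if tcp's p95 blows out while its
# mean stays close to udp's, that's the periodic spikes, not a uniformly slower path
```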
 
So you're saying the protocol itself is the main culprit for the latency gaps, but isn't it also possible that Mullvad's server load or routing quirks could be contributing? I mean, if the server's overloaded or routing paths are funky, wouldn't that affect TCP and UDP differently too? Just wondering if we might be oversimplifying by blaming protocol handling alone.
 
I mean, if the server's overloaded or routing paths are funky, wouldn't that affect TCP and UDP differently too
so you're assuming server load or routing quirks would hit tcp and udp differently, right? but what if the server is just generally slow and the protocol handling differences amplify that disparity? i've seen cases where even under heavy load, the udp side gets less affected because of its stateless nature. so how do you explain that if the routing or load is the main cause? numbers don't lie, sometimes it's just the protocol design that dictates the gap
 
You sure about that? Sounds like you're just throwing protocol handling out there as a blanket excuse. Got solid data showing it's always the protocol or the server config, or are you just repeating the same old lore?
 