Networking & Diagnostics · 16 min read

    How to lower ping and reduce high latency: a practical guide

    By Dana · Server Monitoring & Linux

    The honest answer to "how to lower ping" is: find which hop between you and the server is adding the latency, then fix that specific hop. Almost every guide on the internet leads with "use a wired connection" and stops there — but if your latency lives in the ISP backbone or on the server side, no amount of cable swapping will help. The fix has to match the failure layer, and identifying that layer is a five-minute job once you know which commands to run.

    This guide walks through the actual diagnosis of high latency (ping, traceroute, mtr), the fixes that work at each layer (client device, Wi-Fi, LAN, ISP, transit, server), and the things that look like fixes but aren't. It's written for both end users (the gamer or video-conferencer trying to lower ping in the next 30 minutes) and operators (the engineer trying to figure out why p95 latency on a service jumped 80 ms).


    What "ping" actually measures

    A ping is a tiny ICMP echo request and the matching reply. The number you see is round-trip time (RTT) — the time the packet took to travel to the destination and come back. It does not measure:

    • One-way latency (forward and reverse paths can be asymmetric).
    • Throughput / bandwidth.
    • Packet loss directly (though ping reports lost packets separately).
    • TCP connection setup or TLS handshake time (which add their own latency on top of ping).

    The components of a real RTT, summed up:

    • Client OS scheduling + NIC — usually < 0.5 ms on a modern PC, ~1–3 ms on a phone.
    • Local network (Wi-Fi or wired) — wired Ethernet: < 1 ms; Wi-Fi: 2–30 ms depending on signal/channel.
    • ISP last mile — fiber: 1–5 ms; cable: 5–20 ms; DSL: 15–50 ms; cellular 4G: 30–80 ms; 5G: 8–30 ms.
    • ISP backbone + transit — varies wildly; 5–80 ms within a country, 50–200 ms intercontinental.
    • Server-side hop — usually < 1 ms in datacenter, longer if behind load balancers.

    Knowing these baselines makes diagnosis fast: a 250 ms ping to a server in your own city is not normal. A 180 ms ping from Europe to a US-east server is normal physics.

    A good rough target for "low ping":

    • Gaming: under 50 ms is comfortable; under 30 ms is competitive.
    • Video calls: under 150 ms one-way (~300 ms RTT) is acceptable; under 100 ms is good.
    • General browsing / SaaS: under 100 ms for the first byte feels instant; 100–250 ms feels normal; > 500 ms feels slow.

    If you're already at 30 ms RTT to your gaming server, there is nothing to "lower" — the speed of light is the speed of light.
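    That physics floor is easy to estimate: light in fiber travels at roughly 200,000 km/s (about two-thirds of c), so the minimum possible RTT is twice the path distance divided by that speed. A quick sketch of the arithmetic (real paths are longer than the great-circle distance, so measured RTT is always higher):

```shell
# Minimum possible RTT for a given one-way fiber distance, assuming
# light in fiber at ~200,000 km/s (about 2/3 of c).
min_rtt_ms() {
  awk -v km="$1" 'BEGIN { printf "%.1f\n", 2 * km / 200000 * 1000 }'
}

min_rtt_ms 8000   # intercontinental scale: prints 80.0
min_rtt_ms 300    # same-country hop: prints 3.0
```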


    Step 1 — Measure baseline ping

    Before changing anything, get a clean number:

    # Linux / macOS
    ping -c 20 example.com
    
    # Windows
    ping -n 20 example.com
    

    Sample output:

    20 packets transmitted, 20 received, 0% packet loss, time 19023ms
    rtt min/avg/max/mdev = 18.412/22.150/29.804/2.871 ms
    

    Five numbers worth noting:

    • avg — your typical latency. This is "your ping".
    • min — the floor. The best you'll ever see; it's bounded by physics.
    • max — the worst spike. Big gap between avg and max means jitter.
    • mdev (Linux) / standard deviation — jitter. High jitter is worse than high latency for real-time apps; voice calls and games tolerate steady 80 ms much better than fluctuating 20–120 ms.
    • packet loss — 0% is the only acceptable number on a wired connection. > 1% indicates a real problem somewhere.
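    For scripted before/after comparisons, the summary line can be parsed mechanically. A minimal sketch with awk (the 20%-of-avg jitter threshold is an arbitrary illustration, not a standard):

```shell
# Extract avg and mdev from the Linux ping summary line and flag
# high jitter (mdev above 20% of avg -- an arbitrary threshold).
parse_rtt() {
  awk -F'[/ ]' '/^rtt/ {
    avg = $8; mdev = $10
    printf "avg=%s ms jitter(mdev)=%s ms\n", avg, mdev
    if (mdev + 0 > 0.2 * avg) print "WARNING: high jitter"
  }'
}

echo "rtt min/avg/max/mdev = 18.412/22.150/29.804/2.871 ms" | parse_rtt
# prints: avg=22.150 ms jitter(mdev)=2.871 ms
```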

    You can run a continuous ping in a separate terminal while you make changes — it's the fastest way to see whether each change actually helped.

    If you'd rather run this from a browser-based tool that pings from cloud nodes (useful when your machine itself is the suspect), use the Ping tool.


    Step 2 — Find which hop is adding the latency

    Total ping is a sum. To know which segment is responsible, run a traceroute (or better, mtr):

    # Linux / macOS — basic
    traceroute example.com
    
    # Linux / macOS — better (combines traceroute + ping per hop)
    mtr -rwbzc 50 example.com
    
    # Windows — basic
    tracert example.com
    
    # Windows — better
    pathping example.com
    

    mtr is the right tool for this — it pings every hop in the route 50 times (with -c 50) and reports loss and latency per hop. Sample output:

    HOST: client                       Loss%   Snt   Last   Avg  Best  Wrst StDev
      1.|-- 192.168.1.1                 0.0%    50    1.0   1.1   0.9   2.0   0.2
      2.|-- 10.0.0.1                    0.0%    50    8.4   8.5   7.9  11.0   0.5
      3.|-- isp-edge.example.net        0.0%    50   12.0  12.3  11.8  14.2   0.5
      4.|-- isp-core1.example.net       0.0%    50   13.5  13.6  13.1  15.0   0.4
      5.|-- ix-100.he.net              80.0%    50   95.2  98.4  85.0 120.5  10.2  ← loss + latency spike here
      6.|-- core1.aws.net               0.0%    50   95.8  96.1  92.5  99.0   1.2
      7.|-- example.com                 0.0%    50   97.5  97.8  94.2 100.3   1.0
    

    What to look for:

    • Where loss starts — column Loss%. Loss starting at hop N and continuing afterwards usually means hop N is the troubled one (or its upstream).
    • Where latency jumps — a 5 ms hop followed by a 95 ms hop. Sometimes that's just the geographic crossing (the next hop is in a different country), and the latency stays high for the rest. Sometimes it's a congested or misrouted hop.
    • Loss that vanishes downstream — some intermediate hops show high loss but the destination shows 0%. That usually means the hop is rate-limiting ICMP replies, not dropping real traffic. The hops where loss persists to the destination are the ones that actually matter.
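    The "loss that persists to the destination" rule can be mechanised over an mtr report. A sketch, assuming the report format shown above:

```shell
# If the final hop shows loss, report every lossy hop; otherwise the
# intermediate loss is probably just ICMP rate-limiting at a router.
flag_real_loss() {
  awk '/\|--/ { n++; host[n] = $2; loss[n] = $3 + 0 }
       END {
         if (loss[n] > 0) {
           for (i = 1; i <= n; i++)
             if (loss[i] > 0) print host[i], loss[i] "%"
         } else {
           print "destination shows 0% loss: intermediate loss is likely ICMP rate-limiting"
         }
       }'
}

flag_real_loss <<'EOF'
  1.|-- 192.168.1.1                 0.0%    50    1.0   1.1   0.9   2.0   0.2
  5.|-- ix-100.he.net              80.0%    50   95.2  98.4  85.0 120.5  10.2
  7.|-- example.com                 0.0%    50   97.5  97.8  94.2 100.3   1.0
EOF
# prints the rate-limiting message: hop 5's 80% loss never reaches hop 7
```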

    If you'd rather run this from cloud nodes (to compare against your local machine's view), use the Traceroute tool or the MTR tool.

    Once you've identified where the latency lives, the fix is at that layer.


    Layer 1 — Your device

    The fastest checks before anything else:

    • Close bandwidth-heavy apps. Cloud backups, OS updates, video streaming, torrent clients, even Slack/Discord syncing media all share your link. nethogs (Linux), Activity Monitor → Network (macOS), or Task Manager → Performance → Network (Windows) shows per-process traffic.
    • Restart the network stack. Often the lowest-effort, highest-yield fix. On Windows: ipconfig /release && ipconfig /renew && ipconfig /flushdns. On Linux: sudo systemctl restart NetworkManager or sudo dhclient -r && sudo dhclient. On macOS: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder.
    • Check for background CPU saturation. A pegged CPU adds latency to outgoing packets because the kernel can't schedule the network stack quickly. top / htop (Linux/macOS), Task Manager (Windows). If a single process is at 100%, fix that first.
    • Update the network driver (laptops especially). Outdated Wi-Fi drivers are a surprisingly common cause of inflated latency and jitter.

    For more on identifying which process is using bandwidth or CPU, see the existing KB on process inspection on Ubuntu (commands work the same on most Linux distros).
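    The CPU-saturation check in the list above can be scripted. A rough Linux-only heuristic (load average is an imperfect proxy for saturation; the comparison is illustrative):

```shell
# Compare 1-minute load average to core count (Linux). Load above the
# core count means runnable work is queueing, which delays packet
# scheduling too.
load=$(cut -d' ' -f1 /proc/loadavg)
cores=$(nproc)
awk -v l="$load" -v c="$cores" 'BEGIN {
  if (l + 0 > c + 0)
    print "CPU saturated (load " l " on " c " cores): fix this before blaming the network"
  else
    print "CPU load OK (" l " on " c " cores)"
}'
```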


    Layer 2 — Wi-Fi (and why "wired is better" is real)

    Wi-Fi is the single biggest source of avoidable latency for most users at home. The numbers:

    • Modern Wi-Fi 6 / 6E, strong signal, no congestion: 2–6 ms added latency.
    • Wi-Fi 5 (802.11ac), decent signal: 5–15 ms added latency.
    • Older Wi-Fi 4 (802.11n) or weak signal: 15–50 ms, sometimes much more.
    • Congested 2.4 GHz band (apartment building): 30–200 ms with high jitter.

    Quick wins, in order of effort:

    1. Move to 5 GHz if you're on 2.4 GHz. Most home routers expose two SSIDs; the 5 GHz one is usually labelled <name>-5G. Do this on the device that's complaining about ping, not just the router.
    2. Get closer to the router, or move the router. Wi-Fi signal degrades roughly with the square of distance and is heavily attenuated by walls (especially concrete and metal).
    3. Change Wi-Fi channel. On 2.4 GHz, channels 1, 6, and 11 are the only non-overlapping ones; use whichever your neighbours aren't on. On 5 GHz there are many more options. Tools like Wifi Analyzer (Android) show the local channel landscape.
    4. Disable band steering if your router supports it but is making bad choices (sticking to 2.4 GHz when 5 GHz is available). Most routers let you split SSIDs so you can pick.
    5. Switch to wired Ethernet. Single biggest improvement. Even powerline adapters or MoCA (over coax) typically beat Wi-Fi for latency. For competitive gaming or production-quality video calls, wired is the only reliable answer.

    The "wired matters" effect is real and quantifiable: in a typical home with a Wi-Fi 5 router, switching from 5 GHz Wi-Fi to wired Ethernet shaves 5–15 ms off RTT and eliminates almost all jitter. That's enough to flip "frustrating" to "fine" for gaming.
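    The channel arithmetic behind point 3: 2.4 GHz channel centers sit 5 MHz apart (channel n is centered at 2407 + 5n MHz), but each 802.11 channel occupies roughly 22 MHz, so non-overlapping channels need centers at least five steps apart — which is exactly the 1/6/11 set. A sketch that prints the spans:

```shell
# Frequency spans of channels 1, 6 and 11, showing they don't overlap
# (center 2407 + 5n MHz, ~22 MHz occupied width).
awk 'BEGIN {
  for (n = 1; n <= 11; n += 5) {
    center = 2407 + 5 * n
    printf "channel %2d: %d-%d MHz\n", n, center - 11, center + 11
  }
}'
# channel  1: 2401-2423 MHz
# channel  6: 2426-2448 MHz
# channel 11: 2451-2473 MHz
```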


    Layer 3 — Your LAN and router

    Past the Wi-Fi, the LAN side has its own surprises:

    • Reboot the router. Yes, really. Cheap consumer routers leak memory, build up stale connection-tracking state, and develop persistent latency over weeks of uptime. A monthly reboot is not a meme — it's a workaround for firmware that you can't fix any other way.
    • Update router firmware. Manufacturers ship security and stability fixes; many of them affect latency under load.
    • Check the router's CPU. If the router has a status / advanced page, look at CPU load. A maxed-out router CPU (caused by streams of small packets, lots of devices, an overloaded iptables/QoS rule set) tanks latency for everything.
    • Disable consumer "QoS" / "anti-bufferbloat" features unless you've configured them deliberately. Default QoS on cheap routers often adds latency rather than removing it.
    • Enable smart-queue management (SQM / fq_codel / cake) if your router supports it. This is the real anti-bufferbloat fix and works very well — OpenWrt routers and some pfSense/OPNsense setups support it natively. The improvement is dramatic on connections with high bandwidth and high bufferbloat.

    To check whether bufferbloat is a problem on your link, run a saturated-link test: start a continuous ping in one terminal, then start a heavy upload in another (e.g. iperf3 -c <iperf3-server> -t 60, or any sustained large upload). If your idle ping is 20 ms and your saturated ping is 300 ms, you have bufferbloat — SQM is the fix.
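    The idle-vs-saturated comparison reduces to a grading sketch (the thresholds here are illustrative, not from any standard):

```shell
# Grade bufferbloat from idle and saturated RTT averages (in ms).
bufferbloat_grade() {
  awk -v idle="$1" -v loaded="$2" 'BEGIN {
    d = loaded - idle
    if      (d < 30)  print "OK: +" d " ms under load"
    else if (d < 100) print "moderate bufferbloat: +" d " ms, consider SQM"
    else              print "severe bufferbloat: +" d " ms, enable SQM (fq_codel/cake)"
  }'
}

bufferbloat_grade 20 300   # prints: severe bufferbloat: +280 ms, enable SQM (fq_codel/cake)
```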


    Layer 4 — Your ISP

    Past your router, you're on the ISP. Things you can change:

    • Switch Wi-Fi → Ethernet on the ISP's gateway too — some "fiber" plans use a hybrid where the ONT-to-gateway Wi-Fi link adds latency.
    • Move from cellular to fixed-line if possible. Even fast 5G has higher latency than fiber.
    • Switch ISP plan or carrier. Some ISPs have noticeably better peering (lower transit latency to common destinations) than others. The clue: traceroute hops 3–6 are inside your ISP. If those hops are slow or have high variance, the ISP is the cause.
    • Open a ticket with the ISP if you have packet loss inside their network. Loss in their AS is their problem to fix; the support process is annoying but the fix usually exists. Bring your mtr output.

    What you can't change: physics. A ping to a server 8,000 km away can never be lower than ~80 ms RTT no matter how good your ISP is. If your latency is dominated by geographic distance, the only real fix is to use a server closer to you (or move closer to the existing server, which is typically out of scope).


    Layer 5 — VPN: usually adds latency, sometimes reduces it

    A VPN routes your traffic through an extra server before it reaches the destination. For most users, this adds latency. For a specific minority, it reduces it.

    A VPN reduces latency when:

    • Your ISP is routing you suboptimally (sending US-bound traffic through Europe, for example). A VPN with a server near the destination can short-circuit the bad routing.
    • Your ISP is throttling specific traffic (some ISPs throttle gaming or streaming flows). The VPN hides the traffic class.
    • The destination's anti-DDoS layer is closer to a VPN node than to your ISP's egress.

    For everyone else, a VPN adds 5–50 ms of overhead and you should turn it off when latency matters.

    To test: ping the destination with the VPN on, then with it off. Whichever is lower is the right answer for that destination.


    Layer 6 — The server side (for operators)

    If you operate the server and your users are reporting high ping, the cause is rarely the user's network — it's usually:

    • The server is geographically far from the users. A US-east server is going to feel slow for users in Singapore. Multi-region deployment, anycast, or a CDN in front (for static and cacheable content) is the real fix.
    • The server is overloaded. Even a 5-ms-physical-RTT server can show 300 ms ping if the kernel is overwhelmed. CPU near 100%, network queue saturation (netstat -s | grep -i drop), or context-switch storms all do this.
    • A CDN / load balancer is adding hops. Each hop is another sub-millisecond — usually negligible — except when the CDN PoP nearest the user is on the wrong continent. Check the CDN's edge presence in the user's region.
    • TCP / TLS overhead, not ping. Users may say "high ping" when they actually mean "slow page load". TLS adds 1–2 RTTs of handshake; HTTP/3 (QUIC) reduces that to 0–1. Enable HTTP/2 or HTTP/3 if you haven't.
    • Slow database / app under load. Application latency is on top of network latency. The user's "ping" to the service includes time to reach the front door; if the door is slow to open, the experience is bad regardless of network. APM is the right tool for this — see the discussion in What is a 504 Gateway Timeout.
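    The handshake-overhead point is easy to quantify: a fresh HTTPS connection pays roughly 1 RTT for the TCP handshake, 2 RTTs for TLS 1.2 (1 for TLS 1.3), and 1 RTT for the request itself before the first byte arrives. A sketch of the arithmetic, ignoring server processing time:

```shell
# Approximate time-to-first-byte for a fresh HTTPS connection:
# TCP handshake (1 RTT) + TLS handshake + request/response (1 RTT).
ttfb_ms() {
  rtt=$1; tls_rtts=$2   # tls_rtts: 2 for TLS 1.2, 1 for TLS 1.3
  awk -v r="$rtt" -v t="$tls_rtts" 'BEGIN { print r * (1 + t + 1) " ms" }'
}

ttfb_ms 50 2   # TLS 1.2: prints 200 ms
ttfb_ms 50 1   # TLS 1.3: prints 150 ms
```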

    The server-side discipline that pays off:

    1. Continuous network monitoring from multiple regions. A spike in ping from Frankfurt to a Singapore server is invisible from a US-based monitor; you need probes near your users.
    2. Per-region uptime + latency dashboards. Track each region's p50 and p95 ping over time so a slow drift is caught before users notice.
    3. Synthetic checks at the application layer, not just ping. ICMP can be deprioritized by network gear; what matters to users is HTTP request RTT, which a real check simulates correctly.
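    Point 2's p50/p95 tracking needs a percentile computation somewhere in the pipeline; a minimal nearest-rank sketch over a stream of RTT samples (one ms value per line):

```shell
# p50 / p95 by the nearest-rank method: index = ceil(N * p).
percentiles() {
  sort -n | awk '
    { v[NR] = $1 }
    END {
      i50 = int(NR * 0.50); if (i50 < NR * 0.50) i50++
      i95 = int(NR * 0.95); if (i95 < NR * 0.95) i95++
      printf "p50=%s ms p95=%s ms\n", v[i50], v[i95]
    }'
}

seq 1 100 | percentiles   # prints: p50=50 ms p95=95 ms
```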

    Things that look like fixes but aren't

    • "Optimizing" Windows network settings via a registry tweak. TCP autotuning, ACK frequency, and similar legacy knobs were relevant on Windows XP. On Windows 10/11 the defaults are correct; tweaking them rarely helps and sometimes hurts. Don't let YouTube tutorials change registry keys for you.
    • Buying a "gaming router". Most "gaming routers" are regular routers with a slightly better CPU, a louder colour scheme, and a marketing-flavored QoS profile. The actual fix is wired Ethernet + a router with real SQM. Spend the budget there instead.
    • Disabling IPv6. Common advice from a decade ago; usually irrelevant today. If you suspect IPv6 specifically, use ping -4 and ping -6 separately to compare; only if v6 is much worse should you investigate further.
    • Changing DNS server. DNS speed affects how fast a new connection starts (often imperceptibly). It does not affect ping to an already-resolved host. If your "ping" feels slow only when first opening a site, DNS is the likely cause; otherwise it isn't.
    • "Optimizers" or "internet boosters". Almost all of these are placebo at best; many install adware or proxy traffic through unknown servers. Don't.

    Operational tips

    • Always compare the same destination. Pinging your router (ping 192.168.1.1), your ISP's first hop, and the actual remote server gives you a layered breakdown — but only if you compare ping to the same hosts before and after a change.
    • Use long enough samples. A 4-packet ping is not enough to tell whether latency is stable. ping -c 100 (or mtr -c 100) gives you a real distribution.
    • Watch jitter, not just latency. A 20 ms ping with ±2 ms is a comfortable connection. A 20 ms average with ±80 ms is going to feel terrible for voice and games. The mdev / stddev field is the one to watch.
    • Monitor both ICMP and HTTP latency. Some networks deprioritize ICMP and your ping results are pessimistic. HTTP-level latency is what users actually experience.
    • Capture before you change. If you're going to change five settings to "fix" latency, you'll never know which one helped. Run an mtr -c 200 dest > before.txt first. Make one change. Run it again. Diff. Repeat.

    Catch latency regressions before users complain

    Lowering ping once is a one-shot win. Keeping it low requires monitoring — ping degrades silently for a hundred reasons (an ISP changed peering, a CDN PoP went offline, a router started leaking memory, a server moved regions) and the first signal is usually a user complaint.

    What good latency monitoring looks like:

    • ICMP ping checks from multiple regions to your service, with alerting on sustained increases (not just one-shot spikes — those are usually transient).
    • HTTP and TCP latency checks from the same regions, since some networks treat ICMP differently from real traffic.
    • Per-region trend tracking so a slow drift from 30 ms to 80 ms is caught before it becomes a complaint.
    • Correlation with server-side metrics (CPU, queue depth) so you can tell "the network got slow" apart from "the server got slow".

    Xitoring's uptime monitoring runs ICMP / TCP / HTTP probes from multiple regions and alerts on the first failure or latency excursion. Pair it with server monitoring on the host to see whether a latency spike correlates with CPU saturation or queue overflow at the same moment — usually the difference between "the network is the cause" and "the host is the cause" answers itself once those two signals share a dashboard.

    For more on the monitoring side specifically, see How to set up ping uptime monitoring (the practical setup) and the broader background piece on what ping monitoring covers. For ad-hoc checks while debugging, the in-browser ping, traceroute, and MTR tools run from cloud nodes — useful when your local machine is the suspect.


    Summary

    How to lower ping, in order:

    1. Measure baseline with ping -c 20. Note avg, max, and jitter (mdev / stddev). Establish whether you have a real problem (high latency, high jitter, packet loss) or just physics (long-distance destination).
    2. Find which hop is the cause with mtr -rwbzc 50 destination. Loss and latency that start at hop N and persist tell you where to look.
    3. Fix at that layer. Device → close apps, restart networking, check for CPU saturation. Wi-Fi → 5 GHz, closer to router, change channel, switch to wired. LAN → reboot router, update firmware, enable SQM. ISP → switch plan / open ticket if loss is inside their AS. Server → multi-region, CDN, app-layer optimisation.
    4. Don't apply blanket fixes. Wired Ethernet is the right answer for Wi-Fi-bound latency. It's the wrong answer for transit-bound latency. Match the fix to the layer.
    5. Monitor continuously. Ping degrades silently. ICMP + HTTP probes from multiple regions catch the regression before users do.
    6. Skip the placebos. Registry tweaks, "gaming routers", "internet booster" apps, disabling IPv6 — almost all are noise. Wired + SQM-capable router + monitoring is the durable setup.

    Latency is a sum of well-understood pieces. Once you know which piece is responsible, the fix is usually small, and most "high ping" mysteries dissolve in front of a single mtr run. The discipline that pays off is measuring before changing anything, and keeping a baseline so the next regression is obvious.