Voice Quality That Users Feel: Reading MOS, Jitter And Loss In Control Hub

Updated: Aug 18, 2025

Reading Time - 4 mins

Start With The Calls Users Complain About 

A hiring panel in Bengaluru hears robot-like audio during final rounds. Sales in Noida gets silence after the transfer. Support in Pune sees long pauses before a voice comes through. These are not vague gripes; they are symptoms you can measure and fix. Webex Control Hub already captures the right signals. Your job is to set targets, read the traces, and turn noise into a short list of actions. 

What MOS Means, And What It Does Not 

Mean Opinion Score (MOS) is a five-point scale for perceived audio quality. One is bad, five is excellent. It is simple to explain to non-engineers, which makes it useful in reviews. MOS is not a protocol metric; it is a modelled outcome. Treat it as the result of the path your packets took, not a setting you can tweak. Operators often aim for four or better as a steady state. MOS sits on top of network facts, so you still read loss, jitter, and latency to find the cause.  
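Control Hub computes the score for you, but seeing the arithmetic helps. The sketch below uses the widely cited simplified form of the ITU-T G.107 E-model: it folds latency, jitter, and loss into an R-factor, then maps R to MOS. The coefficients are the common published simplifications, not what Control Hub runs internally, and the parameter names are mine.

```python
def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Rough MOS from a simplified ITU-T G.107 E-model. Illustrative only."""
    # Jitter buffers add delay, so jitter is penalised as extra latency.
    effective_latency = latency_ms + 2 * jitter_ms + 10.0

    # Delay impairment: gentle below ~160 ms, steeper above.
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10

    # Each per cent of packet loss costs roughly 2.5 R points.
    r = max(0.0, min(100.0, r - 2.5 * loss_pct))

    # Standard G.107 mapping from R-factor to MOS.
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

At eighty milliseconds of latency, thirty of jitter, and one per cent loss this lands near 4.3; push loss to five per cent and it falls to roughly 3.9. In this model a single point of loss moves the needle more than a few milliseconds of delay.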

Jitter, Loss, Latency: Which One Hurts Your Users 

Loss drops packets and creates gaps that no codec can hide for long. Jitter makes arrival times uneven and forces buffers to guess. Latency adds delay that people notice on handoffs and double-talk. You cannot fix all three in one stroke. Read them in context; the sketch after this list shows one way to triage. 

  • Loss above one per cent during peaks often aligns with WiFi contention, ISP congestion, or a busy local gateway. Look for patterns by site and by hour. 
  • Jitter spikes on shared WiFi, roaming between access points, or when buffers run small. You will see it vary within a single call. 
  • Latency stays high on long paths, VPN hairpins, or when path optimisation fails. Expect latency to be lowest on ICE-optimised on-net calls. 
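If you export per-call averages from Control Hub, the same reading becomes a mechanical first pass. A minimal triage sketch; the thresholds echo the rules of thumb in this article and the field names are illustrative:

```python
def triage(loss_pct: float, jitter_ms: float, latency_ms: float) -> list[str]:
    """First-pass culprit list from per-call averages. Tune thresholds per site."""
    suspects = []
    if loss_pct > 1.0:
        suspects.append("loss: WiFi contention, ISP congestion, busy local gateway")
    if jitter_ms > 30.0:
        suspects.append("jitter: shared WiFi, AP roaming, small buffers")
    if latency_ms > 200.0:
        suspects.append("latency: long path, VPN hairpin, missed ICE optimisation")
    return suspects or ["within targets: check the far side or the endpoint"]
```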

How Control Hub Grades A Call 

Webex Control Hub shows both the user view and the hop view. A call leg is marked good when its end-to-end metrics sit under the defined thresholds. The hop view grades the segment between the cloud and the device. These guardrails tell you where to look first and when to act. 

  • Endpoint leg good: packet loss under 5%, latency under 400 ms, jitter under 150 ms 
  • Cloud hop good: packet loss under 2.5%, latency under 200 ms, jitter under 75 ms 

These thresholds come from Webex Calling analytics and troubleshooting in Control Hub. They also explain why a call can feel poor even when one side looks fine. 
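The guardrails translate directly into code, which is handy when you grade exported call legs in bulk. A sketch; the dataclass and constants are mine, not a Control Hub API:

```python
from dataclasses import dataclass

@dataclass
class LegStats:
    loss_pct: float
    latency_ms: float
    jitter_ms: float

# Threshold sets from the Control Hub guardrails quoted above.
ENDPOINT_GOOD = LegStats(loss_pct=5.0, latency_ms=400.0, jitter_ms=150.0)
CLOUD_HOP_GOOD = LegStats(loss_pct=2.5, latency_ms=200.0, jitter_ms=75.0)

def is_good(leg: LegStats, limits: LegStats) -> bool:
    """A leg is good only when every metric sits under its limit."""
    return (leg.loss_pct < limits.loss_pct
            and leg.latency_ms < limits.latency_ms
            and leg.jitter_ms < limits.jitter_ms)
```

A leg with three per cent loss and 250 ms latency passes ENDPOINT_GOOD yet fails CLOUD_HOP_GOOD, which is the mechanics behind a call feeling poor while one view stays green.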

A One-Page Service Level Objective (SLO) For Voice 

Publish a page that leaders can read in two minutes. 

  • Availability: uptime by site and time to restore 
  • Experience: median MOS, jitter, loss by site and device 
  • Change quality: success rate, rollbacks, incidents tied to change 
  • Operations: queue answer times and after-hours handling 
  • Hygiene: patch currency and open advisories 

Set breach thresholds, route alerts to owners, and review trends weekly. Use before and after snapshots to prove each fix. 
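One way to keep that page honest is to hold the SLO as data and diff it against weekly measurements. A sketch with invented target values; tune them to your own commitments:

```python
# Hypothetical one-page voice SLO as data; every target here is an example.
VOICE_SLO = {
    "availability": {"uptime_pct": 99.9, "time_to_restore_min": 60},
    "experience": {"median_mos": 4.0, "p90_jitter_ms": 30, "loss_pct": 1.0},
    "change_quality": {"success_rate_pct": 98},
    "operations": {"queue_answer_s": 30},
    "hygiene": {"max_patch_age_days": 30, "open_advisories": 0},
}

# Metrics where bigger is better; everything else is treated as a ceiling.
FLOORS = {"uptime_pct", "median_mos", "success_rate_pct"}

def breaches(measured: dict, slo: dict = VOICE_SLO) -> list[str]:
    """List the SLO metrics a site breached, for the weekly review."""
    out = []
    for area, targets in slo.items():
        for metric, target in targets.items():
            value = measured.get(area, {}).get(metric)
            if value is None:
                continue  # not measured this week; surface that separately
            bad = value < target if metric in FLOORS else value > target
            if bad:
                out.append(f"{area}.{metric}: {value} vs target {target}")
    return out
```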

From Metric To Action: A Troubleshooting Playbook 

Pattern 1, many poor legs on WiFi 

Look for retries and low SNR on the access layer. Prioritise voice, fix roaming, and remove channel overlap. If the branch has Meraki, use Wireless Health and client timelines to match events. 

Pattern 2, off-net calls only 

If on-net calls look good but PSTN calls dip, inspect the carrier path. Check Local Gateway health, trunk errors, and route groups. Validate number porting and temporary forwards during cutover. 

Pattern 3, one user always poor 

Endpoint firmware, home WiFi, or VPN hairpin often explains it. Compare send and receive legs, replace the client, then move to path optimisation. 

Pattern 4, bursty jitter during peak 

Inspect WAN QoS and bandwidth policy for collaboration traffic. Move large file sync out of business hours. Confirm no double shaping between SD-WAN and ISP. 

Pattern 5, branch link looks fine, but users complain 

Open the hop view. If the cloud hop is green but the endpoint leg is red, fix the last thirty metres first: switch port, cabling, or the client NIC. 
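Pattern 5 is mechanical enough to script against exported grades. A small sketch; the function and its strings are mine, the logic is the hop-view reading above:

```python
def where_to_look(cloud_hop_good: bool, endpoint_leg_good: bool) -> str:
    """Localize the fault from the two Control Hub grades (Pattern 5 logic)."""
    if cloud_hop_good and not endpoint_leg_good:
        return "last thirty metres: switch port, cabling, client NIC"
    if not cloud_hop_good and endpoint_leg_good:
        return "WAN or ISP path towards the cloud"
    if not cloud_hop_good:
        return "both segments degraded: start at the site uplink"
    return "path looks clean: check the far end or the device itself"
```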

Alerts You Actually Need 

Alert fatigue kills response time. Keep a short list. 

  • Call legs with MOS below the target for ten per cent or more of the minutes in a day 
  • Site with packet loss above one per cent for any fifteen-minute window 
  • 90th-percentile jitter above thirty milliseconds for a site 
  • Failure codes that spike after a change window 
  • ICE optimisation that drops below an agreed share of on-net calls 

Alerts should open tickets with context. Include user, site, IP, ISP, codec, and screenshots where useful. Close the loop in the weekly review. 
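The first three rules are easy to express over exported per-minute stats, which keeps the alert logic auditable. A sketch, assuming one record per minute per site with invented field names:

```python
def daily_alerts(minutes: list[dict], mos_target: float = 4.0) -> list[str]:
    """Evaluate the first three alert rules over one site-day.

    Each record is assumed to look like:
    {"mos": 4.1, "loss_pct": 0.4, "jitter_ms": 12}
    """
    alerts = []

    # MOS below target for ten per cent or more of the minutes in a day.
    poor = sum(1 for m in minutes if m["mos"] < mos_target)
    if minutes and poor / len(minutes) >= 0.10:
        alerts.append(f"MOS under {mos_target} for {poor}/{len(minutes)} minutes")

    # Packet loss above one per cent for any fifteen-minute window.
    for i in range(max(len(minutes) - 14, 0)):
        window = minutes[i:i + 15]
        if sum(m["loss_pct"] for m in window) / 15 > 1.0:
            alerts.append(f"loss over 1% in the window starting at minute {i}")
            break

    # Jitter above thirty milliseconds at the 90th percentile.
    jitter = sorted(m["jitter_ms"] for m in minutes)
    if jitter and jitter[min(len(jitter) - 1, int(len(jitter) * 0.9))] > 30.0:
        alerts.append("p90 jitter over 30 ms")

    return alerts
```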

A Cutover You Can Defend 

Days 1 to 30 

Inventory numbers, trunks, and devices. Write site codes and E.164 rules. Set your SLO and alert plan. Pick a pilot with clear sponsorship. 
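While you write the E.164 rules, it pays to pin the normalisation down in code so every wave applies the same logic. A simplified sketch for Indian numbers, assuming ten-digit national numbers with an optional 0 trunk prefix; a real dial plan needs more cases:

```python
import re

def to_e164_in(raw: str) -> str:
    """Normalise common Indian dial strings to E.164 (+91XXXXXXXXXX).

    Simplified: handles the 0 trunk prefix and 91/+91 country prefixes
    on ten-digit national numbers only.
    """
    digits = re.sub(r"\D", "", raw)  # strip spaces, dashes, brackets
    if digits.startswith("0") and len(digits) == 11:
        digits = digits[1:]          # drop trunk prefix
    if digits.startswith("91") and len(digits) == 12:
        digits = digits[2:]          # drop country code
    if len(digits) != 10:
        raise ValueError(f"cannot normalise {raw!r}")
    return "+91" + digits
```

For example, to_e164_in("080-2345 6789") and to_e164_in("+91 80 2345 6789") both return "+918023456789".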

Days 31 to 60 

Build the pilot. Port or claim numbers. Validate inbound and outbound, queues, voicemail, and executive assistant. Watch MOS, jitter, and loss in Control Hub. Fix, template, and sign off. 

Days 61 to 90 

Move in waves. Add quality alerts. Run the weekly scorecard. Tune bandwidth policy for collaboration and contact centre paths. Close documentation and hand over runbooks. 

What To Ask Your Operator 

  • Show a live demo of Control Hub with real call legs and hop views 
  • Share the change template, test plan, and rollback criteria 
  • Commit to response and restore timers for P1 and P2 incidents 
  • Provide sample RCAs with evidence, not summaries 
  • Prove carrier escalation paths and number porting timelines 

Where Proactive Fits 

Proactive runs Webex Calling like an operator. We set dial plan standards for India, run changes with real tests and rollbacks, and close incidents with evidence. You keep policy and oversight. We keep the roster, the runbooks, and the 24×7 watch. 

Make The Next Move 

Start with one site, a fixed change window, and a scorecard that reports the gains. Standardise the dial plan. Run the pilot. Stabilise, then scale. 

Contact Us

We value the opportunity to interact with you. Please feel free to get in touch with us.