
Performance Metrics for CompTIA Network+ N10-009

Measuring network performance enables proactive identification of issues before users are impacted. CompTIA Network+ N10-009 tests the key performance metrics that network administrators must track, how to interpret them, and which tools measure each metric. Performance metrics appear throughout the Operations and Troubleshooting domains.


Key Network Metrics

Bandwidth utilization: the percentage of link capacity currently in use. High utilization (>70–80%) is a leading indicator of congestion and performance degradation. Measured via SNMP interface counters or NetFlow. Sustained high utilization indicates the need for link upgrade or traffic optimization.
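To make the calculation concrete, here is a minimal sketch of deriving utilization from two SNMP octet-counter samples. The function name and the single-wrap handling are illustrative, not part of any specific SNMP library.

```python
def utilization_pct(octets_start, octets_end, interval_s, link_bps,
                    counter_max=2**32):
    """Percent link utilization from two SNMP ifInOctets/ifOutOctets samples.

    Illustrative helper (not a library API). The modulo tolerates a single
    wrap of a 32-bit counter between samples.
    """
    delta_octets = (octets_end - octets_start) % counter_max
    bits_sent = delta_octets * 8            # counters are in octets (bytes)
    return bits_sent / (interval_s * link_bps) * 100

# 700 MB in 60 s on a 100 Mbps link is ~93% utilization: congestion risk.
print(round(utilization_pct(0, 700_000_000, 60, 100_000_000), 1))
```

On fast links, 32-bit counters can wrap more than once between polls, which is why 64-bit high-capacity counters (ifHCInOctets) are preferred in practice.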

Latency and RTT: round-trip time measured by ICMP ping. Baseline latency for a local LAN: <1ms. LAN to internet: typically 10–100ms. Satellite: 500–600ms. Increasing latency indicates congestion or routing changes. Jitter (latency variation) is measured separately and impacts VoIP/video.
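A simple way to quantify jitter from a series of ping RTTs is the mean absolute difference between consecutive samples. This is a sketch of that idea (production tools such as RTP receivers use a smoothed estimator instead):

```python
def mean_jitter(rtts_ms):
    """Jitter as the mean absolute difference between consecutive RTT samples.

    Illustrative calculation; real-time protocols typically use an
    exponentially smoothed version of this.
    """
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return sum(diffs) / len(diffs)

# Stable link: RTTs of 20, 22, 21, 25 ms give ~2.3 ms of jitter.
print(round(mean_jitter([20, 22, 21, 25]), 1))
```

VoIP is generally considered sensitive to jitter above roughly 30 ms, which is why jitter is tracked separately from average latency.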

Packet loss: the percentage of packets sent that do not reach the destination. Any packet loss on a LAN indicates a problem (physical fault, duplex mismatch, congestion). For internet connections, < 1% is acceptable. > 1% impacts TCP performance (triggers retransmissions); > 3% severely impacts VoIP. Measured with extended ping or specialized tools.
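The loss percentage and the thresholds above can be sketched as follows; the function names and message strings are illustrative only:

```python
def loss_pct(sent, received):
    """Packet loss as a percentage of packets sent."""
    return (sent - received) / sent * 100

def assess_loss(pct, on_lan=False):
    """Map a loss percentage onto the rule-of-thumb thresholds in the text."""
    if on_lan and pct > 0:
        return "investigate"     # any LAN loss indicates a problem
    if pct > 3:
        return "severe"          # VoIP severely impacted
    if pct > 1:
        return "degraded"        # TCP retransmissions triggered
    return "acceptable"

# 980 of 1000 pings answered: 2% loss, enough to degrade TCP throughput.
print(loss_pct(1000, 980), assess_loss(loss_pct(1000, 980)))
```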

Error counters: switch and router interfaces report error statistics — CRC errors (signal corruption / bad cable), runts (frames shorter than 64 bytes — collision artifact in half-duplex), giants (frames larger than 1518 bytes — misconfigured MTU), input/output errors. Increasing error counters indicate physical or configuration problems.
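The runt and giant size boundaries can be captured in a tiny classifier, assuming standard Ethernet framing with no jumbo-frame support (hypothetical helper, for illustration):

```python
MIN_FRAME = 64     # bytes: smaller frames are runts (collision artifacts)
MAX_FRAME = 1518   # bytes: larger frames are giants (standard Ethernet)

def classify_frame(size_bytes):
    """Classify an Ethernet frame by size against standard limits."""
    if size_bytes < MIN_FRAME:
        return "runt"
    if size_bytes > MAX_FRAME:
        return "giant"
    return "normal"

print(classify_frame(60), classify_frame(1518), classify_frame(1600))
```

Note that on networks intentionally configured for jumbo frames, frames up to 9000 bytes are legitimate and would not be counted as giants.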

Device Performance Metrics

CPU utilization: high CPU on a router or switch can indicate a routing protocol convergence event, a DDoS attack, excessive ACL processing, or failing hardware. Sustained CPU above 80% requires investigation. Memory utilization: insufficient memory causes device instability, routing table truncation, or crashes. Monitor free-memory trends.

Interface statistics: packets per second, bits per second, error rates, drops. Drops indicate congestion (output queue drops) or policy drops (ACL drops, QoS policing). Interface error counters that increment while the link is up indicate physical problems. Temperature sensors on managed devices alert to thermal issues that cause hardware degradation.

Key exam facts — Network+

  • Bandwidth utilization > 70–80% indicates congestion risk
  • Packet loss > 1% degrades TCP; > 3% severely impacts VoIP; LAN packet loss should be near 0%
  • CRC errors = signal corruption — check cables and SFPs
  • Runts = frames < 64 bytes (collision artifact); giants = frames > 1518 bytes (MTU issue)
  • Jitter = variation in latency; impacts VoIP and real-time applications
  • High CPU on network device: check for routing convergence, DDoS, or ACL overload
  • Output queue drops = congestion (more traffic than link can handle at that moment)

Common exam traps

If ping works, the network is performing well

Ping only verifies basic Layer 3 connectivity. Performance issues such as congestion, jitter, and packet loss may not be visible in a basic ping test, yet they significantly impact application performance.

Practice questions — Performance Metrics

These questions are representative of what you will see on Network+ exams. The correct answer and explanation are shown immediately below each question.

Q1. A network administrator sees increasing CRC errors on a switch interface. What is the most likely cause?

A. High CPU on the switch
B. A faulty or damaged network cable
C. Incorrect VLAN configuration
D. Insufficient DHCP leases

Correct answer: B. Explanation: CRC (Cyclic Redundancy Check) errors indicate frames arriving with corrupted data. The most common cause is a physical layer problem: a damaged, bent, or poorly crimped cable, a bad SFP transceiver, or a faulty NIC. CRC errors are a Layer 1/2 symptom. High CPU does not cause CRC errors; VLAN and DHCP are unrelated.

Frequently asked questions — Performance Metrics

What is MTU and how does it relate to network performance?

MTU (Maximum Transmission Unit) is the largest packet size that can be transmitted on a link without fragmentation. Standard Ethernet MTU is 1500 bytes. Jumbo frames use 9000-byte MTU for storage and high-performance networks. MTU mismatches cause fragmentation or black-hole routing — packets too large for a link are fragmented (IP) or dropped (IPv6). Path MTU Discovery (PMTUD) detects the maximum MTU across an end-to-end path.
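The relationship between MTU and usable payload can be shown with a short calculation. This is a sketch assuming IPv4 and TCP headers with no options; the constant and function names are illustrative:

```python
IP_HEADER = 20    # bytes, IPv4 without options
TCP_HEADER = 20   # bytes, TCP without options

def tcp_mss(mtu):
    """Largest TCP payload per segment that fits in one packet at this MTU."""
    return mtu - IP_HEADER - TCP_HEADER

# Standard Ethernet (1500) allows a 1460-byte MSS; jumbo frames (9000)
# allow 8960 bytes, reducing per-packet header overhead.
print(tcp_mss(1500), tcp_mss(9000))
```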
