// Exam Domain Weights (SK0-005)

1. Server Hardware Installation and Management — 18%
2. Server Administration — 30%
3. Security and Disaster Recovery — 24%
4. Troubleshooting — 28%
CH 01

Server Hardware

Obj 1.1
Racking & Enclosures
  • U (Unit) — Standard rack height unit = 1.75 inches (44.45mm). The answer to any "unit of rack measurement" question is always U, not RU or HU.
  • Rack Widths — 19-inch and 23-inch are the two standard widths. 19" is by far the most common.
  • Common Sizes — 24U, 42U (data center standard), 48U enclosures.
  • 1U Server — 1.75" tall. 300–350W power draw. Dense, space-efficient. No optical drive bays.
  • 2U Server — 3.5" tall. 350–400W. Dual-socket. Room for more drives and expansion cards.
  • 4U Server — 7" tall. 600–1000W. Quad-socket. Maximum expandability.
  • Tower Server — Standalone upright unit. Like a desktop but server-grade. Not rack-mounted. Good for small offices.
  • Blade Server — Thin modules that slide into a chassis; the chassis handles shared power, cooling, and networking. Avg chassis = 4500W; ~320W per blade (14-blade chassis). Blades are hot-swappable. Efficiency advantage: 10 blades in one enclosure share 2 PSUs vs. 10 standalone servers needing 20 individual PSUs — less heat, less power, less space. Ideal for web servers, virtualization hosts, and clustering.
A 1U = 1.75". A 4U = 7". Rack math: a 42U rack can hold 42 × 1U servers, or 21 × 2U servers, etc. This shows up in scenario questions.
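The rack math above can be sketched as a quick check (a minimal illustration; the helper name is mine, not exam material):

```python
RACK_UNIT_INCHES = 1.75  # 1U = 1.75 in (44.45 mm)

def servers_per_rack(rack_u: int, server_u: int) -> int:
    """How many servers of a given U height fit in a rack of rack_u units."""
    return rack_u // server_u

print(servers_per_rack(42, 1))  # 42 x 1U servers
print(servers_per_rack(42, 2))  # 21 x 2U servers
print(servers_per_rack(42, 4))  # 10 x 4U servers (2U left over)
```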
Cooling Management
  • Baffle — Inside the chassis. Channels airflow from intake to specific components.
  • Shroud — Outside the chassis or around a component. Directs air externally.
  • Hot Aisle / Cold Aisle — Data center airflow design. Racks oriented so fronts face each other (cold aisle) and backs face each other (hot aisle). Cold air drawn in from front, hot air exhausted from rear. Prevents recirculation of hot air.
  • CRAC Unit (Computer Room Air Conditioning) — Purpose-built AC for server rooms. Sizing formula: IT load (watts) × 1.3 = BTU rating required. IT load = total power consumption of all devices. Size generously — includes body heat from administrators.
  • Environmental Sensors — Temperature, humidity, water detection, door closure, airflow. Communicate via SNMP or proprietary interfaces. Alert before downtime occurs.
  • Liquid Cooling — For high-density, high-heat workloads. More efficient than air for extreme heat loads.
  • PUE (Power Usage Effectiveness) — Total facility power ÷ IT equipment power. PUE of 1.0 = perfect. Google averages ~1.1. A PUE of 3 means the datacenter uses 3× what the servers alone need.
CRAC sizing: IT load × 1.3 = BTU needed. Target temp: 68–71°F. Max range: 50–82°F. Target humidity: 40–60% rH. These exact numbers come from the CertMaster textbook and are exam-testable.
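The CRAC sizing rule and the PUE ratio above can be sketched as simple arithmetic (illustrative helpers; the ×1.3 rule is the textbook's, the function names are mine):

```python
def crac_btu(it_load_watts: float) -> float:
    """CertMaster sizing rule from the notes: IT load (W) x 1.3 = BTU rating."""
    return it_load_watts * 1.3

def pue(total_facility_watts: float, it_watts: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_watts / it_watts

print(crac_btu(10_000))      # 13000.0 BTU rating for a 10 kW IT load
print(pue(33_000, 11_000))   # 3.0 -- the facility draws 3x what IT alone needs
```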
Power Systems
  • Voltage — Pressure/force of electricity (V). 110V standard US. 220V for higher-load circuits. 48V common in telecom/datacenter.
  • Amperage — Volume/quantity of electricity (A).
  • Wattage — Volts × Amps = Watts. Total power consumed.
  • Volt-Amperes (VA) — Amps × Volts = VA. Total potential draw. Used for UPS and PDU sizing.
  • Actual Power — VA × 0.67 = estimated actual power draw. Accounts for power factor.
  • kW — Divide watts by 1,000. 350W = 0.35kW.
  • PDU — Power Distribution Unit. Conditions, protects, distributes power. NFPA 80% rule: never exceed 80% of rated capacity. 30A PDU → 24A max.
  • N+1 Redundancy — Add one extra component beyond what's needed. 2 PSUs needed → install 3. One failure = still operational.
  • 2N Redundancy — Double everything. 2 PSUs needed → install 4 on two separate circuits. Highest fault tolerance.
  • UPS — Uninterruptible Power Supply. Battery backup. Max safe load = VA × 0.8. Provides graceful shutdown time during outages.
  • Single-Phase — Standard 110/120V. Home and small office.
  • Three-Phase — 277/480V (or 120/208V). Data centers. More efficient power delivery at scale.
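The wattage, VA, 80%-rule, and UPS formulas above as a short sketch (function names are illustrative, not exam terms):

```python
def watts(volts: float, amps: float) -> float:
    return volts * amps            # W = V x A

def actual_power(va: float) -> float:
    return va * 0.67               # estimated real draw from VA (power factor)

def pdu_max_amps(rated_amps: float) -> float:
    return rated_amps * 0.8        # NFPA 80% rule

def ups_max_load(va: float) -> float:
    return va * 0.8                # max safe UPS load

print(watts(110, 5))        # 550 W
print(actual_power(1000))   # 670.0 W estimated from 1000 VA
print(pdu_max_amps(30))     # 24.0 A max on a 30 A PDU
print(ups_max_load(1500))   # 1200.0 VA max safe load
```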
Power Connectors
  • NEMA — National Electrical Manufacturers Association. Sets connector standards in the US.
  • Edison — Standard 3-prong grounded or 2-prong ungrounded plug. The outlet in your home.
  • Twist-Locking (NEMA L) — Insert and twist to lock. Prevents accidental disconnection. Used in server rooms where vibration or bumping is a risk.
  • Midplane/Backplane — Internal connectors inside a blade chassis. A blade slides in and connects to the backplane for power and data.
Network Cabling — Copper
Cable | Max Speed | Max Distance | Notes
CAT5 | 100 Mbps | 100m | Legacy. Not recommended for new installs.
CAT5e | 1 Gbps | 100m | Most common existing infrastructure.
CAT6 | 10 Gbps | 55m (at 10G) / 100m (at 1G) | Common modern standard.
CAT6a | 10 Gbps | 100m | Augmented CAT6. Full 10G at full distance.
CAT7 | 10 Gbps+ | 100m | Shielded. GG45 or TERA connector.

Cable Types

  • Straight-Through — Pin 1 → Pin 1. PC to switch, PC to router. Different device types. Most common.
  • Crossover — Transmit ↔ Receive crossed. PC to PC, switch to switch. Direct connection between same device types. (Modern switches use Auto-MDIX, so crossover is rarely needed now.)
  • Rollover / Console — Pin 1→8, 2→7, etc. (fully reversed). RJ-45 to DB9/USB adapter. Used to access a Cisco device CLI via the console port.
Network Cabling — Fiber

Connectors

  • LC — Local Connector. Small form-factor. Most common in modern enterprise. Used with SFP transceivers.
  • SC — Subscriber Connector. Push/pull square connector. Older standard.
  • ST — Straight Tip. Bayonet twist-lock. Older, less common now.
  • MT-RJ — Duplex small connector. Less common.

Fiber Modes

  • Single-Mode (SMF) — 8.3µm core. Long distance (km). Laser light source. Yellow jacket. Used for inter-building / WAN.
  • Multimode (MMF) — 50µm or 62.5µm core. Short distance (meters to ~550m). LED or VCSEL source. Orange or aqua jacket. Used intra-building / data center.

Transceivers

  • SFP — Small Form-Factor Pluggable. Hot-swappable. 1 Gbps. Single fiber port.
  • SFP+ — Enhanced SFP. 10 Gbps. Most common in modern servers and switches.
  • QSFP+ — Quad SFP. 40 Gbps (4×10G lanes). Used in 40GbE infrastructure, spine-layer switches, and high-speed server connections. (The original QSFP spec ran 4×1G lanes; "QSFP" is commonly used loosely for QSFP+.)
  • QSFP28 — 100 Gbps (4×25G lanes). Current high-speed standard in data centers.
Server Components — CPU
  • Socket Types — LGA (Land Grid Array) is Intel's; AMD uses different socket standards. The socket determines CPU compatibility. Never mix socket types.
  • Architecture — x86 (32-bit), x64 (64-bit), ARM (mobile/embedded). Server+ focuses on x64.
  • L1 Cache — Fastest, smallest cache. Built directly into each core. Per-core. Typical: 32–64KB per core.
  • L2 Cache — Larger than L1, slower. Per-core or shared between pairs. Typical: 256KB–1MB per core.
  • L3 Cache — Largest, slowest cache. Shared across all cores on the CPU. Typical: 8–64MB per socket.
  • Multi-Socket — Server motherboard with 2 or 4 CPU sockets. Each CPU is a separate physical processor. Used in high-performance servers.
  • NUMA — Non-Uniform Memory Access. In multi-socket systems, each CPU has "local" RAM it accesses faster and "remote" RAM (through the other CPU's controller) it accesses slower. OSes and hypervisors must be NUMA-aware for best performance.
  • CPU Affinity — Binding a process or VM to a specific CPU core or socket. Improves cache performance. Relevant in virtualization tuning.
NUMA is a Server+ exam topic — know that accessing remote memory in a multi-socket system is slower than local memory, and that NUMA-aware software accounts for this.
Server Components — Memory
  • DIMM — Dual Inline Memory Module. Standard server/desktop RAM form factor. 64-bit wide data bus.
  • SODIMM — Small Outline DIMM. Laptops and some small-form-factor servers. Not interchangeable with DIMM.
  • ECC RAM — Error Correcting Code. Detects and corrects single-bit memory errors silently. Required in enterprise servers. Prevents data corruption and crashes from random bit flips.
  • Registered (RDIMM) — Has a register chip between the memory controller and DRAM. Allows more DIMMs per channel. Slightly higher latency. Standard in servers.
  • Unregistered (UDIMM) — No register chip. Faster but limited to fewer DIMMs per channel. Common in workstations, not servers.
  • LRDIMM — Load-Reduced DIMM. Uses a buffer chip to reduce electrical load on the memory controller. Allows even more DIMMs per system than RDIMM. Used in high-density memory configurations.
  • DDR Generation — DDR4 was standard for years; DDR5 is current. Generations are NOT backward compatible — different slot notch positions.
  • Memory Channels — Dual-channel and quad-channel configurations. Populate matching slots symmetrically for maximum bandwidth.
ECC vs non-ECC: ECC corrects single-bit errors, detects double-bit errors. Servers require ECC. Non-ECC is cheaper and faster but unreliable for 24/7 operation. RDIMM vs LRDIMM: same function, LRDIMM allows more capacity per system.
PCIe Expansion Slots
  • PCIe Lane — Each lane is a point-to-point serial link. More lanes = more bandwidth.
  • x1 / x4 / x8 / x16 — Number of lanes. x16 = 16 lanes (used by GPUs). x8 = common for NICs and RAID controllers. x4 = NVMe SSDs. x1 = basic cards.
  • PCIe 3.0 — ~1 GB/s per lane. x16 slot = ~16 GB/s. Widely deployed.
  • PCIe 4.0 — ~2 GB/s per lane. Double PCIe 3.0 bandwidth. x16 = ~32 GB/s. Current standard.
  • PCIe 5.0 — ~4 GB/s per lane. Emerging. Used in the latest NVMe drives and GPUs.
  • Backward Compatible — A PCIe 3.0 card works in a PCIe 4.0 slot at 3.0 speeds. Slots are physically compatible across generations.
A RAID controller in an x8 slot physically fits in an x16 slot and works fine — the card runs at its rated x8 speed. Physical slot size does not have to match card lane count (as long as slot is equal or larger).
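The per-lane bandwidth figures above multiply out directly; a sketch using the approximate numbers from the notes (the lookup table and helper are illustrative):

```python
# Approximate per-lane throughput (GB/s) by PCIe generation, per the notes.
PER_LANE_GBPS = {3: 1.0, 4: 2.0, 5: 4.0}

def slot_bandwidth(gen: int, lanes: int) -> float:
    """Approximate total bandwidth of a slot: per-lane rate x lane count."""
    return PER_LANE_GBPS[gen] * lanes

print(slot_bandwidth(3, 16))  # ~16 GB/s for a PCIe 3.0 x16 slot
print(slot_bandwidth(4, 16))  # ~32 GB/s for a PCIe 4.0 x16 slot
# A 3.0 x8 card in a larger slot still runs at its rated x8 speed:
print(slot_bandwidth(3, 8))   # ~8 GB/s
```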
Converged Network Adapters (CNA)

A CNA combines a standard NIC and a Fibre Channel HBA (Host Bus Adapter) into a single card. It can carry both standard Ethernet traffic AND Fibre Channel storage traffic on the same physical adapter and cable — enabling FCoE. Reduces the number of cards needed and simplifies cabling in converged infrastructure.

CNAs were a slide that extracted blank in v1. Key point: CNA = NIC + HBA in one card. Used specifically with FCoE to converge storage and network traffic onto a single adapter.
KVM & Console Management
  • KVM Switch — Keyboard-Video-Mouse switch. One set of input devices controlling multiple servers. Saves physical space.
  • IP KVM — KVM accessible over the network. Full remote desktop-like access even during POST or OS failure.
  • Crash Cart — Mobile cart with monitor, keyboard, and mouse for direct local access to a failed server. Last resort when network and OOB management are both down.
  • Serial Console — Direct serial (rollover cable) connection to a device CLI. Used for initial setup and when all other management options fail.
Safety
  • Rack Balancing — Install heaviest equipment at bottom. Prevents tipping.
  • Floor Load Limitations — Data center floors have weight limits per square foot. Raised floors have specific ratings.
  • ESD (Electrostatic Discharge) — Use wrist straps, mats, and antistatic bags. Ground yourself before touching components.
  • Proper Lifting — Keep weight on centerline, close to body, at waist level. Do not twist. Push rather than pull when possible.
CH 02

Installing & Configuring Servers

Obj 2.1 / 2.3 / 2.5
OS Installation Types
  • GUI / Attended — Interactive wizard-based install. Simple but slow; requires a human at the keyboard the entire time.
  • Core Install — CLI-only. No GUI. Smaller attack surface, smaller disk footprint, lower memory usage. Preferred for production servers. Managed remotely via SSH or PowerShell remoting.
  • Bare Metal — OS installed on an empty drive with no existing OS. Requires bootable media (USB, DVD, network).
  • Slipstreamed / Unattended — Automated using an XML answer file (unattend.xml). No human interaction required after launch. Used for mass deployments.
  • PXE Boot — Pre-boot Execution Environment. Server boots from the network using DHCP (DORA) + TFTP to download the NBP (Network Bootstrap Program). No local bootable media needed.
  • Imaging / Cloning — Copy a pre-configured OS image to new hardware. Fastest method for deploying many identical servers.
  • Virtualized Install — OS installed inside a VM on an existing hypervisor. No new physical hardware required.
Windows Server 2022 minimum: 1.4GHz 64-bit CPU · 512MB RAM · 32GB disk (Core). Always check HCL before installing — unsupported hardware = unsupported OS install.
File System Types
  • NTFS — Windows standard since NT. Supports permissions/ACLs, EFS encryption, journaling, compression, and large files (up to 8PB volumes). Replaced FAT32.
  • FAT32 — Legacy. Max file size 4GB. Max volume 32GB (in the Windows format tool). Still used on bootable USB drives and older systems.
  • ReFS — Resilient File System. Microsoft's modern evolution of NTFS. Built-in integrity checking, better resilience. Cannot be used as a boot volume.
  • EXT4 — Linux default filesystem. Max file: 16TB. Max volume: 1EB. Supports journaling. Not natively readable by Windows.
  • XFS — Linux filesystem. Excellent performance for large files. Default on RHEL 7+.
  • ZFS — Zettabyte File System. Open source (originally Sun). Max file: 16EB. Max volume: 256ZB. Built-in RAID (RAIDZ), snapshots, checksumming, deduplication. Used in TrueNAS.
  • VMFS — VMware File System. Used exclusively by VMware vSphere/ESXi for datastore volumes. Max 64TB. Optimized for concurrent VM I/O.
Your TrueNAS runs ZFS — you already know RAIDZ1 = RAID5 equivalent, snapshots are instant, and ZFS self-heals bit rot by comparing data against checksums. That homelab experience directly maps to enterprise ZFS deployments.
Volume & Partition Types
  • Basic Disk — Standard partition table (MBR or GPT). Up to 4 primary partitions (MBR) or 128 (GPT). Default for new disks.
  • Dynamic Disk — Windows-specific. Supports advanced volume types (spanned, striped, mirrored) without hardware RAID. Managed by Windows Disk Management.
  • Simple Volume — One partition on one disk. Standard basic volume.
  • Spanned Volume — Single volume spread across 2–32 disks. No redundancy, no speed gain. Fails if any disk fails.
  • Striped Volume — Software RAID 0 across 2+ disks. Fast, no redundancy.
  • Mirrored Volume — Software RAID 1 across 2 disks. Fault tolerant. Requires Dynamic Disk.
  • RAID-5 Volume — Software RAID 5 on Windows. Requires 3+ disks. Fault tolerant. Windows Server only.
  • MBR — Master Boot Record. Legacy partition scheme. Max 4 primary partitions. Max 2TB disk size. BIOS boots from MBR.
  • GPT — GUID Partition Table. Modern scheme. Max 128 partitions. Supports drives >2TB. Required for UEFI boot. Includes a protective MBR for backward compatibility.
Logical Volume Management (LVM) — Linux

LVM adds a layer of abstraction between physical storage and the filesystem, allowing flexible resizing and management of storage without repartitioning.

  • PV (Physical Volume) — The actual physical disk or partition. Lowest level. Created with pvcreate.
  • VG (Volume Group) — Pool of storage created from one or more PVs. Created with vgcreate.
  • LV (Logical Volume) — Chunk of storage carved from a VG. This is what gets formatted with a filesystem and mounted. Created with lvcreate.
  • PE (Physical Extent) — Fixed-size chunk (default 4MB) that LVM uses internally. LVs are made up of PEs.
The LVM hierarchy: Physical Disk → Physical Volume (PV) → Volume Group (VG) → Logical Volume (LV) → Filesystem → Mount Point. LVM allows you to resize LVs on the fly, add disks to VGs, and take snapshots — without unmounting or repartitioning.
Server Roles
  • Domain Controller (DC) — Runs Active Directory. Authenticates users, enforces Group Policy, manages domains. High RAM and fast disk I/O priority. Critical — redundant DCs should always exist.
  • DNS Server — Resolves hostnames to IP addresses. High network I/O. Often co-located with a DC. Failure breaks most network services.
  • DHCP Server — Assigns IP configurations to clients. Low resource needs. Should be redundant (failover DHCP).
  • File Server — Centralized file storage and sharing. High disk I/O and capacity priority. SMB/CIFS (Windows), NFS (Linux).
  • Web Server — Hosts web applications. High network I/O. IIS (Windows), Apache/Nginx (Linux).
  • Database Server — Hosts databases (SQL Server, MySQL, Oracle). High RAM and disk IOPS priority. Often benefits most from RAID 10.
  • Print Server — Manages print queues and printer sharing. Generally low resource needs.
  • Mail Server — Handles email (Exchange, Postfix). High disk I/O. Uses SMTP (25), POP3 (110), IMAP (143).
  • Application Server — Hosts business applications (ERP, CRM). Resource needs vary. May need to be near the database server.
Virtualization
  • Type 1 Hypervisor — Bare metal. Runs directly on hardware with no host OS. Best performance. Examples: VMware ESXi, Microsoft Hyper-V (server), Proxmox, Xen.
  • Type 2 Hypervisor — Hosted. Runs on top of a host OS. Easy to set up, lower performance. Examples: VMware Workstation, VirtualBox, Parallels.
  • Host — Physical machine running the hypervisor. Provides hardware resources.
  • Guest — Virtual machine running inside the hypervisor. Thinks it has dedicated hardware but actually shares host resources.
  • vSwitch — Virtual switch inside the hypervisor. Connects VMs to each other and to physical NICs. Behaves like a physical switch.
  • Port Group — VMware term for a named configuration on a vSwitch. VMs connect to port groups, which are associated with VLANs.
  • Overprovisioning — Assigning more virtual resources than physical resources exist. Works if VMs don't all peak simultaneously. Risky if workloads spike together.
  • Thin Provision — Disk space allocated on demand. Efficient storage use but can run out unexpectedly if overcommitted.
  • Thick Lazy Zeroed — Full space allocated at creation, zeroed on first write. Fast to create. Best for most general VMs.
  • Thick Eager Zeroed — Full space allocated AND zeroed at creation time. Slowest to create, best runtime I/O performance. Use for databases and latency-sensitive VMs.
Container-Based Virtualization
  • Container — Lightweight virtualization at the OS level rather than the hardware level. Containers share the host OS kernel — no separate guest OS per container. Much faster to start and more resource-efficient than VMs.
  • Container vs VM — VM = full OS per guest (higher overhead, stronger isolation). Container = shared kernel (lower overhead, faster, less isolation). VMs are better for security isolation; containers are better for scalability and speed.
  • Docker — Most common container platform. Packages applications with all dependencies into a portable image. Runs on Linux or Windows.
  • Kubernetes (K8s) — Container orchestration platform. Manages deployment, scaling, and failover of containers across multiple hosts.
  • Container Image — Read-only template used to create containers. Stored in a registry (Docker Hub). Images are layered — each layer adds changes on top of the previous.
Containers = OS-level virtualization (shared kernel). VMs = hardware-level virtualization (separate OS per guest). The exam may call containers "application virtualization" or reference Docker/container-based deployments as a "hybrid" virtualization approach.
Cloud Models

NIST 5 Characteristics of Cloud Computing

  • On-Demand Self-Service — Resources provisioned by the consumer without involving the CSP. No phone calls, no wait.
  • Broad Network Access — Services available over the network from any standard device (phone, tablet, laptop, server).
  • Resource Pooling — Compute resources pooled and dynamically allocated across multiple tenants. Multi-tenant model.
  • Rapid Elasticity — Resources scale up and down on demand; capacity appears unlimited to the consumer.
  • Measured Service — Resource utilization monitored, controlled, and billed based on actual use. Pay-as-you-go.
CompTIA expects you to name all five NIST cloud characteristics. Mnemonic: O-B-R-R-M — On-demand, Broad access, Resource pooling, Rapid elasticity, Measured service.

Deployment Models

  • Public Cloud — Resources shared among multiple tenants. Owned and operated by a CSP (AWS, Azure, GCP). Cheapest, least control.
  • Private Cloud — Dedicated to one organization. On-premises or hosted. Most control, highest cost.
  • Hybrid Cloud — Mix of public and private. Sensitive workloads stay private; scalable workloads burst to public.
  • Community Cloud — Shared by organizations with common concerns (government, healthcare, finance). Managed by the members or a provider.

Service Models

  • IaaS — Infrastructure as a Service. You manage the OS and up. Provider manages hardware, networking, virtualization. Target audience: sysadmins. Examples: AWS EC2, Azure VMs.
  • PaaS — Platform as a Service. Provider manages the OS and runtime. You manage the application and data. Target audience: developers, DBAs. Examples: Azure App Service, Google App Engine.
  • SaaS — Software as a Service. Provider manages everything. You just use the app. Target audience: end users. Examples: Office 365, Salesforce, Netflix.

Shared Security Model

  • CSP Responsibility — Security OF the cloud. Physical data center security, hardware availability, hypervisor security. The CSP owns and protects the infrastructure.
  • Customer Responsibility — Security IN the cloud. Data security, access management, OS patching (in IaaS), application security. You own and protect your data and configs.
Shared security model: CSP secures the cloud infrastructure. YOU secure your data and workloads inside it. This line shifts depending on IaaS vs PaaS vs SaaS — the more managed the service, the more the CSP takes on.
Monitoring & Administration
  • Uptime / Nines — 99.9% = 8.76 hrs/yr downtime · 99.99% = 52.6 min/yr · 99.999% = 5.3 min/yr · 99.9999% = 31.5 sec/yr
  • Baselining — Document normal performance metrics (CPU, RAM, disk I/O, network). Compare current values against the baseline to detect problems or plan capacity.
  • Thresholds & Alerts — Set alert triggers for when metrics exceed acceptable ranges. Proactive rather than reactive monitoring.
  • IOPS — Input/Output Operations Per Second. Key storage performance metric. SSDs: hundreds of thousands. HDDs: hundreds.
  • Event Logs — Windows: Event Viewer (System, Application, Security logs). Configure: retention policy, shipping to a SIEM, alerting.
  • SNMP — Simple Network Management Protocol (port 161). Used to monitor network devices and servers. SNMP traps send alerts on events.
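The nines translate to downtime with simple arithmetic; a sketch that reproduces the figures above (helper name is illustrative):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of allowed downtime per year at a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(downtime_minutes_per_year(99.9), 1))    # 525.6 min (~8.76 hrs)
print(round(downtime_minutes_per_year(99.99), 1))   # 52.6 min
print(round(downtime_minutes_per_year(99.999), 2))  # 5.26 min
```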
Storage Management
  • Formatting — Creates a file system on a partition. Choose a file system type (NTFS, ext4, ZFS, etc.) appropriate for the OS and workload.
  • Partitioning — Dividing a disk into logical sections. Each partition can have a different file system and purpose (OS, data, swap).
  • Provisioning — Allocating storage capacity to servers or volumes. Thin provisioning allocates on demand; thick provisioning reserves space up front.
  • Disk Quotas — Limits on how much disk space a user or group can consume. Prevents any single user from filling shared storage. Configured per volume in Windows and Linux.
  • Compression — Reduces the size of stored data by encoding it more efficiently. NTFS supports per-file and per-folder compression. Trade-off: saves space but adds CPU overhead on read/write.
  • Deduplication — Identifies and eliminates duplicate copies of data. Only one copy of identical data blocks is stored; other locations point to the same block. Can be post-process (after write) or inline (checked before write). Significant space savings in virtualized environments and backup storage.
  • Page / Swap / Scratch — Virtual memory overflow space on disk. Windows = page file. Linux = swap partition or swap file. Scratch = temp workspace for applications (video editing, databases). Location and size should be configured intentionally — placing them on a fast dedicated disk improves performance.
  • Data Transfer: SCP — Secure Copy Protocol. Uses SSH (port 22) to securely copy files between systems. Syntax: scp source user@host:/destination. Encrypted — preferred over FTP for sensitive data.
  • Data Transfer: Robocopy — Robust File Copy. Windows command-line tool. Handles retries; copies directory structure, ACLs, timestamps. Syntax: robocopy source destination [options]. Ideal for large migrations and synchronization.
Deduplication vs compression: Dedup removes duplicate data blocks across the storage system. Compression shrinks individual files/blocks. Both save space but through different mechanisms. Dedup is most effective where many similar files exist (VMs, backups). Compression is most effective on compressible data (text, logs, databases).
CH 03

Server Maintenance

Obj 1.3 / 2.8
Out-of-Band (OOB) Management

OOB = managing a server using a channel completely separate from the production network. Works even when the server is powered off, OS is crashed, or network is down.

  • IPMI — Intelligent Platform Management Interface. Industry-standard spec for OOB management. Uses a dedicated BMC chip on the motherboard. Communicates over a dedicated management network port.
  • BMC — Baseboard Management Controller. The microcontroller chip that implements IPMI. Has its own firmware, NIC, and power (runs on standby power even when the server is off). The "brain" of OOB management.
  • iLO (HPE) — Integrated Lights-Out. HP/HPE's implementation of OOB management. Dedicated Ethernet port. Allows remote KVM, power control, hardware monitoring.
  • iDRAC (Dell) — Integrated Dell Remote Access Controller. Dell's equivalent to iLO. Same capabilities: remote console, power management, hardware monitoring.
  • Wake on LAN — The NIC listens for a "magic packet" (Layer 2 broadcast) even when powered off. Can turn on a server remotely. Security risk — disable if not needed.
  • IP KVM — Network-accessible KVM. Full keyboard/video/mouse control over IP. Works before the OS loads.
  • Crash Cart — Mobile monitor+keyboard cart for direct local physical access. Used when all remote access fails.
  • Serial Console — Rollover/console cable from a laptop to the server's serial/console port. Access the CLI during boot or after OS failure.
BMC/IPMI is your dedicated comms channel that bypasses normal radio nets. Your radio goes down? You still have PACE: Primary, Alternate, Contingency, Emergency. BMC is the Emergency channel — always up, separate network, separate power.
Drive Types & Speeds
Type | Speed | Avg Latency | Notes
HDD 5,400 RPM | ~100 MB/s | 5.5ms | Consumer. Archival/NAS storage.
HDD 7,200 RPM | ~150 MB/s | 4.2ms | Standard server/NAS drives (IronWolf Pro).
HDD 10,000 RPM | ~200 MB/s | 3ms | Enterprise performance HDDs. SAS interface.
HDD 15,000 RPM | ~250 MB/s | 2ms | Highest-performance spinning disk. SAS only.
SSHD (Hybrid) | Varies | Varies | SSD cache + HDD capacity. Single unit.
SSD SATA | ~550 MB/s | <0.1ms | Faster than any HDD. Limited by the SATA interface.
SSD NVMe | 3,000–7,000 MB/s | <0.05ms | PCIe-connected. 5–10× faster than SATA SSD. Current enterprise standard for high-performance storage.
Hot-Swappable Components
  • Hot-swap — Replace while running, zero downtime. Requires hardware and OS/firmware support.
  • Warm-swap — Requires brief power-down or pause but not full shutdown.
  • Cold-swap — Must fully power down before replacement. Most components.
  • Hot-swappable in enterprise servers: Drives (SAS/SATA in hot-swap bays), Power supplies, Fans, Some RAM (in specific enterprise designs).
  • Drive cages/backplanes — The backplane provides power and data to drives and enables hot-swap. A backplane failure = all drives in that bay lose access.
  • Always verify hot-swap capability in documentation before attempting. Not all SATA drives are hot-swappable even if the bay says so.
UEFI / BIOS
  • BIOS — Basic Input/Output System. Legacy firmware. 16-bit. Limited to 2.2TB disks. MBR-only boot.
  • UEFI — Unified Extensible Firmware Interface. Modern replacement for BIOS. 64-bit. Supports drives >2.2TB, GPT, Secure Boot, faster POST.
  • Secure Boot — UEFI feature. Verifies the OS bootloader is cryptographically signed by a trusted authority before loading. Prevents unsigned/malicious OSes from booting. Must be disabled for some Linux installs or custom boot scenarios.
  • POST — Power-On Self-Test. Runs at startup. Tests CPU, RAM, storage, video. Beep codes signal failures before the OS loads.
  • CMOS — Complementary Metal-Oxide Semiconductor. Chip that stores BIOS/UEFI settings (date/time, boot order). Powered by a small lithium battery (CR2032). If the battery dies, the server loses its settings.
Licensing Models
  • Per-Instance — One license per installation. Install on 5 servers = 5 licenses.
  • Per-Core — Licensed per physical CPU core. Common in Microsoft Windows Server and SQL Server. A 2-socket server with 16 cores each = 32 core licenses needed.
  • Per-Socket — One license per CPU socket. A dual-socket server = 2 licenses.
  • Per-Concurrent User — Max number of users logged in simultaneously. Cheaper if the user count is low and staggered.
  • Per-Server — Max connections to that specific server.
  • Node-Locked — License tied to a specific device (MAC address or hardware fingerprint). Cannot be moved.
  • Volume / Site License — Single key for multiple systems. Enterprise agreement. Simplifies management.
  • Subscription — Pay a recurring fee. License expires if not renewed. Common in SaaS and modern enterprise software.
  • True-Up — License count validation. Reconciling actual usage against purchased licenses — usually done annually in enterprise agreements.
  • Copyleft (GPL) — Open source. Derivative works must also be open source. "Viral" license.
  • Permissive (MIT/Apache) — Open source. No restrictions on derivatives. Can be incorporated into proprietary products.
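The per-core vs. per-socket arithmetic above, sketched (illustrative helpers using the 2-socket, 16-core example from the notes):

```python
def per_core_licenses(sockets: int, cores_per_socket: int) -> int:
    """Per-core model: license every physical core in the server."""
    return sockets * cores_per_socket

def per_socket_licenses(sockets: int) -> int:
    """Per-socket model: one license per CPU socket."""
    return sockets

print(per_core_licenses(2, 16))  # 32 core licenses for 2 sockets x 16 cores
print(per_socket_licenses(2))    # 2 socket licenses for the same server
```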
CH 04

Storage Technologies & Asset Management

Obj 1.2 / 2.7
RAID Levels — Master Reference
RAID | Type | Min Drives | Drive Overhead | Fault Tolerance | Performance | Best For
RAID 0 | Striping | 2 | 0 (100% usable) | None | Fastest R/W | Temp data, video editing
RAID 1 | Mirroring | 2 | 50% (half wasted) | 1 drive failure | Fast read, normal write | OS drives, critical data
RAID 5 | Stripe + Parity | 3 | 1 drive | 1 drive failure | Good read, moderate write | General purpose. Most common.
RAID 6 | Stripe + Double Parity | 4 | 2 drives | 2 drive failures | Good read, slower write | Large arrays, high rebuild risk
RAID 10 | Stripe of Mirrors (1+0) | 4 | 50% (half wasted) | 1 per mirror set | Fastest R/W + redundancy | Databases, high-performance
JBOD | Just a Bunch of Disks | 1 | 0 (100% usable) | None | Normal | Flexible pooling, no redundancy
Capacity math: RAID 5 = (N−1) × drive size. RAID 6 = (N−2) × drive size. RAID 10 = (N/2) × drive size. Example: RAID 6 with 4×250GB = (4−2)×250 = 500GB usable. Your TrueNAS RAIDZ1 (3×20TB) ≈ RAID 5: (3−1)×20TB = 40TB usable.
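The capacity formulas above can be collapsed into one sketch (illustrative helper covering the RAID levels in the table):

```python
def raid_usable(level: str, n_drives: int, drive_size: float) -> float:
    """Usable capacity per the formulas above; drive_size in any unit."""
    if level in ("0", "jbod"):
        return n_drives * drive_size          # striping/JBOD: all space usable
    if level == "1":
        return drive_size                     # two-drive mirror: half usable
    if level == "5":
        return (n_drives - 1) * drive_size    # one drive's worth of parity
    if level == "6":
        return (n_drives - 2) * drive_size    # two drives' worth of parity
    if level == "10":
        return (n_drives / 2) * drive_size    # half lost to mirroring
    raise ValueError(f"unknown RAID level: {level}")

print(raid_usable("6", 4, 250))     # 500 GB usable, as in the example above
print(raid_usable("5", 3, 20000))   # 40000 GB (the RAIDZ1 3x20TB case)
```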
Hardware vs. Software RAID

Hardware RAID

  • Dedicated RAID controller card (PCIe).
  • Controller has its own processor and cache (with battery backup).
  • OS sees drives as a single logical volume — doesn't know RAID is happening.
  • Faster, more reliable. More expensive.
  • Disk Duplexing — RAID 1 variant where each mirrored drive has its own separate controller. Protects against both drive failure AND controller failure.

Software RAID

  • OS manages RAID (Windows Storage Spaces, Linux mdadm, ZFS).
  • Uses CPU and RAM from the host — impacts server performance.
  • Cheaper. More flexible.
  • Less scalable for complex RAID configurations.
  • ZFS is technically software RAID but extremely capable — used in TrueNAS.
Capacity Planning Math
  • Base 2 vs Base 10 — Drive manufacturers use base 10 (1KB = 1,000 bytes). The OS uses base 2 (1KB = 1,024 bytes). A "1TB" drive appears as ~931GB in Windows/Linux. The discrepancy grows with bigger drives.
  • 1 KB (base 2) — 2¹⁰ = 1,024 bytes
  • 1 MB (base 2) — 2²⁰ = 1,048,576 bytes
  • 1 GB (base 2) — 2³⁰ = 1,073,741,824 bytes
  • 1 TB (base 2) — 2⁴⁰ = 1,099,511,627,776 bytes
  • Growth Planning — Account for: OS files, patches/service packs, application growth, logs, temp files. The Windows component store (WinSxS) grows with every update — plan for it.
  • Storage Tiering — Automatically moves hot (frequently accessed) data to faster storage (NVMe/SSD) and cold data to slower storage (HDD/tape). Optimizes cost vs. performance. Common in enterprise SANs and ZFS (L2ARC cache, ZIL).
On the exam: if they say a "500GB drive" and ask how much space Windows will show — the answer is less than 500GB (approximately 465GB) because Windows uses base 2 and manufacturers use base 10.
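The base-10-to-base-2 conversion behind that exam answer, sketched (helper name is mine):

```python
def os_reported_gib(advertised_gb: float) -> float:
    """Convert a manufacturer's base-10 GB rating to the base-2 'GB' the OS shows."""
    return advertised_gb * 1_000_000_000 / 2**30

print(round(os_reported_gib(500), 1))    # ~465.7 shown for a "500GB" drive
print(round(os_reported_gib(1000), 1))   # ~931.3 shown for a "1TB" drive
```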
Shared Storage Types
  • DAS — Direct Attached Storage. Connected to one server (USB, SATA, SAS). Not shared. Simple, fast, cheap. Cannot be accessed by other servers.
  • NAS — Network Attached Storage. File-level storage over the network. Multiple servers access it simultaneously. Uses NFS (Linux/Unix) or CIFS/SMB (Windows). Your TrueNAS = NAS.
  • SAN — Storage Area Network. Block-level access. The server sees the storage as a local disk. Uses Fibre Channel or iSCSI. Fastest and most expensive shared storage.
  • iSCSI — SCSI commands encapsulated in IP packets. SAN over standard Ethernet. Cost-effective alternative to Fibre Channel. Uses existing network infrastructure.
  • Fibre Channel (FC) — Dedicated high-speed storage network. 8/16/32 Gbps. Requires separate FC switches (not Ethernet). Lowest latency, highest cost.
  • FCoE — Fibre Channel over Ethernet. Encapsulates FC frames in Ethernet. Does NOT use IP (unlike iSCSI). Requires a CNA. Convergence of FC and Ethernet infrastructure.
  • NFS — Network File System. Linux/Unix file sharing protocol. Mounts remote directories as if local. Layer 7 protocol.
  • CIFS/SMB — Common Internet File System / Server Message Block. Windows file sharing. SMB3 is current. Used for Windows shared folders and NAS access from Windows.
NAS = file-level (you see files/folders). SAN = block-level (you see a raw disk and format it yourself). iSCSI encapsulates SCSI in IP. FCoE encapsulates FC in Ethernet (no IP). This distinction appears on almost every exam. DAS = one server only, no sharing.
Asset Management
  • Inventory Fields — Make, model, serial number, asset tag, BIOS asset tag. Track all of these for every piece of hardware. The BIOS asset tag survives OS reinstalls.
  • Lifecycle Stages — Procurement → Usage → EOL (End of Life) → Disposal/Recycling. Server hardware useful life: typically 3–7 years. EOL = no more vendor support or patches. Plan replacement before EOL.
  • Procurement — Define specs based on role → RFP process → pricing/contract → finalization/shipping. Security requirements (MFA, encryption, patching) should be built into the RFP.
  • Capital Lease — You own the asset at the end of the lease term. Appears on the balance sheet as an asset. Tax depreciation benefits.
  • Operating Lease — You return the asset. Treated as an ongoing expense (OpEx). Predictable monthly payments. Protection against hardware obsolescence.
  • Warranty — Covers manufacturer defects. Document warranty details during procurement. Know how to escalate to the vendor quickly.
  • Service Plan Tiers — Common tiers: (1) on-site, 4-hour response same business day; (2) on-site, same business day; (3) on-site, next business day. Higher tier = higher cost = lower RTO.
  • Labeling — Port labels, system labels, circuit labels, patch panel labels. Everything labeled = faster troubleshooting.
  • Change Management — Formal process: Request → Review → Approve → Implement → Document. No unauthorized changes. Every change is documented, including a rollback plan.
Server useful life = 3–7 years. Windows Server OS support lifecycle = ~12 years after initial release. A server may reach EOL before the OS does — evaluate hardware capability, driver support, and vendor support together.
Business Impact & Service Metrics

These metrics are directly tested on Server+ and appear in the Ch4 slides under Company Policies and Procedures. Know all six.

  • BIA (Business Impact Analysis) — Identifies which systems and processes are most critical to operations and quantifies the impact of their disruption. Performed FIRST — everything else (DR sites, backup frequency, SLAs) flows from BIA findings.
  • MTBF (Mean Time Between Failures) — Average time a component operates before failing. Higher MTBF = more reliable. Example: a drive with an MTBF of 100,000 hours is more reliable than one rated 50,000 hours. Used in hardware procurement and lifecycle planning.
  • MTTR (Mean Time to Repair) — Average time to restore a failed component. Lower MTTR = faster recovery. Drives decisions about spare parts inventory, on-site support contracts, and technician training.
  • RTO (Recovery Time Objective) — Maximum acceptable downtime after a disaster. "How long can we be offline?" Determines DR site type (hot/warm/cold) and the level of automation required for failover.
  • RPO (Recovery Point Objective) — Maximum acceptable data loss, measured in time. "How much data can we lose?" An RPO of 1 hour = must back up at least every hour. Determines backup frequency and whether synchronous replication is needed.
  • SLA (Service Level Agreement) — Defines expected uptime and performance between provider and customer. Includes financial remedies for unmet metrics. The SLA drives how aggressively BIA metrics must be met.
  • Uptime Requirements — Derived from BIA and SLA. 99.9% allows ~8.7 hrs/yr downtime. 99.99% allows ~52 min/yr. The more 9s required, the more expensive and complex the infrastructure.
BIA → determines RTO and RPO → drives DR site choice and backup frequency → all formalized in SLA. MTBF and MTTR inform hardware choices and support contracts. Expect scenario questions where you must identify which metric is being described or which comes first in planning.
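The "nines" figures above fall straight out of one formula; a quick sketch (the helper name is made up):

```python
# Allowed annual downtime for a given availability percentage.
def allowed_downtime_hours(availability_pct: float, hours_per_year: float = 365 * 24) -> float:
    return hours_per_year * (1 - availability_pct / 100)

print(f"{allowed_downtime_hours(99.9):.2f} hours/year")          # three nines: ~8.76 h
print(f"{allowed_downtime_hours(99.99) * 60:.1f} minutes/year")  # four nines: ~52.6 min
```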
Secure Storage of Sensitive Documentation

Sensitive documentation must be protected with the same rigor as production systems. The following types require restricted access, encryption at rest, and access logging:

  • PII (Personally Identifiable Information) — Any data that can identify an individual: name, SSN, date of birth, address, email, phone. Regulated by GDPR, CCPA, and sector-specific laws. Breach notification requirements apply.
  • HR Records — Employee performance reviews, disciplinary records, compensation data, medical information. Restricted to HR and direct managers. Legal retention requirements vary by jurisdiction.
  • Financial Documents — Non-public financial data, budget information, contracts, pricing. Regulated by SOX for public companies. Internal access control lists must be maintained.
  • Trade Secrets & Proprietary Methods — Formulas, processes, designs, source code, and business strategies that give competitive advantage. Loss can cause irreversible financial damage. An NDA is required before disclosure.
  • Plans and Designs — Network diagrams, architecture plans, DR documentation. In the wrong hands these reveal attack surfaces and critical system weaknesses. Store separately from general documentation.
Server+ specifically tests that you know sensitive documentation requires the same security controls as systems — encryption at rest, access control, audit logging, and secure destruction when no longer needed. The slides explicitly list these five categories.
CH 05

Fault Tolerance Requirements

Obj 2.4
Clustering
  • Active-Active — All nodes actively handle workload simultaneously. Load is distributed. Maximum performance and utilization. If one node fails, the remaining nodes absorb its load (performance may degrade).
  • Active-Passive — One node active, one (or more) on standby. The passive node sits idle until the primary fails. Simpler than active-active but wastes standby capacity.
  • Heartbeat — Dedicated network connection between cluster nodes. Nodes continuously send "I'm alive" messages. If the heartbeat stops, the surviving node assumes its partner is dead and takes over — this is called a failover.
  • Split-Brain — The heartbeat link fails but both nodes are actually still running. Both nodes may try to take ownership of resources simultaneously, causing data corruption. Requires a quorum device or tiebreaker to resolve.
  • Quorum — Mechanism to prevent split-brain. Usually a third resource (shared disk or witness server) that a node must "own" before it can go active. Majority-vote system.
Active-passive = QRF staging. Active-active = two teams simultaneously running missions covering the same AO. Heartbeat = the radio check between them. If you lose comms (heartbeat fails), you need rules for who takes charge — that's quorum.
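The majority-vote idea behind quorum can be sketched in a few lines (a toy model, not a real cluster API):

```python
# Minimal sketch of majority-vote quorum.
def has_quorum(votes_visible: int, total_votes: int) -> bool:
    """A partition may go active only if it sees a strict majority of votes."""
    return votes_visible > total_votes // 2

# Two nodes plus a witness server = 3 votes. If the heartbeat link fails,
# the node that can still reach the witness holds 2 of 3 votes and goes active.
print(has_quorum(2, 3))  # True  -> node + witness: takes over
print(has_quorum(1, 3))  # False -> isolated node: stays passive, no split-brain
```

Note that with only two votes (no witness), a 1-1 split gives neither side a majority, which is why a tiebreaker is required.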
Load Balancing
  • Round Robin — Requests distributed sequentially (Server 1, Server 2, Server 3, Server 1...). Simple and common. Works best when servers have equal capacity and requests impose similar load.
  • Least Connections — A new request goes to the server with the fewest active connections. Better than round robin for variable-length sessions.
  • Weighted — More powerful servers get proportionally more traffic. Used in mixed-capacity server pools.
  • Most Recently Used (MRU) — A new request goes to the most recently used server (keeps the session warm). Less common.
  • Hardware LB — Dedicated appliance (F5, Citrix NetScaler). Highest performance, highest cost.
  • Software LB — HAProxy, Nginx, Windows NLB. More flexible, lower cost, uses server CPU.
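The first two algorithms are easy to picture in code (an illustrative sketch; server names are made up):

```python
# Two load-balancing algorithms in miniature.
from itertools import cycle

servers = ["web1", "web2", "web3"]

# Round robin: hand out servers in a fixed rotation.
rr = cycle(servers)
print([next(rr) for _ in range(5)])  # ['web1', 'web2', 'web3', 'web1', 'web2']

# Least connections: pick the server with the fewest active sessions.
def least_connections(conns):
    return min(conns, key=conns.get)

print(least_connections({"web1": 12, "web2": 3, "web3": 7}))  # 'web2'
```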
NIC Teaming / Link Aggregation
  • NIC Teaming — Combining multiple physical NICs into one logical interface. Provides redundancy (failover) and/or increased throughput.
  • LACP (802.1AX) — Link Aggregation Control Protocol. Dynamic negotiation of link aggregation. Both ends must support LACP. The old standard, 802.3ad, was renamed 802.1AX in 2008.
  • Active-Active Teaming — All NICs carry traffic simultaneously. Total bandwidth = sum of all NICs. If one fails, traffic continues on the others.
  • Active-Passive Teaming — One NIC active, the others on standby. No added bandwidth. Pure redundancy.
  • STP Interaction — Spanning Tree Protocol treats a LACP bond as a single logical link, not multiple links. No STP loop risk from teaming.
Redundant Infrastructure
  • Redundant NICs — Two NICs to two different switches. If NIC or switch fails, traffic continues on the other path.
  • Redundant Power Supplies — Two PSUs on two separate PDUs, on two separate circuits, ideally fed from two separate UPS units. Each PSU can run the server independently.
  • Redundant Storage Paths (Multipathing) — Multiple HBA or NIC paths to SAN storage. If one path fails, I/O continues on the other. Software like MPIO (Windows) or DM-Multipath (Linux) manages this.
  • Geo-Redundancy — Infrastructure spread across physically separate locations. Protects against site-level disasters.
CH 06

Securing the Server

Obj 3.2 / 3.4 / 3.5
Physical Security Controls
  • Access Control Vestibule (Mantrap) — Two-door airlock. The first door must close and entry be verified before the second door opens. Prevents tailgating. Required at most data center entrances.
  • Bollards — Short, sturdy posts preventing vehicle-ramming attacks. Common in front of data centers and government facilities.
  • Fencing — Perimeter deterrent. Height determines effectiveness: 3–4 ft = minimal; 6–7 ft = deterrence (hard to climb); 8+ ft with barbed wire/concertina = strong protection. Taller fencing with an outward-angled top maximizes delay time.
  • Security Guards — Active monitoring and response. Can adapt to situations cameras cannot handle.
  • Cameras / CCTV — Deterrent and detective control. Footage retention policy = how long before footage is overwritten. Critical for forensic investigations — set retention based on regulatory and business requirements.
  • Biometric Locks — Fingerprint, retina, facial recognition. "Something you are" — strongest factor. High security, higher cost.
  • RFID / Smart Card — "Something you have." Badge-based access. Immediately revocable. Often combined with a PIN for two-factor. Logs every entry and exit for an audit trail.
  • Safes — Secure physical storage for backup media, encryption keys, and sensitive documents. Fire-rated safes protect against both theft and fire damage.
  • Faraday Cage — Metal enclosure blocking RF/electromagnetic signals. Prevents wireless attacks and remote device activation. Used in SCIF environments.
  • CPTED — Crime Prevention Through Environmental Design. Lighting, layout, sight lines, and landscaping choices that naturally deter criminal activity without additional security personnel.
Environmental Controls

Fire Suppression

  • Wet Pipe — Water always in pipes. Fastest response. High electronics damage risk. Not preferred for server rooms.
  • Dry Pipe — Air-pressurized. Water released only when triggered. Safer than wet pipe for electronics. Used in cold environments.
  • Pre-action — Requires TWO triggers before water flows (smoke + heat/fusible link). Best for server rooms — prevents accidental activation.
  • Deluge — All sprinklers open simultaneously. High-hazard areas only. Maximum damage risk.
  • Clean Agents (Halon Replacements) — EPA-approved: Argon, NAF-S-III, FM-200. Suppress fire without water or residue. No electronics damage. Safe for occupied spaces.
Pre-action = safest for server rooms. Needs two independent triggers so accidental water release is nearly impossible. The slides explicitly call this out.

HVAC Standards (From Slides)

  • Temperature target: 68–75°F (20–24°C).
  • Humidity target: 45–55% relative humidity. Below 40% = static (ESD) risk. Above 60% = condensation and corrosion risk.
  • Monitoring alerts: SNMP (device monitoring), HTTP dashboards (web status), SMS (immediate out-of-range notification).
  • CRAC sizing formula: IT load (watts) × 1.3 = BTU rating required.
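The slides' CRAC sizing rule is one multiplication; a one-line sketch (the function name is illustrative):

```python
# CRAC sizing per the slides' rule of thumb: IT load (watts) x 1.3 = required BTU rating.
def crac_btu_required(it_load_watts: float) -> float:
    return it_load_watts * 1.3

print(crac_btu_required(10_000))  # 10 kW of IT load -> 13000.0
```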

Sensors

  • PIR (Passive Infrared) — Detects changes in heat/infrared. Motion detection.
  • Electromechanical — Break in electrical circuit (door/window contacts). Simplest sensor type.
  • Photoelectric — Detects light changes (laser beam break, smoke).
  • Acoustical — Microphones detect sound (glass break, forced entry).
  • Wave Motion Detectors — Emit microwave or ultrasonic waves; detect changes in reflection caused by movement.
  • Capacitance Detectors — Emit and monitor an electrostatic field around a protected object. A disturbance in the field triggers the alarm.
Social Engineering Attacks
  • Phishing — Mass email attack. Casts a wide net. Generic "verify your account" emails.
  • Spear Phishing — Targeted phishing. Uses personal details to appear legitimate. Much higher success rate.
  • Whaling — Spear phishing targeting executives (CFO, CEO). High-value targets. Often requests wire transfers or credential disclosure.
  • Vishing — Voice phishing. A phone call impersonating IT support, the IRS, a bank, etc.
  • Shoulder Surfing — Observing someone enter credentials. Physical-proximity attack. Mitigate with privacy screens.
  • Tailgating — Following an authorized person through a secure door. Mitigate with a mantrap and security awareness training.
  • Dumpster Diving — Searching trash for sensitive information. Mitigate with shredding and a clean desk policy.
  • Impersonation — Pretending to be IT support, a vendor, or an authority figure to gain access or information.
Data Security Risks & Malware Types
  • Hardware Failure — Causes downtime and denial of service. Mitigated by RAID, clustering, and redundant components.
  • Malware — Can damage, steal, or encrypt data. Mitigated by AV/anti-malware, HIDS/HIPS, and user training.
  • Data Corruption — Caused by improper shutdowns, hardware failure, or malware. Linux recovery: fsck /dev/sda1. Windows: chkdsk.
  • Insider Threats — High risk due to existing authorized access. Can be malicious or accidental. Mitigated by least privilege, auditing, and separation of duties.
  • Theft / DLP — Physical or data theft leads to breaches. Data Loss Prevention (DLP) tools monitor and block unauthorized data exfiltration.

Malware Types Tested on Server+

  • Armored Virus — Uses obfuscation and anti-disassembly techniques to prevent analysis and detection. Designed to confuse reverse-engineering tools.
  • Companion Virus — Creates a malicious companion executable with the same name as a legitimate program. When the user runs the real program's name, the virus executes first.
  • Macro Virus — Embedded in document files (Word, Excel). Executes when the document is opened and macros are enabled. Spreads through shared documents and email attachments.
Server Hardening

Risk Mitigation Controls

  • SIEM — Security Information and Event Management. Centralizes log analysis and alerting across the environment. Correlates events from multiple sources to detect attacks.
  • Two-Person Integrity — Also called dual control or the two-man rule. Requires two authorized individuals to be present and agree before a critical or sensitive action can be completed. Prevents single-person fraud or sabotage. Used for sensitive operations like key management and major configuration changes.
  • Regulatory Constraints — Legal requirements that mandate specific security controls (HIPAA, PCI DSS, SOX). Non-compliance = fines, loss of license, or legal liability.

OS Hardening

  • Disable unused services — every running service is a potential attack surface.
  • Close unneeded ports — firewall rules deny by default, permit by exception.
  • Install only required software — unnecessary software = unnecessary vulnerabilities.
  • Apply all OS and driver updates promptly.
  • Configure host firewall (iptables on Linux, Windows Firewall).
  • Implement network access control (NAC) — verify device posture before allowing network access.

Hardware Hardening

  • Disable unused physical ports (USB, serial, optical) — prevents rogue device insertion.
  • Set BIOS/UEFI password — prevents unauthorized configuration changes.
  • Set boot order: internal drive first, disable PXE boot if not needed — prevents booting from unauthorized media.
  • Enable Secure Boot — prevents unsigned OS from loading.

Host Security

  • HIDS — Host Intrusion Detection System. Monitors activity and alerts. Does NOT block. Good for forensics.
  • HIPS — Host Intrusion Prevention System. Monitors AND actively blocks. May cause false-positive disruptions.
  • Signature-Based Detection — Matches against known threat patterns. Fast, misses zero-days and novel attacks.
  • Anomaly-Based Detection — Compares to normal baseline. Catches unknown threats. Higher false-positive rate.
  • DLP — Data Loss Prevention. Monitors and blocks unauthorized data exfiltration (USB copy, email attachment, etc.).

Patching Process

  • Test — Apply patches to non-production systems first. Verify nothing breaks.
  • Deploy — Roll out to production using patch management tool (WSUS, SCCM, Ansible).
  • Change Management — Document every patch applied, when, and by whom.
  • Rollback plan — Always know how to reverse a patch if it causes issues.
CH 07

Securing Server Data & Network Access

Obj 3.1 / 3.3 / 3.6
Encryption
  • Symmetric Encryption — The same key encrypts and decrypts. Fast. Key distribution is the challenge. Examples: AES, 3DES.
  • Asymmetric Encryption — Public/private key pair. The public key encrypts, the private key decrypts. Slower. Solves the key distribution problem. Examples: RSA, ECC.
  • PKI — Public Key Infrastructure. Framework for managing digital certificates and asymmetric keys. A CA (Certificate Authority) signs certificates to prove identity.
  • Data at Rest: EFS — Encrypting File System. File-level Windows encryption. Enterprise editions only. User-transparent once configured. Keys are tied to the user's certificate.
  • Data at Rest: BitLocker — Full-disk encryption for Windows. Requires a TPM chip. Encrypts the entire volume. Protects against physical theft.
  • Data at Rest: Tape — LTO tape drives support hardware encryption. Requires consistent key management, and all drives must support encryption. LTO-4 and later support 256-bit AES.
  • TPM — Trusted Platform Module. Hardware chip that securely stores encryption keys. Used by BitLocker and Secure Boot. Keys never leave the TPM in plaintext.
  • Data in Transit: TLS — Transport Layer Security. Replaced SSL (SSL is deprecated and insecure). Encrypts data moving between client and server. HTTPS = HTTP over TLS.
  • SSL — Secure Sockets Layer. Predecessor to TLS. Known vulnerable (POODLE, BEAST attacks). Should NEVER be used. On exams: say TLS, not SSL, for modern secure transport.
EFS = file-level encryption (Windows enterprise). BitLocker = full disk encryption (uses TPM). For the exam: "file-level encryption on Windows enterprise" = EFS. "Full disk encryption on Windows" = BitLocker with TPM.
VPN & IPsec
  • VPN — Virtual Private Network. Encrypted tunnel across an untrusted network (the internet). Makes remote access behave as if local.
  • PPTP — Point-to-Point Tunneling Protocol. Legacy VPN. Considered insecure. Uses port 1723.
  • L2TP — Layer 2 Tunneling Protocol. Often paired with IPsec for security (L2TP/IPsec). Port 1701.
  • IPsec — Internet Protocol Security. Suite of protocols for securing IP communications. Can authenticate, ensure integrity, and encrypt.
  • AH (Authentication Header) — Provides authentication and integrity ONLY. No encryption. Data is protected from tampering but can be read. IP protocol 51.
  • ESP (Encapsulating Security Payload) — Provides authentication, integrity, AND encryption. The complete package. IP protocol 50. If a question mentions "encrypted" IPsec, the answer is ESP.
  • SA (Security Association) — Agreement between two peers defining which algorithms, keys, and settings to use. One SA per direction (two SAs per connection).
  • SPI (Security Parameter Index) — Identifier used to look up the matching active SA in each device's SA database. Each device maintains its own database of SAs.
AH = auth + integrity (NO encrypt). ESP = auth + integrity + encrypt. "Provides authentication but not encryption" = AH. "Provides all three" = ESP. This exact question appears regularly.
Identity & Access Management
  • Active Directory (AD) — Microsoft's directory service. Centralized authentication and authorization. Objects: users, groups, computers, OUs (Organizational Units). Domain controllers host AD.
  • OU (Organizational Unit) — Container within AD used to organize objects. Group Policy Objects (GPOs) can be applied to OUs. Mirrors the organizational structure.
  • Group Policy (GPO) — Centralized configuration management for Windows. Applied at the Site, Domain, or OU level. Controls security settings, software deployment, and desktop configuration.
  • RBAC — Role-Based Access Control. Permissions are assigned to roles (job functions), not individuals; users are assigned roles. The most common enterprise model. Easier to manage than per-user permissions.
  • Rule-Based Access Control — Access controlled by static rules (firewall ACLs, time-of-day restrictions). Not the same as RBAC despite the similar name.
  • DAC (Discretionary) — The resource owner controls access permissions. Standard NTFS permissions — the file owner decides who gets access.
  • MAC (Mandatory) — The system enforces access based on security labels. Users cannot override it. Used in government/military environments (SELinux, Orange Book). You cannot share a Top Secret file with a user holding a Secret clearance even if you own it.
  • Segregation of Duties — No single person controls a complete critical process. Requires at least two people to complete sensitive transactions. Reduces fraud and insider threats.
  • Principle of Least Privilege — Users and processes receive only the minimum permissions needed to perform their function. Nothing more.
  • MFA Factors — Something you KNOW (password, PIN, security question). Something you HAVE (smartcard, token, phone). Something you ARE (fingerprint, retina, voice, facial recognition). True MFA requires two different factor TYPES.
  • SSO — Single Sign-On. Authenticate once, access multiple systems. Kerberos and SAML are common SSO protocols. Active Directory implements SSO for Windows domain environments.
Password + PIN = NOT MFA (both "something you know"). Smartcard + fingerprint = MFA ("have" + "are"). Password + security question = NOT MFA (both "know"). Know the three factor categories cold.
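The "two different factor TYPES" rule is mechanical enough to code; a sketch (the factor names and mapping are illustrative examples):

```python
# Classify factors into the three categories, then check the MFA rule.
FACTOR_TYPE = {
    "password": "know", "pin": "know", "security_question": "know",
    "smartcard": "have", "token": "have", "phone": "have",
    "fingerprint": "are", "retina": "are", "voice": "are",
}

def is_true_mfa(factors):
    """True MFA requires at least two DIFFERENT factor types."""
    return len({FACTOR_TYPE[f] for f in factors}) >= 2

print(is_true_mfa(["password", "pin"]))          # False: both "something you know"
print(is_true_mfa(["smartcard", "fingerprint"])) # True: "have" + "are"
```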
Decommissioning & Media Destruction
  • Soft Wipe — File deletion or quick format. Data can be recovered with forensic tools. NOT secure.
  • Hard Wipe (Sanitization) — Overwrite all sectors with zeros, ones, or random data. The commonly cited DoD 5220.22-M method uses 3 passes (7 in the extended ECE variant). Secure for HDDs. NOT effective on SSDs (due to wear leveling).
  • Degaussing — A strong magnetic field destroys the magnetic data patterns. Renders the HDD unusable — not just wiped, physically damaged. Most secure for magnetic media. INEFFECTIVE on SSDs, flash, or optical media.
  • Shredding — Physical destruction into small fragments. Absolute certainty. Required for classified media in many government environments.
  • Crushing/Drilling — Physical destruction by piercing or crushing the platters. Less thorough than shredding but faster and cheaper.
  • Incineration — Burning. Used for classified/sensitive media. Complete destruction.
  • SSD Sanitization — Degaussing and overwriting don't work reliably on SSDs due to wear leveling and over-provisioned cells. Best options: the manufacturer's secure erase command (ATA Secure Erase), encryption plus key destruction, or physical destruction.
Degaussing = most secure for magnetic HDDs. Degaussing is INEFFECTIVE for SSDs, USB drives, or optical discs. For SSDs: physical destruction or cryptographic erasure (encrypt first, then destroy the key).
CH 08

Networking & Scripting

Obj 2.2 / 2.6
Key Port Numbers — Complete Reference
  • 20/21 — FTP (File Transfer Protocol)
  • 22 — SSH (Secure Shell / SFTP)
  • 23 — Telnet (insecure, legacy)
  • 25 — SMTP (Simple Mail Transfer Protocol)
  • 53 — DNS (Domain Name System)
  • 67/68 — DHCP (Dynamic Host Configuration Protocol)
  • 80 — HTTP (Hypertext Transfer Protocol)
  • 110 — POP3 (Post Office Protocol v3)
  • 123 — NTP (Network Time Protocol)
  • 143 — IMAP (Internet Message Access Protocol)
  • 161/162 — SNMP (161 = poll, 162 = trap)
  • 389/3268 — LDAP (Lightweight Directory Access Protocol; 3268 = Global Catalog)
  • 443 — HTTPS (HTTP over TLS)
  • 636/3269 — LDAPS (LDAP over SSL/TLS; 3269 = Global Catalog over TLS)
  • 989/990 — FTPS (FTP over TLS/SSL)
  • 1433 — MSSQL (Microsoft SQL Server)
  • 3306 — MySQL / MariaDB
  • 3389 — RDP (Remote Desktop Protocol)
  • 5985/5986 — WinRM (PowerShell Remoting; 5985 = HTTP, 5986 = HTTPS)
Network Configuration — Servers
  • Static IP — Manually assigned. Required for servers — you never want a server to change IP. DNS, DHCP, DCs, and other infrastructure servers must be static.
  • DHCP Reservation — DHCP assigns the same IP every time, based on MAC address. Combines static predictability with DHCP management. Good for printers and some servers.
  • VLAN — Virtual LAN. Logical segmentation of a switch at Layer 2. Traffic between VLANs requires routing (Layer 3). Reduces broadcast domains, improves security.
  • VLAN Tagging (802.1Q) — Ethernet frames carry a VLAN ID tag. Switch ports are either Access (untagged, one VLAN) or Trunk (tagged, multiple VLANs).
  • Switch Spoofing — An attacker tricks the switch into treating their port as a trunk, gaining access to all VLANs. Fix: disable DTP (Dynamic Trunking Protocol) on access ports.
  • Double Tagging — VLAN hopping attack. The attacker sends a packet with two VLAN tags: the outer tag matches the native VLAN and is stripped; the inner tag routes to the target VLAN. Fix: don't use VLAN 1 as the native VLAN; assign an unused VLAN as native.
  • Default Gateway — The router IP that handles traffic destined outside the local subnet. Must be configured on every server. Wrong gateway = no external connectivity.
DNS & DHCP
  • DHCP DORA — Discover (client broadcasts) → Offer (server responds with an IP offer) → Request (client accepts the offer) → Acknowledge (server confirms). The full lease process.
  • APIPA — Automatic Private IP Addressing. 169.254.x.x /16. Self-assigned when DHCP fails. The device can only communicate with other APIPA addresses on the same segment. A classic symptom of DHCP failure.
  • IPv6 Link-Local — FE80::/10 prefix. The IPv6 equivalent of APIPA. Always auto-configured on every IPv6 interface. Scope limited to the local link.
  • FQDN — Fully Qualified Domain Name. The complete address: hostname + domain + TLD. Example: mail.company.com. DNS resolves an FQDN to an IP.
  • Hosts File — Local override for DNS. C:\Windows\System32\drivers\etc\hosts (Windows); /etc/hosts (Linux). Checked before the DNS query. Can be used for testing or overrides.
  • Rogue DHCP — An unauthorized DHCP server on the network. Assigns wrong IP, gateway, or DNS settings — can redirect traffic for a man-in-the-middle attack. Mitigate with DHCP snooping on managed switches.
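Spotting an APIPA address is a common diagnostic step; a quick sketch with the standard library (the function name is made up):

```python
# Detect an APIPA self-assigned address -- usually means DHCP never answered.
import ipaddress

APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def looks_like_dhcp_failure(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return addr.version == 4 and addr in APIPA_NET

print(looks_like_dhcp_failure("169.254.10.7"))  # True  -> check the DHCP server
print(looks_like_dhcp_failure("192.168.1.50"))  # False -> lease looks normal
```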
Firewalls
  • Packet Filter — Stateless. Evaluates each packet independently against ACL rules (source/destination IP, port, protocol). Fast, simple, limited context.
  • Stateful Inspection — Tracks a connection state table. Understands established vs. new connections. More intelligent than a packet filter; can block response packets that don't match an established session.
  • Application Layer (Proxy) — Inspects payload content at Layer 7. Can block specific file types, URLs, and commands. Slowest but most thorough.
  • NGFW (Next-Gen Firewall) — Combines stateful inspection, application awareness, IPS, and user identity. The modern standard (Palo Alto, Fortinet, Cisco FTD).
  • Dual-Homed Firewall — A device with two NICs connecting two networks (internal + external). First line of defense between LAN and internet. No routing between interfaces without firewall inspection.
  • DMZ — Demilitarized Zone. Network segment between the internal network and the internet. Public-facing servers (web, email, DNS) go here, isolated from the internal LAN.
  • iptables — The Linux host-based firewall. Replaced ipchains. Rules are evaluated in order; the first match wins. nftables is the modern successor.
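The "first match wins, deny by default" logic of iptables-style rule evaluation can be modeled in a few lines (a toy model, not real firewall code):

```python
# First-match-wins rule evaluation with a default-deny policy.
RULES = [
    {"port": 22,   "action": "ACCEPT"},  # allow SSH
    {"port": 3389, "action": "DROP"},    # explicitly block RDP
]
DEFAULT = "DROP"  # deny by default, permit by exception

def evaluate(port: int) -> str:
    for rule in RULES:
        if rule["port"] == port:
            return rule["action"]  # first matching rule decides; later rules never run
    return DEFAULT                 # no match -> default policy applies

print(evaluate(22))    # ACCEPT
print(evaluate(80))    # DROP (no rule matched, so the default wins)
```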
Scripting
Language — Extension — Platform — Comment Syntax — Primary Use
  • Bash — .sh — Linux/macOS/Unix — # (the #! shebang is separate) — system admin, automation, pipelines
  • PowerShell — .ps1 — Windows (also Linux) — # — Windows admin, cmdlets, AD management
  • Batch — .bat / .cmd — Windows (legacy) — REM — legacy Windows automation
  • VBScript — .vbs — Windows — ' (apostrophe) — legacy Windows scripting via WSH
  • Python — .py — cross-platform — # — cross-platform admin and automation

Key Scripting Concepts

  • Shebang (#!) — The first line of a Unix script. Tells the OS which interpreter to use. #!/bin/bash = use Bash. NOT a comment even though it starts with #.
  • Variables — Store values. Bash: VAR=10 (no spaces around =). PowerShell: $var = 10. Reference with $VAR (Bash) or $var (PowerShell).
  • Environment Variables — System: available to all users. User: available only to a specific user when logged in. Program/process: available only within that process.
  • Loops — for, while, do-while. Repeat operations. Essential for bulk tasks (process 1,000 users, rename 500 files).
  • Conditionals — if/else/elif. Execute code based on conditions. Bash tests use -eq, -ne, -gt, -lt for integers and ==, != for strings; PowerShell uses -eq, -ne, -gt, -lt.
  • Integers — Whole numbers. COUNT=5
  • Strings — Character sequences in quotes. "Hello" or 'World'
  • Arrays — Collections of values. Bash: NAMES=("Alice" "Bob" "Connor"). Reference: ${NAMES[0]}
PowerShell = .ps1 (uses cmdlets like Get-Process, Set-Item). Bash = .sh (uses commands like grep, awk, sed). Batch = .bat (Windows legacy, uses commands like ECHO, COPY). The shebang line (#!) is NOT a comment — it specifies the interpreter.
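The same building blocks look like this in Python, the cross-platform option from the table (a minimal illustration):

```python
# Variables, a loop, a conditional, and an array (list), in Python form.
count = 5                           # integer variable
names = ["Alice", "Bob", "Connor"]  # array -- names[0] is like ${NAMES[0]} in Bash

for name in names:                  # loop over the collection
    if len(name) > 3:               # conditional
        print(f"{name} has more than 3 letters")

print(names[0])  # Alice
```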
CH 09

Disaster Recovery

Obj 3.7 / 3.8
Backup Types — Archive Bit Logic
  • Full — Backs up everything (ignores the archive bit). Archive bit after: cleared. Backup speed: slowest. Restore speed: fastest (1 set). Storage needed: most.
  • Incremental — Changed since the last full OR incremental. Archive bit: cleared. Backup: fastest. Restore: slowest (full + all incrementals). Storage: least.
  • Differential — Changed since the last FULL only. Archive bit: NOT cleared. Backup: medium (grows daily). Restore: medium (full + latest differential). Storage: medium.
  • Synthetic Full — Constructed from a prior full + incrementals. Archive bit: cleared. Backup: medium. Restore: fast (1 set). Storage: medium.
  • Snapshot — Point-in-time copy (copy-on-write or split mirror). Archive bit: N/A. Very fast to create and restore. Storage varies.
Incremental clears the archive bit after each run (so next incremental only gets new changes). Differential does NOT clear the bit (so it keeps accumulating all changes since last full). Differential grows larger each day but restores faster. Incremental stays small but needs multiple sets to restore.
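The archive-bit difference is easiest to see in a toy model (file names and structure are made-up examples):

```python
# Toy model of archive-bit behavior. True = archive bit set (file changed since last backup).
files = {"a.txt": True, "b.txt": True, "c.txt": False}

def incremental(files):
    """Back up files with the bit set, then CLEAR the bit."""
    backed_up = [f for f, bit in files.items() if bit]
    for f in backed_up:
        files[f] = False
    return backed_up

def differential(files):
    """Back up files with the bit set, but LEAVE the bit alone."""
    return [f for f, bit in files.items() if bit]

print(differential(dict(files)))  # ['a.txt', 'b.txt'] -- and the same again tomorrow
print(incremental(files))         # ['a.txt', 'b.txt'] -- bits now cleared
print(incremental(files))         # [] -- nothing changed since the last incremental
```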
RTO & RPO
RTO — Recovery Time Objective
Maximum acceptable downtime after a disaster. "How long can we be offline?"

Low RTO = need fast recovery = hot site, clustered systems, automated failover.
RPO — Recovery Point Objective
Maximum acceptable data loss measured in time. "How much data can we lose?"

Low RPO = need frequent backups or synchronous replication. An RPO of 1 hour = must back up at least every hour.
Backup Media Rotation
  • GFS (Grandfather-Father-Son) — Son = daily, Father = weekly, Grandfather = monthly. The newest backup always goes to the oldest media in the rotation. The standard enterprise tape rotation scheme.
  • FIFO — First In, First Out. The simplest rotation: reuse tapes in order. Risk of tape degradation over time.
  • 3-2-1 Rule — 3 copies of data, on 2 different media types, with 1 copy offsite. Industry best practice for backup strategy.
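The 3-2-1 rule reduces to three checks; a sketch (the data model is a made-up example):

```python
# Check a backup plan against the 3-2-1 rule.
def meets_3_2_1(copies):
    """copies: list of (media_type, is_offsite) tuples, one per backup copy."""
    return (len(copies) >= 3                                # 3 copies of the data
            and len({media for media, _ in copies}) >= 2    # on 2 different media types
            and any(offsite for _, offsite in copies))      # with 1 copy offsite

plan = [("disk", False), ("tape", False), ("cloud", True)]
print(meets_3_2_1(plan))                   # True
print(meets_3_2_1([("disk", False)] * 3))  # False: one media type, nothing offsite
```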
Backup Media Types
  • Magnetic Tape — LTO (Linear Tape-Open). Current standard: LTO-9 (18TB native / 45TB compressed). LTO-10 is specified at 36TB native. Color-coded. Cheapest per TB for archival. Slow sequential access. Libraries connect via SCSI, FC, or iSCSI.
  • HDD / NAS — Disk-to-disk backup. Fast. More expensive per TB than tape. Good for short-term/recent backups. Enables fast restores.
  • Cloud Storage — Offsite by default. Scalable. Variable cost. Restore speed limited by bandwidth. Common choice for the 3-2-1 rule's "1 offsite" copy.
  • Optical — DVD/Blu-ray. Long shelf life. Write-once, read-many (WORM). Limited capacity. Used for archival or compliance.
Disaster Recovery Sites
  • Hot Site — Fully operational: all hardware, network, and power in place. Data: real-time or near-real-time replication. Time to activate: minutes. Cost: highest.
  • Warm Site — Hardware in place; needs configuration and data restore. Data: restore from a recent backup. Time to activate: hours. Cost: medium.
  • Cold Site — Physical space, power, and cooling only; no equipment. Must procure hardware AND restore data. Time to activate: days to weeks. Cost: lowest.
Hot site = fully crewed FOB ready to assume the mission instantly. Warm site = staging area with vehicles and comms, but it needs personnel and orders. Cold site = an empty warehouse with electricity — you build the FOB from scratch after the disaster. Cost rises as activation time shrinks.
Replication
  • Synchronous — A write must complete on BOTH primary and secondary before it is confirmed to the application. Zero data loss. Higher latency. Used for hot sites and mission-critical systems. Distance is limited by latency tolerance.
  • Asynchronous — The write completes on the primary, then replicates to the secondary with a delay. Lower latency impact on the primary. Some data loss is possible (the replication lag = the RPO). Used for warm sites and geographically distant replicas.
  • Bidirectional — Both sites are source and target; changes on either side replicate to the other. Used in active-active DR configurations.
  • Host-Based Replication — Software on the server handles replication (server-to-server). Less expensive than SAN-based; more CPU overhead on the source server.
  • SAN-Based Replication — The storage array handles replication. Transparent to the server. Higher performance. Most enterprise DR environments use this.
DR Testing Methods
  • Tabletop Exercise — Discussion-based. No systems touched. Key personnel walk through a disaster scenario verbally. Identifies gaps in the plan. Most cost-effective. Start here.
  • Simulated Failover — Test recovery procedures in a non-production environment. Systems are not actually failed over. Safer than live failover.
  • Live Failover — The primary site is actually taken offline and traffic fails over to the DR site. Most realistic test, and the riskiest — if the DR site has problems, you have a real outage.
  • Production vs. Non-Production — Test recovery functions in a parallel non-production copy. Verifies the process without risking production.
  • Backup Validation — Regularly restore from backup to verify backups are usable. A backup that can't restore is not a backup. Test restores on a schedule — monthly minimum for most environments.
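A scheduled restore test is easy to automate as a freshness check. The sketch below is illustrative only — the function name, the 30-day policy constant, and the sample dates are assumptions standing in for whatever schedule ("monthly minimum") your environment actually sets:

```python
from datetime import date, timedelta

# Hypothetical check: flag any backup set whose last successful test
# restore is older than the policy allows (30 days here, per the
# "monthly minimum" guidance above).

MAX_AGE_DAYS = 30

def restore_test_overdue(last_test: date, today: date) -> bool:
    """True if the last verified restore is older than the policy allows."""
    return (today - last_test) > timedelta(days=MAX_AGE_DAYS)

today = date(2024, 6, 15)
print(restore_test_overdue(date(2024, 6, 1), today))   # False: 14 days old
print(restore_test_overdue(date(2024, 4, 1), today))   # True: 75 days old
```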
CH 10

Troubleshooting Hardware & Software Issues

Obj 4.1 / 4.2 / 4.4
CompTIA Troubleshooting Methodology — 7 Steps
1. Identify the problem and scope. Question users. Identify recent changes. Collect logs. Replicate if possible. Back up before making changes. Escalate if necessary.
2. Establish a theory of probable cause. Question the obvious. Look for common elements across multiple symptoms. Start with the simplest explanation.
3. Test the theory. If confirmed → determine next steps. If not confirmed → establish a new theory and return to step 2.
4. Establish a plan of action. Determine how to fix it. Notify all impacted users before beginning.
5. Implement the solution or escalate. Make ONE change at a time. Test after each change. If a change doesn't fix it, reverse it before trying the next thing.
6. Verify full functionality. Confirm the system works correctly. Implement preventive measures so the problem doesn't recur.
7. Document findings, actions, and outcomes. Record everything: what the problem was, what caused it, what fixed it, and what was done to prevent recurrence.
This is your SMEAC for tech problems. Always document — it's your after-action report. The next technician who hits this problem (or the next you) will thank you. Never make more than one change at a time or you won't know what fixed it.
Common Hardware Problems
  • SMART Failure — Self-Monitoring, Analysis, and Reporting Technology. The drive predicts its own failure. Check via wmic diskdrive get status (Windows). "OK" = healthy. A SMART failure means replace the drive soon.
  • CMOS Battery Failure — Server loses date/time when powered off. Settings revert to defaults. Replace the CR2032 lithium cell. Cheap fix, easy symptom to miss.
  • POST Beep Codes — BIOS beep patterns during POST indicate which component failed. 1 long + 2 short = video card (common). Varies by BIOS manufacturer. Check documentation.
  • BSOD (Windows) — Kernel crash. First question: "What changed recently?" Unplug external devices, check drivers, look at the stop code, review the memory dump.
  • Purple Screen (VMware) — VMkernel crash on an ESXi host. The VMware equivalent of a BSOD. Check ESXi logs and the PSOD (Purple Screen of Death) error code.
  • Kernel Panic (Linux) — Linux kernel crash. The system halts and cannot safely recover. Often caused by hardware failure or a bad driver/module.
  • Memory Leak — A process allocates RAM and never releases it. The server slows over time. Fix: restart the leaking service. Long-term: patch or replace the software.
  • Runaway Process — A process consuming excessive CPU or memory. Windows: Task Manager → End Task. Linux: top to identify, kill -9 [PID] to terminate.
  • Thermal/Overheating — Check fans, clean dust, verify baffles are in place. Symptoms: thermal throttling, random crashes, burning smell, high-temperature alerts.
  • Power Supply Fault — Random crashes, no POST, intermittent operation. Check the PSU LEDs on the server. Swap with a known-good PSU if available.
OS & Software Troubleshooting
  • SFC (System File Checker) — Windows: sfc /scannow. Validates and repairs corrupted system files. Run from an elevated command prompt.
  • Tripwire — File integrity monitoring. Takes cryptographic hashes of system files. Alerts if any file changes unexpectedly. Available for both Windows and Linux.
  • Clock Skew — Time difference between systems. Kerberos authentication fails if clock skew exceeds 5 minutes. Fix: sync all systems to NTP (port 123). Check the CMOS battery if the clock resets on power off.
  • HCL — Hardware Compatibility List. If hardware isn't on the HCL for the OS, expect driver errors, instability, or refused installation. Check before purchasing.
  • Buffer Overrun — An application writes more data to a buffer than it can hold, overwriting adjacent memory. Can cause crashes, data corruption, and code-execution vulnerabilities. Prevent with bounds checking, ASLR, DEP.
  • Version Compatibility — Backward compatibility = newer software works with older data/systems. Forward compatibility = older software works with newer data/systems. Check release notes before any upgrade — incompatibility causes unexpected failures.
  • Service Dependencies — Services that require other services to be running first. If a dependency fails, the dependent service fails. Windows: the Services MMC shows dependencies. Linux: systemctl list-dependencies. Always review before stopping services to avoid cascading failures.
  • Driver Incompatibility — Wrong driver version for the hardware or OS. Causes device failures, crashes, and errors. Fix: download the correct driver from the manufacturer. Roll back via Device Manager if an update caused the problem.
  • CPU Affinity — Binding a process to a specific CPU core. Can improve cache performance on NUMA systems. Improperly set affinity restricts performance.
  • Safe Mode (Windows) — Boots with minimal drivers and services. Use it to isolate startup problems, remove bad drivers, run malware scans.
  • Single User Mode (Linux) — The Linux equivalent of Safe Mode. Root access, minimal services. Used for system repair and password recovery.
  • Soft Reboot — Graceful OS-initiated shutdown and restart. Flushes buffers, closes open files, lets services stop cleanly. shutdown /r (Windows) or reboot (Linux). Always preferred.
  • Hard Reboot — Physical power cycle: hold the power button or press reset. No graceful shutdown. Risk of file system corruption and data loss. Last resort when the OS is completely unresponsive.
  • WSUS — Windows Server Update Services. Centralized patch management. Approves and deploys patches to Windows systems in the domain.
  • RPM — Red Hat Package Manager. Linux package management for RHEL/CentOS. rpm -i package.rpm to install.
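The Kerberos clock-skew rule above is worth internalizing as arithmetic: authentication fails once client and KDC clocks drift more than 5 minutes (300 seconds) apart. A minimal sketch, with an illustrative function name:

```python
from datetime import datetime, timezone

# Sketch of the Kerberos clock-skew rule: authentication fails if
# client and KDC clocks differ by more than 5 minutes (300 s).

MAX_SKEW_SECONDS = 300

def kerberos_would_fail(client: datetime, kdc: datetime) -> bool:
    """True when clock skew exceeds the default 5-minute tolerance."""
    skew = abs((client - kdc).total_seconds())
    return skew > MAX_SKEW_SECONDS

kdc = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
ok_client = datetime(2024, 1, 1, 12, 4, 0, tzinfo=timezone.utc)    # 4 min off
bad_client = datetime(2024, 1, 1, 12, 6, 30, tzinfo=timezone.utc)  # 6.5 min off
print(kerberos_would_fail(ok_client, kdc))   # False
print(kerberos_would_fail(bad_client, kdc))  # True
```

This is why "users suddenly can't authenticate" plus "server lost its clock after a power cycle" points straight at NTP and, underneath that, the CMOS battery.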
Privilege Escalation & Configuration Management
  • runas (Windows) — Run a program under a different user account without logging off. Used to execute admin tasks from a standard user session. Syntax: runas /user:domain\admin cmd.
  • sudo (Linux) — Superuser Do. Executes a single command with elevated (root) privileges. Logged to the auth log. Preferred over su — limits the scope of privilege and creates an audit trail.
  • su (Linux) — Switch User. su - switches to root with root's full environment. Grants a full root session — higher risk and less auditable than sudo.
  • SCCM — System Center Configuration Manager (Microsoft). Enterprise Windows patch management, software deployment, inventory, and compliance reporting. Deploys software and enforces settings at scale.
  • Puppet / Chef — Agent-based configuration management tools. Define the desired system state in code. Agents on managed nodes continuously enforce the configuration. Cross-platform (Windows and Linux).
  • Ansible — Agentless configuration management. Uses SSH. Playbooks are written in YAML. No agent required on managed nodes. Simpler to deploy than Puppet/Chef. Widely used for both Linux and Windows.
  • GPO — Group Policy Object. Windows Active Directory. Applies configuration to users and computers in the domain. Controls security settings, software deployment, login scripts. Applied in LSDOU order: Local → Site → Domain → OU.
sudo runs ONE command as root and logs it. su grants a full root shell — riskier and less auditable. Always prefer sudo. Puppet/Chef = agent-based (pull model). Ansible = agentless (push via SSH). All three enforce desired configuration state and are specifically called out in the Server+ slides.
Visual & Auditory Diagnostic Cues
  • LED Indicators — #1: System ID (identify which server). #2: Safe to remove. #3: Service action required (fault). #4: Power/OK status.
  • LCD Panel — Some enterprise servers have front-panel LCD showing status codes, temperatures, and error messages.
  • Beep Codes — Different patterns indicate different POST failure types. Must check vendor documentation — not universal.
  • Burning Smell — Immediate hardware failure or overheating. Power down immediately.
  • Clicking/Grinding — HDD mechanical failure. Back up immediately. Replace drive.
CH 11

Network Connectivity & Security Issues

Obj 4.5 / 4.6
Network Troubleshooting Checklist
Check physical: link lights on NIC and switch port, cable integrity, correct cable type (straight-through vs crossover).
Verify IP config: ipconfig (Windows) or ip a (Linux). Check IP, subnet mask, default gateway, DNS server.
Ping loopback: ping 127.0.0.1. If this fails, TCP/IP stack is broken. Reinstall TCP/IP.
Ping local IP. Ping default gateway. Ping external IP. Ping external hostname. Isolate where connectivity breaks down.
If can ping IP but not hostname → DNS issue. Check DNS server config, hosts file, DHCP-assigned DNS server.
Trace the route: tracert (Windows) / traceroute (Linux). Identify which hop is dropping packets.
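The escalation logic in the checklist — "can I resolve the name, and can I reach the resolved address?" — can be sketched in a few lines. This uses a TCP connect instead of ICMP (plain sockets need no raw-socket privileges), and the function name and return labels are illustrative, not from any standard tool:

```python
import socket

# Sketch of the isolation logic above: a failure is classified as a DNS
# problem (name won't resolve) or a network/host problem (name resolves
# but the target can't be reached).

def diagnose(host: str, port: int = 80, timeout: float = 2.0) -> str:
    """Classify a connectivity failure as DNS-level vs network-level."""
    try:
        addr = socket.gethostbyname(host)     # step: name resolution
    except socket.gaierror:
        return "dns-failure"                  # "Unknown host" -> check DNS
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return "reachable"
    except OSError:
        return "unreachable"                  # name resolved; path/port issue

print(diagnose("no-such-host.invalid"))   # .invalid names never resolve -> dns-failure
print(diagnose("localhost", port=9))      # resolves, but port 9 is normally closed
```

"dns-failure" here maps to the "can ping IP but not hostname" scenario: the transport works, name resolution doesn't.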
Common Errors & Root Causes
  • 169.254.x.x address — APIPA: DHCP failure. The device couldn't reach a DHCP server. Check the DHCP server, the switch port, and for rogue DHCP.
  • Destination Host Unreachable — No route to the target. Cause: wrong default gateway, routing table issue, router down, or ISP failure.
  • Request Timed Out — Packet sent but no response. Could be a firewall blocking ICMP, the host being down, or packet loss on the path.
  • Unknown Host — The hostname cannot be resolved. DNS failure. Check the DNS server, the DHCP DNS assignment, and the hosts file.
  • Can ping IP, not hostname — DNS issue. The network works; name resolution doesn't. Fix: correct the DNS server address, fix the DNS zone, or update the hosts file.
  • Cannot reach remote subnet — Wrong subnet mask or wrong default gateway configured. Or a router/routing-table issue.
  • Rogue DHCP — An unauthorized DHCP server assigns wrong config. Fix: enable DHCP snooping on managed switches — it only allows DHCP responses from trusted ports.
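The APIPA check is simple enough to automate with the standard library: any address inside 169.254.0.0/16 means the host self-assigned because DHCP never answered. The function name below is illustrative:

```python
import ipaddress

# Sketch: classify an interface address the way the list above does.
# 169.254.0.0/16 (APIPA / link-local) means the DHCP server was unreachable.

APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def looks_like_dhcp_failure(addr: str) -> bool:
    """True if the address is an APIPA self-assigned address."""
    return ipaddress.ip_address(addr) in APIPA_NET

print(looks_like_dhcp_failure("169.254.13.7"))   # True  -> check DHCP server
print(looks_like_dhcp_failure("192.168.1.50"))   # False -> DHCP lease obtained
```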
Network Diagnostic Commands
Command | OS | Purpose
ipconfig /all | Windows | Show full IP config, MAC address, DNS servers, DHCP lease info.
ip addr / ip a | Linux | Show IP configuration on all interfaces.
ping | Both | Test ICMP connectivity to a host. -t flag (Windows) = continuous ping.
tracert | Windows | Trace packet route hop by hop. Identifies where traffic fails.
traceroute | Linux/Router | Linux/router equivalent of tracert.
nslookup | Windows | DNS query tool. Test name resolution and query specific DNS servers.
dig | Linux | Powerful DNS query tool. More detail than nslookup. Query specific records.
netstat -an | Both | Show all active connections and listening ports with addresses.
netstat -r | Both | Display routing table.
route print | Windows | Display IPv4 routing table.
nbtstat -n | Windows | Show NetBIOS names on the local machine. Legacy but still tested.
telnet [host] [port] | Both | Test connectivity to a specific TCP port. Useful for firewall testing. Do not use for actual remote management — insecure.
nc (netcat) | Linux | Network Swiss army knife. Test port connectivity, transfer files, port scan.
Security Troubleshooting
  • Misconfigured permissions — Users accessing unauthorized resources or blocked from authorized ones. Audit ACLs, group memberships, NTFS permissions, share permissions.
  • Firewall misconfig — Blocking legitimate traffic or allowing unauthorized traffic. Check rule order (rules are evaluated top-down, first match wins). Test with telnet to specific ports.
  • SELinux blocking — Common on RHEL/CentOS. An application won't start even with correct file permissions. Check the audit log: /var/log/audit/audit.log. Set to permissive mode to test: setenforce 0.
  • UAC blocking (Windows) — User Account Control prompting or silently blocking. Run as administrator or adjust UAC policy via GPO.
  • Group Policy conflict — A GPO overriding local settings. Run gpresult /r to see applied policies. Check which GPO is setting the conflicting value.
  • Anti-malware false positive — A security tool blocking legitimate software. Whitelist or create an exception for the application. Verify the detection is actually a false positive first.
Security Tools
  • Port Scanner — Identifies open ports and services on a host. Nmap is the standard. Use for auditing in authorized environments, not attack.
  • Packet Sniffer — Captures and analyzes network traffic. Wireshark is the standard. Captures frames, decodes protocols, filters traffic.
  • SIEM — Security Information and Event Management. Aggregates logs from multiple sources, correlates events, generates alerts. Examples: Splunk, IBM QRadar, Microsoft Sentinel.
  • Anti-malware — Signature-based (known threats) + heuristic/behavioral (unknown threats). Should run on every server.
  • File Integrity Checker — Tripwire, AIDE. Takes cryptographic hashes of files. Detects unauthorized modifications. Good for detecting malware that modifies system files.
CH 12

Troubleshooting Storage Issues

Obj 4.3
Common Storage Problems
  • Drive Not Available / Can't Mount — Check physical connection, backplane power, SAS/SATA cables, driver, filesystem type compatibility. Also check whether the drive shows in BIOS/UEFI even if the OS can't see it.
  • Data Corruption — Linux: fsck /dev/sda1 (must unmount first). Windows: chkdsk C: /f /r. Check for bad sectors. May indicate drive failure — run SMART diagnostics.
  • Slow I/O Performance — Could be: degraded RAID (rebuild in progress), failing drive, cache failure, network saturation (NAS), incorrect RAID configuration, fragmentation (HDD).
  • Restore Failure — Backup tape/file is corrupt or unreadable. This is why you test restores regularly. Check the media, verify the backup software, try an alternate restore method.
  • Cache Battery Failure — RAID controller write cache is disabled (unsafe to use without the battery). Write performance drops dramatically. The system may log a warning. Replace the cache battery on the RAID controller.
  • Array Rebuild — After a drive replacement in a RAID array, the array must rebuild. Performance degrades during the rebuild. RAID 5/6 is vulnerable to an additional failure during the rebuild — this is the danger window.
  • Corrupt Boot Sector / MBR — Server won't boot. Boot from installation media and use repair tools. Windows: bootrec /fixmbr, bootrec /fixboot.
  • Missing GRUB/LILO — Linux bootloader gone or misconfigured. Boot from live media, chroot into the system, reinstall GRUB: grub-install /dev/sda.
  • Mismatched Drives in RAID — All drives in an array should match speed, capacity, and interface (architecture). Mismatched drives cause performance problems and may prevent the array from functioning.
  • Backplane Failure — All drives connected through the backplane become unavailable. The server may not see any drives in the affected bays. Replace the backplane.
  • Improper RAID Configuration — Wrong RAID level for the use case, or misconfigured during setup (wrong stripe size, wrong disk order). May require a full rebuild to correct.
Page/Swap File Performance Indicators
These Windows Performance Monitor counters indicate memory pressure causing excessive disk paging. If you see these thresholds exceeded, the server needs more RAM — not just disk tuning.
  • Free System Page Table Entries <5,000 — System nearly out of page table entries. Likely memory leak.
  • Pool Nonpaged Bytes >175MB — Kernel nonpaged pool abnormally large. Indicates a possible driver memory leak.
  • Pool Paged Bytes >250MB — Kernel paged pool unusually large.
  • Pages/Second >1,000 — Severe thrashing. The system is spending most of its time paging data in and out of RAM. Add RAM immediately.
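These thresholds are easy to encode as a checker. The sketch below is illustrative — the function name and the sample counter values are made up, and the thresholds are the ones listed above:

```python
# Hypothetical checker applying the Performance Monitor thresholds above.

THRESHOLDS = {
    "Free System Page Table Entries": ("below", 5_000),
    "Pool Nonpaged Bytes": ("above", 175 * 1024**2),
    "Pool Paged Bytes": ("above", 250 * 1024**2),
    "Pages/Second": ("above", 1_000),
}

def memory_pressure_alerts(counters: dict) -> list:
    """Return counter names whose values cross the documented thresholds."""
    alerts = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = counters.get(name)
        if value is None:
            continue  # counter not sampled; skip
        if (direction == "below" and value < limit) or \
           (direction == "above" and value > limit):
            alerts.append(name)
    return alerts

sample = {
    "Free System Page Table Entries": 3_200,   # below 5,000 -> alert
    "Pool Nonpaged Bytes": 120 * 1024**2,      # under 175MB -> fine
    "Pages/Second": 1_850,                     # above 1,000 -> alert
}
print(memory_pressure_alerts(sample))
```

Two or more of these firing together is the classic "add RAM, don't tune the disk" signature.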
Storage Diagnostic Commands
Command | OS | Purpose
fsck /dev/sda1 | Linux | Filesystem check and repair. Must unmount first. -y flag auto-answers yes to repairs.
chkdsk C: /f /r | Windows | Check disk for errors (/f = fix) and bad sectors (/r = recover). Requires a reboot if the drive is in use.
badblocks /dev/sda | Linux | Test a drive for bad sectors. A destructive write-test option is available. Slow but thorough.
smartctl -a /dev/sda | Linux | Full SMART attribute report for a drive. Shows health status, error counts, temperature, hours powered on.
wmic diskdrive get status | Windows | Basic SMART status check. "OK" = healthy. Anything else = investigate.
fdisk -l | Linux | List all disks and their partition tables.
sudo fdisk /dev/sda | Linux | Interactive partition manager for a specific disk.
diskpart | Windows | Windows CLI partition and volume management tool.
mount / umount | Linux | Mount or unmount a filesystem. mount -a mounts all entries in /etc/fstab.
net use | Windows | Map or disconnect network drives (Windows equivalent of mount for network shares).
df -h | Linux | Show disk space usage on all mounted filesystems in human-readable format.
du -sh /path | Linux | Show disk usage of a specific directory.
Linux Log File Reference
  • /var/log/messages — General system and kernel messages. Start here for most problems.
  • /var/log/syslog — General system log on Debian/Ubuntu systems (equivalent to /var/log/messages on RHEL).
  • /var/log/auth.log — Authentication events: logins, sudo usage, failed login attempts.
  • /var/log/kern.log — Kernel messages. Hardware errors, driver issues, and kernel panics appear here.
  • /var/log/boot.log — Boot process messages. Services starting/failing during boot.
  • /var/log/cron.log — Scheduled cron job execution and failures.
  • /var/log/httpd/ — Apache web server access and error logs (RHEL/CentOS).
  • /var/log/nginx/ — Nginx web server logs.
  • /var/log/mysqld.log — MySQL/MariaDB database server log.
  • /var/log/secure — Security and authentication log on RHEL/CentOS (equivalent to auth.log).
  • /var/log/audit/audit.log — SELinux audit log. Check here when SELinux is blocking something.
  • /var/log/yum.log — YUM package manager activity (RHEL/CentOS).
  • journalctl — systemd's centralized log viewer on modern Linux. journalctl -xe = recent errors. journalctl -u nginx = logs for a specific service.
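Reading the auth log usually means pattern-matching, not paging line by line. A minimal sketch of extracting failed SSH logins from auth.log-style lines — the sample lines are fabricated for illustration, and the regex targets the common sshd "Failed password" format:

```python
import re

# Minimal sketch: scan auth-log-style lines for failed SSH logins, the
# kind of evidence /var/log/auth.log (or /var/log/secure) holds.

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(lines):
    """Yield (user, source_ip) for each failed-password entry."""
    for line in lines:
        m = FAILED.search(line)
        if m:
            yield m.group(1), m.group(2)

sample = [
    "Jun 1 10:02:11 srv sshd[912]: Failed password for root from 10.0.0.5 port 52100 ssh2",
    "Jun 1 10:02:15 srv sshd[912]: Accepted password for alice from 10.0.0.9 port 52111 ssh2",
    "Jun 1 10:02:19 srv sshd[914]: Failed password for invalid user admin from 10.0.0.5 port 52130 ssh2",
]
print(list(failed_logins(sample)))  # [('root', '10.0.0.5'), ('admin', '10.0.0.5')]
```

Repeated failures for root or for invalid users from one source IP is the standard brute-force signature — exactly what a SIEM correlates automatically.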
Logs are your SIGACT log. When something happens on a server, the evidence is in the logs. Know where to look and how to read them. On modern systemd Linux: journalctl -xe is your first stop — it shows recent errors across the entire system in one command.