
Ceph Storage: Unified Storage for All Your Needs

Ceph Storage for Your Commodity Hardware
March 28, 2026 by Tinihub Inc.


This blog post focuses on understanding how Ceph works from an architectural standpoint. While other storage systems (like LustreFS) may deliver higher raw throughput, Ceph's CRUSH algorithm provides unmatched software-defined resilience and scalability. It is the "brain" of the storage cluster.

Why Ceph is Different: The End of the Metadata Bottleneck

Traditional distributed file systems rely on a static, centralized metadata server (MDS) to keep track of every file and its location. Systems like LustreFS decouple metadata from data very effectively, but the MDS still remains a potential bottleneck. Ceph's core innovation is CRUSH (Controlled Replication Under Scalable Hashing).

CRUSH is not a physical server; it is a mathematical algorithm. When a client wants to write data, it uses the CRUSH map to calculate a placement group (PG) and determine exactly which physical servers (OSDs) should store the data copies. This completely eliminates the need for a metadata lookup on the data path. Every component in the cluster (monitors, managers, and clients) knows the CRUSH map, enabling true peer-to-peer data distribution and parallel access.
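To make the "no metadata lookup" idea concrete, here is a minimal Python sketch of hash-based placement. This is not Ceph's actual CRUSH implementation (real CRUSH walks a weighted hierarchy of hosts, racks, and rows defined in the CRUSH map); the function name and parameters are hypothetical, chosen only to illustrate how any client can independently compute the same object-to-OSD mapping.

```python
import hashlib

def place_object(obj_name: str, pg_num: int, osds: list[int], replicas: int = 3) -> list[int]:
    """Toy CRUSH-style placement: derive an object's storage locations
    purely by computation, with no metadata server consulted."""
    # Step 1: hash the object name into a placement group (PG).
    pg = int(hashlib.md5(obj_name.encode()).hexdigest(), 16) % pg_num
    # Step 2: deterministically rank all OSDs for this PG and take the
    # top `replicas` entries. Every client computes the same ranking.
    ranked = sorted(osds, key=lambda osd: hashlib.md5(f"{pg}:{osd}".encode()).hexdigest())
    return ranked[:replicas]

# Any client running this code arrives at the same answer:
osds = list(range(12))  # twelve hypothetical OSD ids
print(place_object("volume-42/block-7", pg_num=128, osds=osds))
```

Because the mapping is a pure function of the object name and the cluster map, clients talk directly to the right OSDs in parallel; there is no central server on the read/write path to saturate.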

Core Architectural Components Illustrated:

  1. The Brain: Monitors (MONs) & Managers (MGRs): These components form the management layer, maintaining cluster maps, health, and reporting performance analytics.

  2. The Brawn: Object Storage Daemons (OSDs): Every physical disk (HDD or NVMe) has an OSD daemon managing it. Ceph is software-defined; it does not rely on hardware RAID. The OSDs perform data recovery and replication.

  3. The File Layer: CephFS & Metadata (MDS): While the block (RBD) and object (RGW) layers don't need metadata servers, the Ceph File System (CephFS) does. Unlike traditional systems, multiple MDS daemons can be active, and Ceph shards the file system tree across them dynamically as the workload changes.

This dynamic sharding is a massive advantage over systems with a single active MDS: it allows CephFS to scale namespace operations in lockstep with data capacity.
