Learning Ceph: A Unified, Scalable, and Reliable Open Source Storage Solution, 2nd Edition by Anthony D’Atri, Vaibhav Bhembre, Karan Singh – Ebook PDF. ISBN: 1787127915, 978-1787127913

Product details:
ISBN 10: 1787127915
ISBN 13: 978-1787127913
Authors: Anthony D’Atri, Vaibhav Bhembre, Karan Singh
Learning Ceph, Second Edition will give you all the skills you need to plan, deploy, and effectively manage your Ceph cluster. In the first module, you will be introduced to Ceph use cases, its architecture, and its core projects. In the next module, you will learn about hardware and network selection and set up a test cluster. Once your cluster is running, the following module will teach you how to monitor cluster health, improve performance, and troubleshoot any issues that arise. In the last module, you will learn to integrate Ceph with OpenStack and its services, including Glance, Cinder, Swift, and Manila.
By the end of the book you will have learned to use Ceph effectively for your data storage requirements.
Learning Ceph: A Unified, Scalable, and Reliable Open Source Storage Solution, 2nd Edition – Table of contents:
- Reader feedback
- Customer support
- Downloading the example code
- Downloading the color images of this book
- Errata
- Piracy
- Questions
- Introducing Ceph Storage
- The history and evolution of Ceph
- Ceph releases
- New since the first edition
- The future of storage
- Ceph as the cloud storage solution
- Ceph is software-defined
- Ceph is a unified storage solution
- The next-generation architecture
- RAID: the end of an era
- Ceph Block Storage
- Ceph compared to other storage solutions
- GPFS
- iRODS
- HDFS
- Lustre
- Gluster
- Ceph
- Summary
- Ceph Components and Services
- Introduction
- Core components
- Reliable Autonomic Distributed Object Store (RADOS)
- MONs
- Object Storage Daemons (OSDs)
- Ceph manager
- RADOS Gateway (RGW)
- Admin host
- CephFS Metadata Server (MDS)
- The community
- Core services
- RADOS Block Device (RBD)
- RADOS Gateway (RGW)
- CephFS
- Librados
- Summary
- Hardware and Network Selection
- Introduction
- Hardware selection criteria
- Corporate procurement policies
- Power requirements: amps, volts, and outlets
- Compatibility with management infrastructure
- Compatibility with physical infrastructure
- Configuring options for one-stop shopping
- Memory
- RAM capacity and speed
- Storage drives
- Storage drive capacity
- Storage drive form factor
- Storage drive durability and speed
- Storage drive type
- Number of storage drive bays per chassis
- Controllers
- Storage HBA / controller type
- Networking options
- Network versus serial versus KVM management
- Adapter slots
- Processors
- CPU socket count
- CPU model
- Emerging technologies
- Summary
- Planning Your Deployment
- Layout decisions
- Convergence: Wisdom or Hype?
- Planning Ceph component servers
- Rack strategy
- Server naming
- Architectural decisions
- Pool decisions
- Replication
- Erasure Coding
- Placement Group calculations
- OSD decisions
- Back end: FileStore or BlueStore?
- OSD device strategy
- Journals
- Filesystem
- Encryption
- Operating system decisions
- Kernel and operating system
- Ceph packages
- Operating system deployment
- Time synchronization
- Packages
- Networking decisions
- Summary
- Deploying a Virtual Sandbox Cluster
- Installing prerequisites for our Sandbox environment
- Bootstrapping our Ceph cluster
- Deploying our Ceph cluster
- Scaling our Ceph cluster
- Summary
- Operations and Maintenance
- Topology
- The 40,000 foot view
- Drilling down
- OSD dump
- OSD list
- OSD find
- CRUSH dump
- Pools
- Monitors
- CephFS
- Configuration
- Cluster naming and configuration
- The Ceph configuration file
- Admin sockets
- Injection
- Configuration management
- Scrubs
- Logs
- MON logs
- OSD logs
- Debug levels
- Common tasks
- Installation
- Ceph-deploy
- Flags
- Service management
- Systemd: the wave (tsunami?) of the future
- Upstart
- sysvinit
- Component failures
- Expansion
- Balancing
- Upgrades
- Working with remote hands
- Summary
- Monitoring Ceph
- Monitoring Ceph clusters
- Ceph cluster health
- Watching cluster events
- Utilizing your cluster
- OSD variance and fillage
- Cluster status
- Cluster authentication
- Monitoring Ceph MONs
- MON status
- MON quorum status
- Monitoring Ceph OSDs
- OSD tree lookup
- OSD statistics
- OSD CRUSH map
- Monitoring Ceph placement groups
- PG states
- Monitoring Ceph MDS
- Open source dashboards and tools
- Kraken
- Ceph-dash
- Decapod
- Rook
- Calamari
- Ceph-mgr
- Prometheus and Grafana
- Summary
- Ceph Architecture: Under the Hood
- Objects
- Accessing objects
- Placement groups
- Setting PGs on pools
- PG peering
- PG Up and Acting sets
- PG states
- CRUSH
- The CRUSH Hierarchy
- CRUSH Lookup
- Backfill, Recovery, and Rebalancing
- Customizing CRUSH
- Ceph pools
- Pool operations
- Creating and listing pools
- Ceph data flow
- Erasure coding
- Summary
- Storage Provisioning with Ceph
- Client Services
- Ceph Block Device (RADOS Block Device)
- Creating and Provisioning RADOS Block Devices
- Resizing RADOS Block Devices
- RADOS Block Device Snapshots
- RADOS Block Device Clones
- The Ceph Filesystem (CephFS)
- CephFS with Kernel Driver
- CephFS with the FUSE Driver
- Ceph Object Storage (RADOS Gateway)
- Configuration for the RGW Service
- Performing S3 Object Operations Using s3cmd
- Enabling the Swift API
- Performing Object Operations using the Swift API
- Summary
- Integrating Ceph with OpenStack
- Introduction to OpenStack
- Nova
- Glance
- Cinder
- Swift
- Ganesha / Manila
- Horizon
- Keystone
- The Best Choice for OpenStack storage
- Integrating Ceph and OpenStack
- Guest Operating System Presentation
- Virtual OpenStack Deployment
- Summary
- Performance and Stability Tuning
- Ceph performance overview
- Kernel settings
- pid_max
- kernel.threads-max, vm.max_map_count
- XFS filesystem settings
- Virtual memory settings
- Network settings
- Jumbo frames
- TCP and network core
- iptables and nf_conntrack
- Ceph settings
- max_open_files
- Recovery
- OSD and FileStore settings
- MON settings
- Client settings
- Benchmarking
- RADOS bench
- CBT
- FIO
- Fill volume, then random 1M writes for 96 hours, no read verification:
- Fill volume, then small block writes for 96 hours, no read verification:
- Fill volume, then 4k random writes for 96 hours, occasional read verification:
- Summary
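The benchmarking chapter lists FIO scenarios such as "fill volume, then random 1M writes for 96 hours, no read verification." A workload like that might be expressed as a fio job file along these lines; note this is a sketch, not a file from the book, and the target device path `/dev/rbd0` is an assumption (a mapped RADOS Block Device):

```ini
; Sketch of the "fill volume, then 1M random writes, no read
; verification" scenario. Device path and block sizes are assumptions.
[global]
ioengine=libaio
direct=1
filename=/dev/rbd0   ; hypothetical mapped RBD device

[fill]
rw=write             ; sequential fill pass over the whole volume
bs=4M

[randwrite-1m]
stonewall            ; start only after the fill job completes
rw=randwrite         ; random writes, no verify pass
bs=1M
time_based=1
runtime=345600       ; 96 hours in seconds
```

Running it with `fio jobfile.fio` would perform the fill and then the timed random-write phase; omitting fio's `verify` options matches the "no read verification" variant.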


