Ceph Community News (2022-01-17 to 2022-02-16)

rosin · 2022-02-23 · Ceph News · Pacific

Cephalocon 2022 Postponed

Due to the COVID-19 pandemic, Cephalocon 2022, originally scheduled for April 5-7 (April 6-8, Beijing time), has been postponed, and the new dates are yet to be determined. The conference topics have already been released and are listed below; for more details, see the Cephalocon 2022 schedule.

| Category | Topic | Speaker | Institution |
| --- | --- | --- | --- |
| RGW, Performance | Optimizing RGW Object Storage Mixed Media through Storage Classes and Lua Scripting | Curt Bruns & Anthony D'Atri | Intel |
| RGW | RGW: Sync What? Sync Info Provider: Early Peek | Yehuda Sadeh-Weinraub | Red Hat |
| RGW | RGW – An Ultimate S3 Frontend for Multiple Backends: An Implementation Story | Gregory Touretsky & Andriy Tkachuk | Seagate |
| RGW, S3select | S3select: Computational Storage in S3 | Gal Salomon & Girjesh Rajoria | Red Hat |
| RGW | Testing S3 Implementations: RGW & Beyond | Robin Hugh Johnson | DigitalOcean |
| RGW | Introduction to Container Object Storage Interface aka COSI for ceph RGW | Jiffin Tony Thottan | Red Hat |
| RGW | RGW Zipper | Daniel Gryniewicz & Soumya Koduri | Red Hat |
| Cephadm | Lightning Talk: Introduction to Cephadm | Melissa Li | Red Hat |
| Dashboard | Dashboard: Exploring Centralized Logging with Ceph Storage | Gaurav Sitlani & Aashish Sharma | Red Hat |
| Dashboard | Operating Ceph from the Ceph Dashboard: Past, Present and Future | Ernesto Puerta | Red Hat |
| Ceph, QoS, mClock | Ceph QoS Refinements for Background Operations using mClock | Sridhar Seshasayee | Red Hat |
| Ceph, PG | pgremapper: CRUSHing Cluster Operational Complexity | Joshua Baergen | DigitalOcean |
| Ceph, PG | New Workload Balancer in Ceph | Josh Salomon & Laura Flores | Red Hat |
| Ceph, DPDK | Lightning Talk: Ceph Messenger DPDKStack Development and Debugging | Chunsong Feng | Huawei |
| Ceph, Windows | Ceph on Windows | Alessandro Pilotti | Cloudbase Solutions |
| Seastore | What's New with Crimson and Seastore? | Samuel Just | Red Hat |
| Seastore, Crimson | Lightning Talk: Introduction to Crimson from a Newbie | Joseph Sawaya | Red Hat |
| Seastore | Understanding SeaStore Through Profiling | Yingxin Cheng & Tushar Gohad | Intel |
| Bluestore | Accelerating PMEM Device Operations in BlueStore with Hardware Based Memory Offloading Technique | Ziye Yang | Intel |
| Bluestore | Revealing BlueStore Corruption Bugs in Containerized Ceph Clusters | Satoru Takeuchi | Cybozu |
| Dev | Chasing Bad Checksums: A Journey through Ceph, TCMalloc, and the Linux kernel | Mauricio Faria de Oliveira & Dan Hill | Canonical |
| Dev | Lightning Talk: Improving Ceph Build and Backport Automations Using GitHub Actions | Deepika Upadhyay | Red Hat |
| Dev | Ceph Crash Telemetry Observability in Action | Yaarit Hatuka | Red Hat |
| Performance | DisTRaC: Accelerating High-Performance Compute Processing for Temporary Data Storage | Gabryel Mason-Williams | Rosalind Franklin Institute |
| Performance | Putting the Compute in your Storage | Federico Lucifredi & Brad Hubbard | Red Hat |
| Performance | Modifying Ceph for Better HPC Performance | Darren Soothill | CROIT |
| Performance | Over A Billion Requests Served Per Day: Ensuring Everyone is Happy with Our Ceph Clusters' Performance | Jane Zhu & Matthew Leonard | Bloomberg LP |
| Performance | Lessons Learned from Hardware Acceleration Initiatives for Ceph-specific Workloads | Harry Richardson & Lionel Corbet | SoftIron |
| Performance | The Effort to Exploit Modern SSDs on Ceph | Myoungwon Oh | Samsung Electronics |
| Performance | NVMe-over-Fabrics Support for Ceph | Jonas Pfefferle (IBM) & Scott Peterson | Intel |
| Security | Introducing the New RBD Image Encryption Feature | Or Ozeri & Danny Harnik | IBM |
| Security | CephFS At-Rest Encryption with fscrypt | Jeffrey Layton | Red Hat |
| Security | Secure Token Service in Ceph | Pritha Srivastava | Red Hat |
| Security | Data Security and Storage Hardening in Rook and Ceph | Federico Lucifredi & Michael Hackett | Red Hat |
| Ceph application | Improved Business Continuity for an Existing Large Scale Ceph Infrastructure: A Story from Practical Experience | Enrico Bocchi & Arthur Outhenin-Chalandre | CERN |
| Ceph application | How we Operate Ceph at Scale | Matt Vandermeulen | DigitalOcean |
| Ceph application | BoF Session: Ceph in Scientific Computing and Large Clusters | Kevin Hrpcek | Space Science & Engineering Center, University of Wisconsin-Madison |
| Ceph application | Aquarium: An Easy to Use Ceph Appliance | Joao Eduardo Luis & Alexandra Settle | SUSE |
| Ceph application | Stretch Clusters in Ceph: Algorithms, Use Cases, and Improvements | Gregory Farnum | Red Hat |
| Ceph application | We Added 6 Petabytes of Ceph Storage and No Clients Noticed! Here's How We Did It. | Joseph Mundackal & Matthew Leonard | Bloomberg LP |
| Ceph application | Why We Built A "Message-Driven Telemetry System At Scale" Ceph Cluster | Xiaolin Lin & Matthew Leonard | Bloomberg LP |
| Ceph application | Lightning Talk: Ceph and 6G: Are We Ready for zettabytes? | Babar Khan | Technical University Darmstadt |
| Ceph application | Bringing emails@ceph Into the Field | Danny Al-Gaaf | Deutsche Telekom AG |
| Ceph application | Lightning Talk: Ceph and QCOW2 a Match Made in Heaven: From Live Migration to Differential Snapshots | Effi Ofer | IBM |
| Ceph application | Lightning Talk: Installing Ceph on Kubernetes Using the Rook Operator and Helm | Mike Petersen | Platform9 |
| Benchmark | Connecting The Dots: Benchmarking Ceph at Scale | Shon Paz & Ido Pal | Red Hat |
| Benchmark | Introducing Sibench: A New Open Source Benchmarking Optimized for Ceph | Danny Abukalam | SoftIron |

Recently Merged PRs

Recently merged PRs have mainly focused on bug fixes. Notable changes include:

  • mgr: PG recovery is now disabled by default when an OSD is marked in or out; it can be enabled manually as required, which reduces the impact on a cluster that is serving traffic. pr#44588
  • osd: Added the dump_blocked_ops_count option to ceph daemon perf dump so that the number of blocked ops can be obtained quickly, avoiding the overhead of the heavier dump_blocked_ops operation (a hedged query sketch follows this list). pr#44780
  • rgw: The RGW S3 CopyObject interface now supports conditional copy (a client-side sketch follows this list). pr#44678
  • rgw: Fixed excessive memory consumption during radosgw-admin bucket chown. pr#44357
  • rbd: Introduced the rxbounce option for krbd to resolve CRC errors and performance degradation when images are used as block devices on Windows (a mapping sketch follows this list). pr#44842
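The blocked-ops counter from pr#44780 can be polled by monitoring scripts. The sketch below is a rough illustration only: it assumes dump_blocked_ops_count is reachable as an OSD admin-socket command via ceph daemon and returns JSON; the OSD id and the field name are placeholders to verify against the PR.

```python
# Hedged sketch: poll the blocked-ops count of one OSD via its admin socket.
# Assumes dump_blocked_ops_count is exposed as a "ceph daemon" command and
# returns JSON; osd.0 and the JSON field name are placeholders to verify.
import json
import subprocess

def blocked_ops_count(osd_id: int) -> int:
    out = subprocess.run(
        ["ceph", "daemon", f"osd.{osd_id}", "dump_blocked_ops_count"],
        check=True, capture_output=True, text=True,
    ).stdout
    data = json.loads(out)
    # Field name is an assumption; adjust to the actual key in the output.
    return int(data.get("num_blocked_ops", 0))

if __name__ == "__main__":
    print("blocked ops on osd.0:", blocked_ops_count(0))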
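For the conditional copy support in pr#44678, the following minimal sketch shows what a conditional CopyObject looks like from the client side using boto3 against an RGW S3 endpoint. The endpoint, credentials, bucket, and key names are hypothetical, and exactly which conditions RGW honors should be confirmed against the PR.

```python
# Hedged sketch: conditional CopyObject against an RGW S3 endpoint using boto3.
# Endpoint, credentials, bucket, and key names are hypothetical placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8000",   # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Copy only if the source object still has the ETag we last saw
# (maps to the x-amz-copy-source-if-match header).
src_etag = s3.head_object(Bucket="src-bucket", Key="data.bin")["ETag"]
try:
    s3.copy_object(
        Bucket="dst-bucket",
        Key="data.bin",
        CopySource={"Bucket": "src-bucket", "Key": "data.bin"},
        CopySourceIfMatch=src_etag,
    )
except ClientError as e:
    # A 412 Precondition Failed error indicates the condition was not met.
    print("conditional copy rejected:", e.response["Error"]["Code"])
```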
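For the krbd rxbounce option from pr#44842, here is a minimal sketch of mapping an image with the option from an operations script; it assumes rxbounce is passed as a krbd map option via rbd map -o, and the pool and image names are placeholders.

```python
# Hedged sketch: map an RBD image with the krbd rxbounce option enabled.
# Assumes rxbounce is passed as a krbd map option ("-o rxbounce"); pool and
# image names are hypothetical placeholders.
import subprocess

def map_with_rxbounce(pool: str, image: str) -> str:
    """Map pool/image via krbd with rxbounce and return the block device path."""
    result = subprocess.run(
        ["rbd", "map", f"{pool}/{image}", "-o", "rxbounce"],
        check=True,
        capture_output=True,
        text=True,
    )
    return result.stdout.strip()  # e.g. /dev/rbd0

if __name__ == "__main__":
    print(map_with_rxbounce("rbd", "windows-image"))
```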

Recent Ceph Developer News

Each working group in the Ceph community holds regular meetings to discuss and align development progress, and the meeting recordings are uploaded to YouTube. The major meetings are as follows:

| Meeting | Description | Frequency |
| --- | --- | --- |
| Crimson SeaStore OSD Weekly Meeting | Crimson & SeaStore development | Weekly |
| Ceph Orchestration Meeting | Ceph management module (mgr) development | Weekly |
| Ceph DocUBetter Meeting | Documentation improvement | Biweekly |
| Ceph Performance Meeting | Ceph performance optimization | Biweekly |
| Ceph Developer Monthly | Ceph developers | Monthly |
| Ceph Testing Meeting | Version verification and release | Monthly |
| Ceph Science User Group Meeting | Ceph in scientific computing | Irregular |
| Ceph Leadership Team Meeting | Ceph leadership team | Weekly |

Recently, the community has been focusing on freeze testing and verification of the Quincy release. The following topics were discussed at the meetings:

  • In the Quincy tests, read performance meets expectations, but write performance degrades in some scenarios. It has been determined that a 4K min_alloc_size and the BlueStore allocator can improve performance (a hedged config sketch follows this list).
  • As the omap scale increases, the inefficiency of omap_iterator causes a large number of slow ops or even unresponsiveness. The related issue records test results for the two compaction modes: when compaction is triggered manually, latency cannot be restored to its previous level, whereas RocksDB's periodic and TTL compaction, once enabled, bring latency back to the previous level (a config sketch follows this list).
  • A large number of PRs have been merged for the Ceph Benchmark Tool (CBT), focusing on controlling memory resources during large-scale OSD tests and on multi-client concurrent test cases.
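As a rough illustration of the first point, the sketch below inspects and adjusts the BlueStore settings in question (bluestore_min_alloc_size_hdd and bluestore_allocator). Note that min_alloc_size is recorded when an OSD is created, so a new value only takes effect for OSDs deployed afterwards; the 4096 value mirrors the 4K figure discussed above.

```python
# Hedged sketch: inspect and set the BlueStore options discussed above.
# bluestore_min_alloc_size_hdd and bluestore_allocator are real Ceph options,
# but min_alloc_size is baked in at OSD creation time, so changing it only
# affects OSDs created after the change.
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout.strip()

print("min_alloc_size (hdd):", ceph("config", "get", "osd", "bluestore_min_alloc_size_hdd"))
print("allocator:           ", ceph("config", "get", "osd", "bluestore_allocator"))

# Request 4K allocation units for newly created HDD OSDs.
ceph("config", "set", "osd", "bluestore_min_alloc_size_hdd", "4096")
```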
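For the omap latency point, the sketch below shows one way periodic compaction might be enabled for BlueStore's RocksDB instance. It assumes RocksDB's periodic_compaction_seconds key can be appended to the bluestore_rocksdb_options string; the interval is illustrative, and the approach should be validated against the Ceph and RocksDB versions in use.

```python
# Hedged sketch: append RocksDB periodic compaction to BlueStore's RocksDB
# option string. Assumes periodic_compaction_seconds is accepted there; the
# 86400-second interval is illustrative only.
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout.strip()

# Read the current option string so the existing defaults are preserved.
current = ceph("config", "get", "osd", "bluestore_rocksdb_options")

if "periodic_compaction_seconds" not in current:
    ceph("config", "set", "osd", "bluestore_rocksdb_options",
         current + ",periodic_compaction_seconds=86400")

# OSDs read bluestore_rocksdb_options at startup, so a restart is required
# for the change to take effect.
```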
