Ceph Community Update (2022-1-17 ~ 2022-2-16)

rosinL · 2022-02-23 · Ceph News · Pacific


Cephalocon 2022 Postponed

Due to the COVID-19 pandemic, Cephalocon 2022, originally scheduled for April 5-7 US Eastern Time (April 6-8 Beijing time), has been postponed; the revised dates are yet to be determined. The conference program has already been published and is summarized below; for details, see the Cephalocon 2022 sched page.

| Category | Talk | Speaker(s) | Organization |
| --- | --- | --- | --- |
| RGW, Performance | Optimizing RGW Object Storage Mixed Media through Storage Classes and Lua Scripting | Curt Bruns & Anthony D'Atri | Intel |
| RGW | RGW: Sync What? Sync Info Provider: Early Peek | Yehuda Sadeh-Weinraub | Red Hat |
| RGW | RGW – An Ultimate S3 Frontend for Multiple Backends: An Implementation Story | Gregory Touretsky & Andriy Tkachuk | Seagate |
| RGW, S3select | S3select: Computational Storage in S3 | Gal Salomon & Girjesh Rajoria | Red Hat |
| RGW | Testing S3 Implementations: RGW & Beyond | Robin Hugh Johnson | DigitalOcean |
| RGW | Introduction to Container Object Storage Interface aka COSI for ceph RGW | Jiffin Tony Thottan | Red Hat |
| RGW | RGW Zipper | Daniel Gryniewicz & Soumya Koduri | Red Hat |
| Cephadm | Lightning Talk: Introduction to Cephadm | Melissa Li | Red Hat |
| Dashboard | Dashboard: Exploring Centralized Logging with Ceph Storage | Gaurav Sitlani & Aashish Sharma | Red Hat |
| Dashboard | Operating Ceph from the Ceph Dashboard: Past, Present and Future | Ernesto Puerta | Red Hat |
| Ceph, QoS, mClock | Ceph QoS Refinements for Background Operations using mClock | Sridhar Seshasayee | Red Hat |
| Ceph, PG | pgremapper: CRUSHing Cluster Operational Complexity | Joshua Baergen | DigitalOcean |
| Ceph, PG | New Workload Balancer in Ceph | Josh Salomon & Laura Flores | Red Hat |
| Ceph, DPDK | Lightning Talk: Ceph Messenger DPDKStack Development and Debugging | Chunsong Feng | Huawei |
| Ceph, Windows | Ceph on Windows | Alessandro Pilotti | Cloudbase Solutions |
| Seastore | What's New with Crimson and Seastore? | Samuel Just | Red Hat |
| Seastore, Crimson | Lightning Talk: Introduction to Crimson from a Newbie | Joseph Sawaya | Red Hat |
| Seastore | Understanding SeaStore Through Profiling | Yingxin Cheng & Tushar Gohad | Intel |
| Bluestore | Accelerating PMEM Device Operations in BlueStore with Hardware Based Memory Offloading Technique | Ziye Yang | Intel |
| Bluestore | Revealing BlueStore Corruption Bugs in Containerized Ceph Clusters | Satoru Takeuchi | Cybozu |
| Dev | Chasing Bad Checksums: A Journey through Ceph, TCMalloc, and the Linux kernel | Mauricio Faria de Oliveira & Dan Hill | Canonical |
| Dev | Lightning Talk: Improving Ceph Build and Backport Automations Using Github Actions | Deepika Upadhyay | Red Hat |
| Dev | Ceph Crash Telemetry Observability in Action | Yaarit Hatuka | Red Hat |
| Performance | DisTRaC: Accelerating High-Performance Compute Processing for Temporary Data Storage | Gabryel Mason-Williams | Rosalind Franklin Institute |
| Performance | Putting the Compute in your Storage | Federico Lucifredi & Brad Hubbard | Red Hat |
| Performance | Modifying Ceph for Better HPC Performance | Darren Soothill | CROIT |
| Performance | Over A Billion Requests Served Per Day: Ensuring Everyone is Happy with Our Ceph Clusters’ Performance | Jane Zhu & Matthew Leonard | Bloomberg LP |
| Performance | Lessons Learned from Hardware Acceleration Initiatives for Ceph-specific Workloads | Harry Richardson & Lionel Corbet | SoftIron |
| Performance | The Effort to Exploit Modern SSDs on Ceph | Myoungwon Oh | Samsung Electronics |
| Performance | NVMe-over-Fabrics Support for Ceph | Jonas Pfefferle & Scott Peterson | IBM & Intel |
| Security | Introducing the New RBD Image Encryption Feature | Or Ozeri & Danny Harnik | IBM |
| Security | CephFS At-Rest Encryption with fscrypt | Jeffrey Layton | Red Hat |
| Security | Secure Token Service in Ceph | Pritha Srivastava | Red Hat |
| Security | Data Security and Storage Hardening in Rook and Ceph | Federico Lucifredi & Michael Hackett | Red Hat |
| Ceph Applications | Improved Business Continuity for an Existing Large Scale Ceph Infrastructure: A Story from Practical Experience | Enrico Bocchi & Arthur Outhenin-Chalandre | CERN |
| Ceph Applications | How we Operate Ceph at Scale | Matt Vandermeulen | DigitalOcean |
| Ceph Applications | BoF Session: Ceph in Scientific Computing and Large Clusters | Kevin Hrpcek | Space Science & Engineering Center, University of Wisconsin - Madison |
| Ceph Applications | Aquarium: An Easy to Use Ceph Appliance | Joao Eduardo Luis & Alexandra Settle | SUSE |
| Ceph Applications | Stretch Clusters in Ceph: Algorithms, Use Cases, and Improvements | Gregory Farnum | Red Hat |
| Ceph Applications | We Added 6 Petabytes of Ceph Storage and No Clients Noticed! Here’s How We Did It. | Joseph Mundackal & Matthew Leonard | Bloomberg LP |
| Ceph Applications | Why We Built A “Message-Driven Telemetry System At Scale” Ceph Cluster | Xiaolin Lin & Matthew Leonard | Bloomberg LP |
| Ceph Applications | Lightning Talk: Ceph and 6G: Are We Ready for Zettabytes? | Babar Khan | Technical University Darmstadt |
| Ceph Applications | Bringing emails@ceph Into the Field | Danny Al-Gaaf | Deutsche Telekom AG |
| Ceph Applications | Lightning Talk: Ceph and QCOW2 a Match Made in Heaven: From Live Migration to Differential Snapshots | Effi Ofer | IBM |
| Ceph Applications | Lightning Talk: Installing Ceph on Kubernetes Using the Rook Operator and Helm | Mike Petersen | Platform9 |
| Benchmark | Connecting The Dots: Benchmarking Ceph at Scale | Shon Paz & Ido Pal | Red Hat |
| Benchmark | Introducing Sibench: A New Open Source Benchmarking Tool Optimized for Ceph | Danny Abukalam | SoftIron |

Recently Merged Community PRs

Recent PRs have been mostly bug fixes; a selection follows:

  • mgr: PG recovery is now disabled by default when an OSD goes in/out and must be enabled manually when needed, reducing the impact on the serving cluster pr#44588
  • osd: add a dump_blocked_ops_count option to the ceph daemon interface to quickly obtain the number of blocked ops, avoiding the heavy overhead of the original dump_blocked_ops operation (see the first sketch after this list) pr#44780
  • rgw: the RGW S3 CopyObject API now supports conditional copy (see the boto3 sketch after this list) pr#44678
  • rgw: fix excessive memory usage during radosgw-admin bucket chown pr#44357
  • rbd: add a rxbounce option to krbd, fixing CRC errors and the resulting performance degradation when an image is used as a block device for a Windows system pr#44842
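
To illustrate the new blocked-ops counter, here is a minimal Python sketch that queries one OSD through the ceph daemon CLI. It assumes dump_blocked_ops_count is exposed as an admin-socket command and that its JSON output carries a num_blocked_ops field; both details should be verified against pr#44780.

```python
import json
import subprocess

def blocked_ops_count(osd_id: int) -> int:
    """Return the number of blocked ops on one OSD.

    Uses the lightweight dump_blocked_ops_count command from pr#44780
    instead of dump_blocked_ops, which serializes every blocked op.
    The num_blocked_ops output field name is an assumption.
    """
    out = subprocess.run(
        ["ceph", "daemon", f"osd.{osd_id}", "dump_blocked_ops_count"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out).get("num_blocked_ops", 0)

if __name__ == "__main__":
    # Must run on the host that owns osd.0's admin socket.
    print(blocked_ops_count(0))
```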
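And a minimal boto3 sketch of a conditional copy against RGW; the endpoint, credentials, and bucket/object names are all hypothetical. CopySourceIfMatch maps to the standard x-amz-copy-source-if-match header that pr#44678 implements on the server side.

```python
import boto3

# Hypothetical RGW endpoint and credentials; replace with your own.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Fetch the source object's current ETag so the copy can be conditional.
etag = s3.head_object(Bucket="src-bucket", Key="obj")["ETag"]

# The copy proceeds only if the source still matches that ETag, i.e. it
# has not been overwritten in between; otherwise RGW returns 412.
s3.copy_object(
    Bucket="dst-bucket",
    Key="obj-copy",
    CopySource={"Bucket": "src-bucket", "Key": "obj"},
    CopySourceIfMatch=etag,
)
```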

Recent Ceph Developer Activity

Each Ceph community workstream holds regular meetings to discuss and align on development progress, and recordings are uploaded to YouTube afterwards. The main meetings are:

| Meeting | Description | Cadence |
| --- | --- | --- |
| Crimson SeaStore OSD Weekly Meeting | Crimson & SeaStore development | Weekly |
| Ceph Orchestration Meeting | Ceph orchestration module (Mgr) development | |
| Ceph DocUBetter Meeting | Documentation improvement | Biweekly |
| Ceph Performance Meeting | Ceph performance optimization | Biweekly |
| Ceph Developer Monthly | Ceph developer community sync | Monthly |
| Ceph Testing Meeting | Release validation and publishing | |
| Ceph Science User Group Meeting | Ceph in scientific computing | Ad hoc |
| Ceph Leadership Team Meeting | Ceph leadership team sync | |

The community has recently focused on freeze testing and validation of the Quincy release. Key points:

  • Quincy testing: read performance meets expectations, and write performance drops in some scenarios, but it confirms that the 4k min_alloc_size and BlueStore allocator optimizations help performance.
  • As omap data grows, the inefficiency of omap_iterator leads to large numbers of slow_ops and can even make OSDs unresponsive. The issue records test results for two compaction approaches: with manually triggered compaction, latency cannot be restored to its earlier level; with RocksDB's periodic TTL compaction enabled, latency does recover to its earlier level (see the sketch after this list).
  • CBT (Ceph Benchmark Tool) has had a large number of PRs merged, focused on controlling memory usage when testing at large OSD scale and on test cases for concurrent multi-client benchmarking.
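
For reference, a small Python sketch of the two compaction paths compared above. ceph tell osd.N compact is the standard manual trigger; enabling RocksDB's periodic TTL compaction is shown via bluestore_rocksdb_options_annex with the RocksDB option periodic_compaction_seconds, which is an assumption about the exact knob and should be verified for your release.

```python
import subprocess

def manual_compact(osd_id: int) -> None:
    # Manually trigger RocksDB compaction on one OSD; in the issue's
    # tests this alone did not bring latency back down.
    subprocess.run(["ceph", "tell", f"osd.{osd_id}", "compact"], check=True)

def enable_ttl_compaction(seconds: int = 86400) -> None:
    # Append RocksDB's periodic TTL compaction setting; the tests showed
    # latency recovering once this was enabled. Plumbing it through
    # bluestore_rocksdb_options_annex is an assumption.
    subprocess.run(
        ["ceph", "config", "set", "osd",
         "bluestore_rocksdb_options_annex",
         f"periodic_compaction_seconds={seconds}"],
        check=True,
    )

if __name__ == "__main__":
    manual_compact(0)
    enable_ttl_compaction()
```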
