OSiRIS is a pilot project funded by the NSF to evaluate a software-defined storage infrastructure for three Michigan research universities. OSiRIS combines a number of innovative concepts to provide a distributed, multi-institutional storage infrastructure that allows researchers at any of our three campuses to read, write, manage, and share their data directly from their computing facility locations.
Our goal is to provide transparent, high-performance access to the same storage infrastructure from well-connected locations on any of our campuses. We intend to enable this via a combination of network discovery, monitoring, and management tools, and through the creative use of Ceph features.
By providing a single data infrastructure that supports computational access to the data "in place", we can meet many of the data-intensive and collaboration challenges faced by our research communities and enable these communities to easily undertake research collaborations beyond the borders of their own universities.
From addressing climate change to developing drug treatments, data is key to finding solutions to many global problems. Thanks to a National Science Foundation grant of nearly half a million dollars, MSU researchers will soon be able to easily share huge volumes of data with peers at institutions around the world, through the creation of a Science DMZ. A demilitarized zone, or DMZ as it is known in cyberinfrastructure, is a portion of the network designed to deliver high performance for research applications. MSU IT will fund construction and maintenance costs exceeding the NSF award amount, and its staff will serve as co-principal investigators on the grant alongside the Institute for Cyber-Enabled Research (ICER). You can read more about the grant here.
The OSiRIS team updated the Ceph cluster from Nautilus 14.2.9 to Octopus 15.2.4, the latest release of Ceph as of August 2020 and the fourth release of the Ceph Octopus stable release series.
For details, see the release notes section "Major Changes from Nautilus".
OSiRIS installed a third Power Distribution Unit (PDU) in rack 16EB to balance the load after the installation of 11 new servers at UM last year.
OSiRIS expanded our storage this year with the installation of 33 new nodes across the three core storage sites at U-M, WSU, and MSU. Each site is deploying 11 new nodes for a total of about 6PB of new capacity.
In prior years we focused on storage density per node as our most cost-effective path to maximizing available space. Though we have had success with these high-density nodes (~600 TB per system), the low node count has implications for performance, replication times, and potential pool configurations when using erasure coding. This year we took a different approach and bought a higher count of nodes with less storage per node.
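To illustrate why node count constrains erasure-coded pool options: a Ceph k+m erasure code profile splits each object into k data chunks and m coding chunks, each placed on a distinct failure domain (typically a host), so a pool needs at least k+m hosts, and the usable fraction of raw capacity is k/(k+m). The sketch below is illustrative only; the profiles shown are hypothetical examples, not the pool configurations OSiRIS actually runs.

```python
# Illustrative arithmetic for erasure-coded (EC) pool sizing.
# A k+m profile stores k data chunks + m coding chunks per object,
# one chunk per failure domain (usually per host).

def ec_usable_fraction(k: int, m: int) -> float:
    """Fraction of raw capacity that is usable for a k+m EC profile."""
    return k / (k + m)

def min_hosts(k: int, m: int) -> int:
    """Minimum failure domains (hosts) needed to place all k+m chunks."""
    return k + m

# Hypothetical profiles a site with 11 storage hosts could (or could not) run:
for k, m in [(4, 2), (8, 3), (10, 4)]:
    print(f"EC {k}+{m}: needs >= {min_hosts(k, m)} hosts, "
          f"usable {ec_usable_fraction(k, m):.0%} of raw capacity")
```

With only a handful of very dense nodes, wide profiles like 10+4 are simply impossible (14 hosts required), while more nodes of modest capacity open up wider, lower-overhead profiles.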
The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19): November 17–22, 2019, Colorado Convention Center, Denver, CO
Members of the OSiRIS team traveled to SC19 to deploy a pod of equipment in the booth for OSiRIS and SLATE demos. We gained valuable experience and data on Ceph cache tiering, as well as experience with a new ONIE-based switch running the SONiC OS.