High-performance computing (HPC) storage systems have been a key component of HPC's success to date. Recently, we have seen major developments in storage-related technologies, as well as changes in how HPC platforms are used, especially for artificial intelligence and experimental data analysis workloads. These developments merit revisiting HPC storage system architectures. In this paper we discuss the drivers, identify the key challenges these developments pose to the status quo, and discuss directions future research might take to unlock the potential of new technologies for the breadth of HPC applications.
Rich user interfaces like Jupyter have the potential to make interacting with a supercomputer easier and more productive, consequently attracting new kinds of users and helping to expand the application of supercomputing to new science domains. For the scientist-user, the ideal rich user interface delivers a familiar, responsive, introspective, modular, and customizable platform upon which to build, run, capture, document, re-run, and share analysis workflows. From the provider or system administrator perspective, such a platform would also be easy to configure, deploy securely, update, customize, and support. Jupyter checks most, if not all, of these boxes. But from the perspective of leadership computing organizations that provide supercomputing power to users, such a platform should also make the unique features of a supercomputer center more accessible to users and more composable with high-performance computing (HPC) workflows. Project Jupyter's core design philosophy of extensibility, abstraction, and agnostic deployment has allowed HPC centers like NERSC to bring in advanced supercomputing capabilities that extend the interactive notebook environment. This has enabled a rich scientific discovery platform, particularly for experimental-facility data analysis and machine learning problems.
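To make the extensibility point concrete, the sketch below shows one common pattern for composing JupyterHub with an HPC batch scheduler: the hub's pluggable spawner abstraction is pointed at the open-source batchspawner package so each user's notebook server runs as a scheduled job on compute nodes. This is a minimal illustration, not NERSC's actual deployment; the partition and resource values are hypothetical.

```python
# Excerpt from a jupyterhub_config.py (the `c` config object is provided by
# JupyterHub when it loads this file). Assumes the batchspawner package is
# installed and the cluster runs Slurm; all values below are hypothetical.

# Swap the default local-process spawner for a Slurm-backed one, so each
# notebook server is submitted to the scheduler as a batch job.
c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'

# Resource requests injected into the generated sbatch script.
c.SlurmSpawner.req_partition = 'interactive'  # hypothetical partition name
c.SlurmSpawner.req_runtime = '04:00:00'       # wall-clock limit for the session
c.SlurmSpawner.req_memory = '8G'
c.SlurmSpawner.req_nprocs = '4'
```

Because the spawner is just a swappable class behind a stable interface, the same hub can front login-node sessions, batch-scheduled jobs, or containerized services without any change to the notebook environment users see.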
U.S. computing leaders, including the National Science Foundation, have partnered with universities, government agencies, and the private sector to accelerate research into responses to COVID-19, providing an unprecedented collection of resources that includes some of the fastest computers in the world. This article expands on last month's Leadership Computing article by continuing to showcase the range of contributions that the national cyberinfrastructure is making to global efforts to stop the pandemic. It touches on research efforts to learn how SARS-CoV-2 spreads among different populations, to understand the biology and structure of the virus and its mechanisms of infection, and to develop effective vaccines for prevention and antiviral therapies for treatment. Even though we are still early in the process of developing an effective therapeutic response, the rapid mobilization of the national research cyberinfrastructure is a timely reminder of the strategic importance of robust, ongoing investments in large-scale scientific computing.