WORDS 2022

The Third Workshop On Resource Disaggregation and Serverless Computing

November 17th 2022

San Diego, CA and Virtual

Recent hardware developments and application trends are challenging the long-standing server-centric data-center architecture. Two ideas have gained huge traction recently: resource disaggregation and serverless computing. Resource disaggregation breaks a computer server (either physically or virtually) into fine-grained, network-attached hardware resource units that can be shared by different applications. Serverless computing eschews "servers" by allowing users to directly deploy fine-grained programs (or "serverless functions") that are triggered by external events.


Given the natural synergy between these two topics and their growing importance, the 3rd Workshop on Resource Disaggregation and Serverless (WORDS'22) will bring together researchers and practitioners in hardware, software, networking, programming languages, and application domains to engage in a lively discussion on a wide range of topics under the broad umbrella of resource disaggregation and serverless computing.

Program

Opening Remarks   8:40-8:50 PT - Arvind and Yiying

Session 1   8:50-9:30 PT (session chair - William Lin)

Keynote and Invited Industry Talk   9:30-11:00 PT (session chair - Ryan Kosta)

Abstract: Disaggregation refers to moving hardware resources outside the box. Looking at history, we learn that disaggregation succeeds based on two key factors: a burning issue and technical feasibility. We also learn that, once successful, disaggregation brings an impact that is much broader than originally envisioned. We look at memory disaggregation from this perspective; we make the case for why memory disaggregation will finally happen now and discuss some of its potentially surprising impacts: fluid memory, instantaneous VM migration, and cheap Byzantine Fault Tolerance. The wildly different nature of these applications suggests that memory disaggregation will be a research topic of wide interest in the years to come.

Bio: Marcos K. Aguilera is a principal researcher at VMware. He previously worked in research at MSR Silicon Valley, HP Labs, and Compaq SRC. His technical interests span all aspects of distributed systems, including both theory and practice. He has served as program chair for many conferences including OSDI, SoCC, FAST, DISC, OPODIS, and ICDCN. Marcos received an MS and PhD in Computer Science from Cornell University, and a BE in Computer Science from Universidade Estadual de Campinas in Brazil. 



Abstract: The emerging CXL standard opens up a large design space of memory disaggregation systems at server, rack, row, and datacenter scale. The specific design chosen will largely affect the kinds of serverless platforms that can be built in future datacenters. At the same time, we expect that serverless computing will share datacenters with general-purpose cloud computing workloads for some time to come. Thus, the design choices for memory disaggregation systems are largely determined by the general-purpose computing use case. We describe CXL-based memory disaggregation for general-purpose cloud computing with a particular focus on deployability in terms of technological, performance, and cost constraints. We discuss a realistic design point that may serve as a hardware platform on which to build serverless computing platforms.


Bio: Daniel is a Senior Researcher at Microsoft Azure Systems Research and an Affiliate Assistant Professor in Computer Science at the University of Washington. His research focuses on improving cloud efficiency, sustainability, and performance. From 2017 to 2019, Daniel taught at Carnegie Mellon University as a Mark Stehlik Postdoctoral Fellow, with research funded by Facebook. Daniel received his PhD in 2018 from the University of Kaiserslautern in Germany.

Session 2   11:30-12:30 PT (session chair - Zhiyuan Guo)

Lunch   12:30-1:30 PT

Invited Industry Talks   1:30-2:50 PT (session chair - Haoran Ma)

Abstract: Modern computing workloads have become much more demanding in memory consumption. This trend calls for more and better software for in-memory data management. In this presentation, we will share MemVerge's experience with software-defined big-memory computing. We will discuss the motivation behind and the main features of our product, along with some interesting customer success stories.

Bio: Yue is a co-founder and the Chief Technology Officer of MemVerge. Previously, he worked as a senior post-doctoral scholar in memory systems at the California Institute of Technology. Yue has extensive research experience on both theoretical and experimental aspects of algorithms for non-volatile memories. His research has been published in top journals and conferences on data storage. Yue received his PhD in computer science from Texas A&M University, and his bachelor’s degree from Huazhong University of Science and Technology.

Abstract: In this talk, I will describe our experience and challenges with building a “true” serverless MLOps platform as a service on AWS. Our MLOps platform (Navigator) enables users to automate all stages of ML workflows from data import to deployed ML service in a matter of minutes. I will also delve into details of how we built the infrastructure to support serverless deep-learning ML workflows.

Bio: Swaminathan (Swami) Sundararaman is currently a research staff member at IBM Research working on designing and building next-generation storage systems. He was previously the CTO at Pyxeda.ai, where he built a multi-cloud serverless MLOps-as-a-service solution to support AI literacy for all. He holds a Ph.D. from the University of Wisconsin-Madison and holds 20+ patents in distributed systems, operating systems, non-volatile memory, and storage systems. Swami co-founded the OpML and HotEdge conferences, which foster upcoming innovations in operationalizing ML/DL models in production and edge computing, respectively. He has served on the steering committee and program committees of multiple technical conferences.

Abstract: Function-as-a-Service (FaaS) has gained a lot of attention due to its simple programming model, transparent elasticity, and pay-as-you-go charging. FaaS achieves this by separating computation from data: the resulting stateless functions are easy for the platform to scale and manage. However, this comes with a performance hit for data-intensive applications, which always have to access data remotely. Part of the problem is that there is no way for applications to express computation-data affinity to the framework, even if the platform provides caching. In this talk, I describe Palette Load Balancing, a simple abstraction that enables users to express affinity among function invocations and the data they use. Palette's "colors" are hints expressing that two or more functions should be run in the same instance. We implemented Palette on the open-source Azure Functions Host worker and show that we can recover most of the performance lost to lack of locality in a web application backend, and on a serverless implementation of Dask. I also briefly discuss possible advantages of locality when FaaS meets disaggregated memory.
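To make the color abstraction concrete, here is a minimal sketch of color-based affinity routing in the spirit of the abstract above. All names and APIs are invented for illustration; this is not Palette's or Azure Functions' actual interface.

```python
# Hypothetical sketch of color-based affinity routing. Invocations that
# share a "color" hint are pinned to the same instance so they can reuse
# locally cached state; uncolored invocations stay stateless round-robin.

class ColorAwareBalancer:
    def __init__(self, instances):
        self.instances = list(instances)
        self.color_map = {}   # color -> pinned instance
        self.next_rr = 0      # round-robin cursor for uncolored calls

    def route(self, color=None):
        if color is None:
            # No affinity hint: plain round-robin, as in a stateless FaaS.
            inst = self.instances[self.next_rr % len(self.instances)]
            self.next_rr += 1
            return inst
        # Colored invocation: pin the color to an instance on first use,
        # then keep sending it there (computation-data affinity).
        if color not in self.color_map:
            pick = len(self.color_map) % len(self.instances)
            self.color_map[color] = self.instances[pick]
        return self.color_map[color]

balancer = ColorAwareBalancer(["worker-0", "worker-1", "worker-2"])
a = balancer.route(color="session-42")
b = balancer.route(color="session-42")
assert a == b  # same color -> same instance, so cached data is reused
```

The point of the sketch is only the routing invariant: the platform remains free to place colors wherever it likes, but repeated invocations with the same hint land together, which is what lets caching recover locality.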

Bio: Rodrigo is Principal Researcher at Microsoft and leads Azure Systems Research (AzSR) group. The group is focused on innovative systems research, broadly construed, to improve the efficiency and utility of the Azure cloud, while maintaining the security and reliability expected from a public cloud offering. He is broadly interested in cloud computing, operating systems, distributed systems, and networking (interestingly, this currently includes disaggregation and serverless computing!). He obtained his PhD from UC Berkeley, and prior to Microsoft was an Associate Professor at Brown University.

Session 3   2:50-3:30 PT (session chair - Yutong Huang)

Session 4   4:00-5:00 PT (session chair - Yifan Qiao)

Local Venue

Franklin Antonio Hall, UCSD 

3180 Voigt Dr, La Jolla, CA 92093

Call for Papers

We solicit three types of papers: position papers that explore new challenges and design spaces, short papers that describe completed or early-stage work, and abstracts that summarize works published in the past two years.

Topics of interest include but are not limited to:

Research and position paper submissions must be no longer than 5 pages including figures and tables, plus as many pages as needed for references. Abstracts of published works must be no longer than 2 pages, excluding references. Text should be formatted in two columns on 8.5x11-inch paper using 10-point Times-Roman font on 12-point (single-spaced) leading, 1-inch margins, and a 0.25-inch gutter (separation between the columns). New submissions will be double-blind. Abstracts of published works will be single-blind. Authors are allowed to post their papers on arXiv or other public forums.
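As a rough aid, the formatting requirements above could be approximated with a LaTeX preamble along the following lines. This is only a sketch, not an official WORDS template; authors should verify it against the requirements stated above.

```latex
% Hypothetical sketch approximating the WORDS'22 format; not an official template.
\documentclass[10pt,twocolumn]{article}
\usepackage{times}                 % 10-point Times-Roman body font
\usepackage[margin=1in]{geometry}  % 1-inch margins on 8.5x11-inch paper
\setlength{\columnsep}{0.25in}     % 0.25-inch gutter between columns
\linespread{1.0}                   % single spacing (12-point leading at 10pt)
\begin{document}
% paper body
\end{document}
```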


We encourage researchers from all institutions to submit their work for review. Preliminary results of interesting ideas and work-in-progress are welcome. Acceptance to WORDS will not preclude future publication of the work at a major conference. Submissions that are likely to generate vigorous discussion will be favored!

Registration

Registration is free for both in-person and online attendees! Please register online here. The event location is listed as online, but there will be two types of tickets when you click "Reserve a spot": one for online and one for in-person attendance. Please choose the right type. Register for in-person attendance by Nov 14, 4pm PT.


Organization Committee

Program Chairs

Program Committee

General Chair: Zachory Blanco

Local Chair: Zhiyuan Guo

Virtual Chairs: Chenxingyu (Frank) Zhao and Xiangfeng Zhu

Sponsors