Title: Computing the Kreiss Constant of a Matrix
Abstract: As famously introduced by H.-O. Kreiss about six decades ago, what is now called the Kreiss constant of a square matrix bounds the magnitude of the transient behavior of the associated system of ordinary differential (or difference) equations. Although asymptotically stable systems must eventually decay to a steady state, they may nevertheless incur surprisingly large transients before settling down, with consequences ranging from undesirable to catastrophic. For example, large transients are inherently power inefficient and may pose comfort or safety concerns, e.g., in transportation applications. Moreover, if the underlying physical system is nonlinear, large transient behavior in a stable linear model of that system may end up feeding nonlinear instabilities, which in turn could destabilize the actual physical process!
Although Kreiss constants theoretically allow us to robustly predict how severe a linear dynamical system's transient behavior will be, until recently no methods existed to actually compute them. Instead, Kreiss constants have historically been approximated only via crude and ad hoc techniques that may not provide even a single digit of accuracy. A key obstacle to better methods has been that the value of a Kreiss constant is defined via certain global continuous optimization problems that are often nonconvex with multiple local optimizers, and general nonconvex optimization is NP-hard. However, by exploiting special structure specific to our problem of interest, we present the first algorithms to compute Kreiss constants to arbitrary accuracy. In fact, using two very different techniques, we give two distinct classes of algorithms that reliably compute Kreiss constants.
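For readers less familiar with the quantity, the optimization problem mentioned above takes a standard form in the literature (background only, not the speaker's new algorithmic reformulation). In the continuous-time case the Kreiss constant is
\[
\mathcal{K}(A) \;=\; \sup_{\operatorname{Re} z > 0} \, \operatorname{Re}(z) \, \bigl\| (zI - A)^{-1} \bigr\|,
\]
and by the Kreiss Matrix Theorem it brackets the transient growth:
\[
\mathcal{K}(A) \;\le\; \sup_{t \ge 0} \bigl\| e^{tA} \bigr\| \;\le\; e\,n\,\mathcal{K}(A),
\]
where n is the dimension of A. An analogous definition, taking the supremum of \((|z|-1)\,\|(zI - A)^{-1}\|\) over \(|z| > 1\), covers the difference-equation (discrete-time) case and bounds \(\sup_{k \ge 0} \|A^k\|\).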
Title: Coding Techniques to Mitigate Stragglers in Large-Scale Distributed Computation
Abstract: Large-scale computation distributed across servers is often slowed down by stragglers—servers that delay or fail to complete tasks on time. In this talk, I will present two recent advances in coding schemes that mitigate the impact of stragglers in distributed computing, focusing on matrix multiplication and gradient-based optimization. First, I will introduce a coding scheme for matrix multiplication that leverages the execution order of subtasks to improve efficiency. By probabilistically recovering incomplete tasks through coded ones, our method reduces encoding and decoding complexity while maintaining high recoverability. Next, I will present ignore-straggler gradient coding (IS-GC), which tolerates an arbitrary number of stragglers in distributed machine learning. Using a graph-based decoding model, IS-GC maximizes gradient recovery and reduces training time.
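To make the general idea of coding against stragglers concrete, below is a minimal sketch of classical MDS-coded matrix-vector multiplication (an illustration under simplifying assumptions, not the speaker's order-aware scheme or IS-GC): the matrix is split into k row-blocks and encoded into n coded blocks, so the results from any k of the n workers suffice to decode and up to n - k stragglers can simply be ignored.

import numpy as np

# Sketch of classical MDS-coded matrix-vector multiplication (illustrative
# only; not the talk's specific schemes). Split A into k row-blocks, encode
# them into n coded blocks with a Vandermonde (MDS) generator, and decode
# A @ x from ANY k worker results, tolerating up to n - k stragglers.

rng = np.random.default_rng(0)
k, n = 3, 5                       # k data blocks, n workers
m = 6                             # number of rows of A; divisible by k here
A = rng.standard_normal((m, 4))
x = rng.standard_normal(4)

blocks = np.split(A, k)           # k row-blocks of A
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)  # n x k generator

# Worker i computes the product of its coded block with x.
coded_results = [sum(G[i, j] * blocks[j] for j in range(k)) @ x for i in range(n)]

# Suppose workers 1 and 3 straggle; decode from any k survivors.
survivors = [0, 2, 4]
Y = np.stack([coded_results[i] for i in survivors])  # k stacked coded results
decoded = np.linalg.solve(G[survivors], Y)           # recover the uncoded block products
assert np.allclose(np.concatenate(decoded), A @ x)

The decoding step works because any k rows of a Vandermonde matrix with distinct nodes form an invertible system; gradient coding schemes such as IS-GC pursue the same goal for sums of gradients rather than matrix products.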