Research
Trustworthy Deep Learning
Unlearning Information Bottleneck: Machine Unlearning of Systematic Patterns and Biases
Effective adaptation to distribution shifts in training data is pivotal for sustaining robustness in neural networks, especially when removing specific biases or outdated information, a process known as machine unlearning. Traditional approaches typically assume that data variations are random, which makes it difficult to accurately adjust model parameters to remove the patterns associated with the unlearned data. In this work, we present the Unlearning Information Bottleneck (UIB), a novel information-theoretic framework for machine unlearning that explicitly accounts for systematic patterns and biases when adjusting parameters. We propose a variational upper bound that recalibrates model parameters through a dynamic prior integrating changes in the data distribution at an affordable computational cost, allowing efficient and accurate removal of outdated or unwanted data patterns and biases. Our experiments across various datasets, models, and unlearning methodologies demonstrate that our approach effectively removes systematic patterns and biases while maintaining model performance post-unlearning.
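The sketch below illustrates the general shape of an information-bottleneck-style unlearning objective, under assumed simplifications: a Gaussian variational posterior, a fixed prior standing in for the dynamic prior, and a KL-based forgetting term on the data to be unlearned. The module, function names, and loss weights are hypothetical and do not reproduce the UIB formulation itself.

```python
# Hypothetical sketch: an information-bottleneck-style unlearning objective.
# The encoder defines a Gaussian posterior q(z|x); a prior over z (here fixed,
# standing in for a dynamic prior refit from retained data) anchors both the
# compression and the forgetting terms. Names and weights are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticEncoder(nn.Module):
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

def kl_to_prior(mu, logvar, prior_mu, prior_logvar):
    # KL( N(mu, sigma^2) || N(prior_mu, prior_sigma^2) ), summed over z-dims.
    var, prior_var = logvar.exp(), prior_logvar.exp()
    kl = 0.5 * (prior_logvar - logvar + (var + (mu - prior_mu) ** 2) / prior_var - 1.0)
    return kl.sum(dim=-1)

def unlearning_ib_loss(encoder, head, x_retain, y_retain, x_forget,
                       prior_mu, prior_logvar, beta=1e-2, gamma=1.0):
    """Keep utility on retained data while compressing away the forget set."""
    # Utility term: predict labels of retained data from a sampled z.
    mu_r, logvar_r = encoder(x_retain)
    z_r = mu_r + torch.randn_like(mu_r) * (0.5 * logvar_r).exp()
    ce = F.cross_entropy(head(z_r), y_retain)

    # Compression term on retained data (standard IB regularizer).
    kl_r = kl_to_prior(mu_r, logvar_r, prior_mu, prior_logvar).mean()

    # Forgetting term: push forget-set representations toward the prior,
    # i.e. strip information specific to the pattern being unlearned.
    mu_f, logvar_f = encoder(x_forget)
    kl_f = kl_to_prior(mu_f, logvar_f, prior_mu, prior_logvar).mean()

    return ce + beta * kl_r + gamma * kl_f

# Toy usage.
enc, head = StochasticEncoder(32, 16), nn.Linear(16, 10)
prior_mu, prior_logvar = torch.zeros(16), torch.zeros(16)
x_r, y_r, x_f = torch.randn(8, 32), torch.randint(0, 10, (8,)), torch.randn(8, 32)
unlearning_ib_loss(enc, head, x_r, y_r, x_f, prior_mu, prior_logvar).backward()
```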
Statistical Learning
Statistical Estimation of Diffusion Network Topologies
Reconstructing the topology of a diffusion network from observed diffusion results is an open challenge in data mining. Existing approaches mostly assume that the observed diffusion results contain not only the final infection statuses of nodes but also the exact timestamps at which infections occur. Nonetheless, exact infection timestamps are often unavailable in practice, due to the high cost and uncertainty of monitoring node infections. In this work, we investigate the problem of inferring the topology of a diffusion network from only the final infection statuses of nodes. To this end, we propose a new scoring criterion for diffusion network reconstruction, which estimates the likelihood of candidate topologies of the target diffusion network from infection statuses with relatively low statistical error. As the proposed scoring criterion is decomposable, the problem reduces to finding, for each node in the network, the set of most probable parent nodes that maximizes a local score. Furthermore, to eliminate redundant computations during this search, we identify insignificant candidate parent nodes, those whose infections have negative or extremely low positive correlations with the infections of the corresponding child node, and exclude them from the search space. Extensive experiments on both synthetic and real-world networks verify the effectiveness and efficiency of our approach.
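A simplified sketch of the per-node parent-set search described above, under stated assumptions: a BIC-style local score on binary infection statuses stands in for the paper's scoring criterion, and a Pearson-correlation threshold implements the pruning of negatively or weakly correlated candidates. Function names, the score, and the threshold are illustrative.

```python
# Simplified sketch: parent-set search from final infection statuses only.
# The local score below is a BIC-style log-likelihood of a child's status given
# its candidate parents' statuses; the paper's actual criterion may differ.
import numpy as np
from itertools import combinations

def local_score(child, parents, data):
    """BIC-style score of P(child | parents) estimated from binary statuses."""
    n = data.shape[0]
    y = data[:, child]
    if not parents:
        p = y.mean().clip(1e-6, 1 - 1e-6)
        ll = (y * np.log(p) + (1 - y) * np.log(1 - p)).sum()
        return ll - 0.5 * np.log(n)
    # Group cascades by the joint configuration of parent statuses.
    keys = [tuple(row) for row in data[:, parents].astype(int)]
    ll, n_params = 0.0, 0
    for key in set(keys):
        mask = np.array([k == key for k in keys])
        p = y[mask].mean().clip(1e-6, 1 - 1e-6)
        ll += (y[mask] * np.log(p) + (1 - y[mask]) * np.log(1 - p)).sum()
        n_params += 1
    return ll - 0.5 * n_params * np.log(n)

def candidate_parents(child, data, min_corr=0.05):
    """Prune candidates whose infections correlate negatively or only weakly
    with the child's infections (the pruning rule sketched above)."""
    corr = np.corrcoef(data, rowvar=False)[child]
    return [j for j in range(data.shape[1])
            if j != child and np.nan_to_num(corr[j]) > min_corr]

def infer_parents(child, data, max_parents=2):
    cands = candidate_parents(child, data)
    best, best_score = (), local_score(child, [], data)
    for k in range(1, max_parents + 1):
        for combo in combinations(cands, k):
            s = local_score(child, list(combo), data)
            if s > best_score:
                best, best_score = combo, s
    return set(best)

# Toy usage: data is a (cascades x nodes) binary matrix of final statuses.
rng = np.random.default_rng(0)
src = rng.integers(0, 2, size=(200, 1))
data = np.hstack([src, (src ^ (rng.random((200, 1)) < 0.1)).astype(int),
                  rng.integers(0, 2, size=(200, 1))])
print({v: infer_parents(v, data) for v in range(data.shape[1])})
```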
Online Learning
Online Scheduling of Distributed Machine Learning Jobs for Incentivizing Sharing in Multi-Tenant Systems
To save cost, companies usually train machine learning (ML) models on a shared multi-tenant system. In this cooperative environment, a fundamental challenge is how to distribute resources fairly among tenants so that every tenant is satisfied. A satisfactory allocation policy needs to meet the following properties. First, the performance of each tenant in the shared cluster is at least as good as in its exclusive cluster partition. Second, no tenant can gain by misreporting its demands. Third, tenants cannot use the idle resources of others for free. Moreover, resource allocation for ML workloads should avoid costly migration overhead. To this end, we propose Astraea, a three-layer scheduling framework: i) a batch scheduling framework groups unprocessed ML jobs into multiple batches; ii) a round-by-round algorithm enables tenants to reserve their share of resources and schedules ML jobs in a non-preemptive manner; iii) a one-round algorithm, based on the primal-dual approach and a posted-pricing framework, encourages tenants to report their demands truthfully and computes a revenue-maximizing schedule for each ML job. In addition, our algorithm allows tenants to trade unused resources on a paid basis. Astraea is proven to achieve a performance guarantee, polynomial time complexity, and several desirable resource-sharing properties, including batch sharing incentive, strategy-proofness, Pareto efficiency, and gain-as-you-contribute fairness. Extensive trace-driven simulations show that our algorithm improves both fairness and cluster efficiency by at least 20% compared to three state-of-the-art baselines.
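The toy sketch below conveys the posted-pricing idea behind a one-round admission rule: per-slot prices grow with utilization, and a job is scheduled only if its reported value exceeds the posted cost of the resources it would occupy. The price function, class names, and parameters are illustrative assumptions, not Astraea's actual algorithm, which additionally handles fairness guarantees, reservations, and resource trading.

```python
# Illustrative sketch of a posted-pricing admission rule for one scheduling round.
# Prices rise exponentially with utilization, so a job is admitted only if its
# reported value exceeds the posted cost of the slots and GPUs it would occupy.
# The price function and parameters are assumptions, not Astraea's exact design.
from dataclasses import dataclass

@dataclass
class Job:
    tenant: str
    value: float       # tenant's reported valuation for completing the job
    gpus: int          # GPUs needed per time slot
    slots: int         # number of consecutive time slots needed

class PostedPriceScheduler:
    def __init__(self, capacity, horizon, p_min=0.1, p_max=10.0):
        self.capacity = capacity                 # GPUs available per slot
        self.used = [0] * horizon                # GPUs committed per slot
        self.p_min, self.p_max = p_min, p_max

    def price(self, slot):
        # Exponential-in-utilization marginal price, a common choice in
        # primal-dual analyses of online resource allocation.
        u = self.used[slot] / self.capacity
        return self.p_min * (self.p_max / self.p_min) ** u

    def try_admit(self, job, start):
        slots = range(start, start + job.slots)
        if any(self.used[t] + job.gpus > self.capacity for t in slots):
            return False
        cost = sum(self.price(t) * job.gpus for t in slots)
        if job.value <= cost:          # posted price not met: reject
            return False
        for t in slots:                # commit resources non-preemptively
            self.used[t] += job.gpus
        return True

# Toy usage: admit jobs in arrival order within a 10-slot round, 16-GPU cluster.
sched = PostedPriceScheduler(capacity=16, horizon=10)
jobs = [Job("A", value=6.0, gpus=8, slots=3), Job("B", value=1.0, gpus=8, slots=3),
        Job("C", value=9.0, gpus=8, slots=3)]
for j in jobs:
    print(j.tenant, "admitted" if sched.try_admit(j, start=0) else "rejected")
```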