In networks consisting of agents communicating with a central coordinator and working together to solve a global optimization problem in a distributed manner, the agents are often required to solve private proximal minimization subproblems. Such a setting often requires a decomposition method to solve the global distributed problem, resulting in extensive communication overhead. In networks where communication is expensive, it is crucial to reduce the communication overhead of the distributed optimization scheme. Gaussian processes (GPs) are effective at learning the agents’ local proximal operators, thereby reducing the communication between the agents and the coordinator. We propose combining this learning method with adaptive uniform quantization in a hybrid approach that achieves further communication reduction. In our approach, the GP algorithm is modified to account for the noise statistics introduced by data quantization. We further improve the approach by applying an orthogonalization process to the quantizer’s input to address the inherent correlation among its components, and we use dithering to ensure that the quantization noise is uncorrelated with the quantizer’s input. We propose multiple measures to quantify the trade-off between the communication cost reduction and the accuracy/optimality of the optimization solution. Under these metrics, the proposed algorithms achieve significant communication reduction for distributed optimization with acceptable accuracy, even at low quantization resolutions. This result is demonstrated by simulations of a distributed sharing problem with quadratic cost functions for the agents.
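To illustrate the quantization pipeline summarized above, the following is a minimal Python sketch of subtractively dithered uniform quantization preceded by an orthogonalization of the quantizer’s input. It is a sketch under stated assumptions, not the paper’s implementation: the step size, the placeholder covariance estimate, and all function names are illustrative.

```python
import numpy as np

def dithered_uniform_quantize(x, step, rng):
    """Subtractively dithered uniform quantizer (illustrative).

    A uniform dither drawn from U(-step/2, step/2) is added before rounding
    and subtracted after reconstruction, which keeps the quantization noise
    uncorrelated with the quantizer's input.
    """
    dither = rng.uniform(-step / 2, step / 2, size=x.shape)
    q = step * np.round((x + dither) / step)
    return q - dither  # reconstructed value used by the coordinator

def orthogonalize(x, cov):
    """Decorrelate the quantizer input with an eigen-decomposition of an
    (assumed available) covariance estimate of the input vector."""
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs.T @ x, eigvecs  # rotated input and the transform used

# Toy usage: quantize one agent's local update before transmission.
rng = np.random.default_rng(0)
x = rng.normal(size=4)          # stand-in for a local variable to transmit
cov = np.eye(4)                 # placeholder covariance estimate (assumption)
z, T = orthogonalize(x, cov)
z_q = dithered_uniform_quantize(z, step=0.25, rng=rng)
x_hat = T @ z_q                 # undo the rotation at the receiver side
```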
In distributed optimization schemes consisting of a group of agents connected to a central coordinator, the optimization algorithm often involves the agents solving private local proximal minimization sub-problems and exchanging data frequently with the coordinator to solve the global distributed problem. Such schemes usually incur excessive communication costs, motivating communication reduction in scenarios where communication is costly. Integrating Gaussian Processes (GPs) as a learning component into the Alternating Direction Method of Multipliers (ADMM) has proven successful in learning each agent’s local proximal operator; consequently, this learning technique is effective in reducing the communication between the agents and the coordinator. A key element in the integration of GP into the ADMM algorithm is the mechanism by which the coordinator decides when communication with an agent is required. The decision strategy used in this setting strongly affects both the overall reduction in communication expenditure and the performance of the ADMM algorithm. For that reason, we construct a general framework that casts the querying mechanism as an optimization problem balancing maximization of the communication cost reduction against minimization of the prediction error. Motivated by this framework, we propose different query strategies that use the uncertainty measure given by the GP to determine whether the prediction is reliable enough to skip a communication round. We propose a joint query strategy, derived as a simplification of the general framework, that minimizes an L1-norm communication cost constrained by the trace of the joint uncertainty of the ADMM variables, computed from all agents’ prediction uncertainties. Additionally, we derive three decision mechanisms, motivated by a confidence interval analysis, that decide by analyzing each agent’s uncertainty measure individually. We integrate multiple measures to quantify the trade-off between the communication cost reduction and the accuracy/optimality of the optimization solution. The proposed methods achieve significant communication reduction with good solution accuracy for distributed optimization, as demonstrated by extensive simulations of a distributed sharing problem with quadratic cost functions for the agents.
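As a rough illustration of a per-agent, uncertainty-based query rule of the kind described above, the sketch below uses scikit-learn’s GaussianProcessRegressor as a stand-in for the GP model: the coordinator skips a communication round when the predictive variance at the current input falls below a threshold, and otherwise queries the agent. The variance threshold, the scalar proximal output, and the synthetic training data are assumptions made only for this example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def should_query(gp, z_input, var_threshold):
    """Per-agent query rule (illustrative): skip communication if the GP's
    predictive variance at the coordinator's current input is small enough;
    otherwise query the agent for its exact proximal update."""
    mean, std = gp.predict(z_input.reshape(1, -1), return_std=True)
    if std[0] ** 2 <= var_threshold:
        return False, mean[0]   # trust the prediction, no communication
    return True, None           # communicate, then retrain with the new sample

# Toy usage with synthetic (input, proximal output) pairs standing in for
# data collected during earlier ADMM iterations.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
gp = GaussianProcessRegressor().fit(X, y)
query, prediction = should_query(gp, rng.normal(size=2), var_threshold=0.05)
```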