and, in polynomial time, either returns a feasible solution, or finds a variable \(x_i\) that must be scaled up. The natural reformulation for 0/1 variables has the additional property that any variables that should be scaled up can instead be fixed to a bound. The trivial solution, \(x = 0\), is always feasible; however, whenever Chubanov's method returns a solution, that solution is strictly positive.
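To make the dichotomy concrete, here is a minimal numpy sketch of the projection-based "basic procedure" at the heart of Chubanov's method, written for the homogeneous form: find \(z > 0\) with \(Az = 0\). It uses a von Neumann-style update over the columns of the projector onto \(\ker A\). The function name, the iteration budget, and the return conventions are my own; the stopping thresholds and the outer rescaling loop of the full method are omitted.

```python
import numpy as np

def basic_procedure(A, max_iter=10_000, eps=1e-9):
    """Von Neumann-style basic procedure on the null space of A.

    Seeks z > 0 with A z = 0 by shrinking ||z|| over the convex hull
    of the columns of P, the orthogonal projector onto null(A).
    Returns ("feasible", z) with z strictly positive, or ("small", z)
    when ||z|| has become tiny -- the signal that, in the full method,
    some variable must be rescaled (or, for 0/1 models, fixed).
    """
    n = A.shape[1]
    # Orthogonal projector onto null(A); pinv handles rank deficiency.
    P = np.eye(n) - A.T @ np.linalg.pinv(A @ A.T) @ A
    y = np.full(n, 1.0 / n)   # weights in the unit simplex
    z = P @ y                 # z = P y, so A z = 0 throughout
    for _ in range(max_iter):
        if np.all(z > eps):
            return ("feasible", z)   # strictly positive null-space point
        if z @ z < eps * eps:
            return ("small", z)      # ||z|| tiny: trigger rescaling
        k = int(np.argmin(z))        # a coordinate with z_k <= 0
        d = P[:, k] - z
        if d @ d < eps * eps:
            return ("small", z)
        # Step toward column k, minimising ||z + lam * d|| over [0, 1].
        lam = float(np.clip(-(z @ d) / (d @ d), 0.0, 1.0))
        y *= 1.0 - lam
        y[k] += lam
        z = (1.0 - lam) * z + lam * P[:, k]
    return ("small", z)              # iteration budget exhausted

# Feasible instance: x1 + x2 = x3 admits strictly positive solutions.
A1 = np.array([[1.0, 1.0, -1.0]])
status, z = basic_procedure(A1)

# Infeasible for strict positivity: x1 + x2 = 0 forces a sign change.
A2 = np.array([[1.0, 1.0]])
status2, _ = basic_procedure(A2)
```

The first system returns "feasible" with a strictly positive \(z\); the second returns "small", since no strictly positive point lies in \(\ker A\).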
Peña and Soheili \cite{rescaling} recast the method as a variant of the Perceptron method. Soheili also related the Perceptron method to Lagrangian optimisation in her doctoral dissertation \cite{problems}.
I hope this alternative point of view on Chubanov's method, as a Lagrangian optimisation (dual ascent) method, will prove useful for practical applications.

Projecting equality constraints away

Given a system of linear equalities and inequalities of the form