# Another perspective: optimization with linear constraints (penalty method) - comments appreciated!

• This problem can be restated as an optimization problem with linear constraints.

Let the 2-d vector f_i = (f_{i,0}, f_{i,1}) be the counts of 0's and 1's in the i-th string `strs[i]` (i = 0, ..., N-1), and let F = [f_0, ..., f_{N-1}] be the 2×N matrix whose columns are the f_i. Then the problem is equivalent to

• maximize the inner product f(x) = e·x subject to the constraints x ≥ 0 and b - Fx ≥ 0, where

e = (1, 1, ..., 1) is a constant N-d vector, b = (m, n) is given, and "≥" means component-wise greater than or equal. (Strictly speaking, the original problem also requires each x_i to be 0 or 1, i.e. x ≤ 1 plus integrality; dropping integrality gives the continuous relaxation treated below.)
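To make the notation concrete, here is a minimal sketch (NumPy assumed; the instance `strs`, m = 5, n = 3 is just an illustrative example) that builds F, e, and b and evaluates the objective and slack at a feasible 0/1 choice:

```python
import numpy as np

def build_F(strs):
    """Column i holds f_i = (count of '0's, count of '1's) in strs[i]."""
    return np.array([[s.count('0') for s in strs],
                     [s.count('1') for s in strs]])

strs = ["10", "0001", "111001", "1", "0"]  # example instance
F = build_F(strs)                 # 2 x N matrix
e = np.ones(len(strs))            # objective coefficients
b = np.array([5, 3])              # budget b = (m, n)

x = np.array([1, 1, 0, 1, 1])     # pick "10", "0001", "1", "0"
print(e @ x)                      # objective value: 4.0
print(b - F @ x)                  # slack stays component-wise >= 0
```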

This is a standard optimization problem with linear constraints. If you are familiar with the penalty method, it can be converted to an unconstrained optimization problem:

• maximize F(x) = f(x) - r‖min(0, [x, b - Fx])‖²,

where "min( , )" means component-wise minimum, [x, b - Fx] is the concatenation of the two constraint vectors, and r > 0 is the penalty factor. Note that the quadratic penalty p(t) = min(0, t)² is differentiable, so one could pick an initial guess x0 and follow the gradient dF(x)/dx to search for a maximum point. However, for an N-dimensional problem this seems very costly.
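A minimal sketch of that idea (NumPy assumed; the instance, r = 10, the step size, and the iteration count are all arbitrary choices for illustration). Since p(t) = min(0, t)² has derivative p'(t) = 2·min(0, t), the chain rule gives the gradient used below:

```python
import numpy as np

def penalized(x, e, F, b, r):
    """F(x) = e.x - r * ||min(0, [x, b - Fx])||^2  (the objective above)."""
    p = np.concatenate([x, b - F @ x])
    return e @ x - r * (np.minimum(0.0, p) ** 2).sum()

def grad(x, e, F, b, r):
    # d/dx of the penalized objective, using p'(t) = 2*min(0, t):
    return e - 2*r*np.minimum(0.0, x) + 2*r*(F.T @ np.minimum(0.0, b - F @ x))

# small instance: column i holds the (zeros, ones) counts of string i
F = np.array([[1, 3, 2, 0, 1],
              [1, 1, 4, 1, 0]], dtype=float)
e, b, r = np.ones(5), np.array([5.0, 3.0]), 10.0

x = np.zeros(5)                        # initial guess x0
for _ in range(20000):                 # plain fixed-step gradient ascent
    x += 1e-3 * grad(x, e, F, b, r)

print(e @ x, b - F @ x)  # near-feasible approximate maximizer
```

For finite r the maximizer slightly violates the constraints (that is inherent to the quadratic penalty); increasing r tightens feasibility at the cost of a stiffer, harder-to-step problem.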

Is there a standard solver used in practice for N-dimensional optimization problems like this?
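For the continuous relaxation (with the bound x ≤ 1 added, as noted above), an off-the-shelf LP solver already handles this directly; a sketch with SciPy's `linprog`, assuming SciPy is available. `linprog` minimizes, so we negate e:

```python
import numpy as np
from scipy.optimize import linprog

strs = ["10", "0001", "111001", "1", "0"]  # example instance
m, n = 5, 3

# columns of F are the (zeros, ones) counts f_i
F = np.array([[s.count('0') for s in strs],
              [s.count('1') for s in strs]])

# linprog minimizes c.x, so set c = -e; constraints F x <= b, 0 <= x <= 1
res = linprog(c=-np.ones(len(strs)), A_ub=F, b_ub=[m, n], bounds=(0, 1))
print(-res.fun)  # LP optimum: an upper bound on the integer answer
```

The LP value only upper-bounds the true subset-selection answer; recovering the exact answer needs integrality (e.g. a MILP solver or the usual knapsack DP), since the relaxation can return fractional x_i.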
