But the parent comment is not talking about constrained optimization, just gradient following.
In the context of this post, that’s just “which of these N discrete variables, if moved from 0 to 1, will increase the quantity of interest according to the linear model?” “Which will decrease it?”
The question is not, “if I can only set M of these N variables to 1, which should I choose?”
That’s a good question, and it leads to problems in NP, but that’s not what the comment was referring to.
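In the unconstrained setting, "following the gradient" of a linear model over {0,1}^N really is just reading off the signs of the coefficients. A minimal sketch (coefficients made up for illustration):

```python
# With a linear model y = sum(c_i * x_i) over x in {0,1}^N, flipping x_i
# from 0 to 1 changes y by exactly c_i, so the sign of each coefficient
# says whether that flip increases or decreases the quantity of interest.
coeffs = [2.5, -1.0, 0.0, 4.2, -3.3]  # example coefficients c_i

increases = [i for i, c in enumerate(coeffs) if c > 0]
decreases = [i for i, c in enumerate(coeffs) if c < 0]

print(increases)  # -> [0, 3]
print(decreases)  # -> [1, 4]
```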
> In the context of this post, that’s just “which of these N discrete variables, if moved from 0 to 1, will increase the quantity of interest according to the linear model?” “Which will decrease it?”
Yes, you are right in that abstract setting.
If you always have the full hypercube available, the problem is as easy as you describe. But if there are constraints between the variables, it gets hairier.
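One way to see it get hairier: a tiny 0/1 knapsack (all numbers made up), where a coupling constraint makes naive coefficient-following miss the optimum:

```python
from itertools import product

# Maximize sum(values[i] * x[i]) subject to sum(weights[i] * x[i]) <= budget,
# with x in {0,1}^3. The constraint couples the variables together.
values  = [6, 5, 5]
weights = [5, 4, 4]
budget  = 8

# Greedy coefficient-following: take items in order of decreasing value
# while they still fit. It grabs item 0 (value 6, weight 5), after which
# nothing else fits, so it stops at 6.
greedy_val, used = 0, 0
for v, w in sorted(zip(values, weights), reverse=True):
    if used + w <= budget:
        greedy_val += v
        used += w

# Brute force over the feasible corners of the hypercube finds {1, 2} -> 10.
brute_val = max(
    sum(v * x for v, x in zip(values, xs))
    for xs in product((0, 1), repeat=3)
    if sum(w * x for w, x in zip(weights, xs)) <= budget
)
print(greedy_val, brute_val)  # -> 6 10
```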
Which seems almost ironic, because truly continuous linear optimization arguably doesn't exist: real numbers can only be approximated, so at some level we're always doing discrete linear optimization.
If all the numbers that appear in your constraints are rational (p/q with finite p and q), then any solution is also a rational number (with finite numerator and finite denominator).
(Well, any finite solution. Your solution could also be unbounded, then you might have infinities in there.)
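You can check the rational-solution claim directly: a bounded LP optimum sits at a vertex, i.e. the solution of a linear system built from the (rational) binding constraints, and Cramer's rule keeps everything rational. A sketch with Python's exact Fraction type (the constraint numbers are made up):

```python
from fractions import Fraction as F

# A vertex is the intersection of binding constraints, e.g.
#   3x + 2y = 7
#   1x + 5y = 11
# Cramer's rule gives each coordinate as a ratio of determinants of
# rational numbers, hence itself rational -- no irrationals appear.
a, b, e = F(3), F(2), F(7)
c, d, f = F(1), F(5), F(11)

det = a * d - b * c            # 3*5 - 2*1 = 13
x = (e * d - b * f) / det      # (35 - 22) / 13 = 1
y = (a * f - e * c) / det      # (33 - 7)  / 13 = 2
print(x, y)  # -> 1 2
```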