Recall that, by weak duality, any feasible point for this problem gives a valid upper bound on the primal objective $\mathbf{P}(X \ge a)$. One such point is $\lambda_1 = 1/a$, $\lambda_2 = 0$. These values satisfy the constraints since $x/a \ge \mathbf{1}\{x \ge a\}$ for all $x \ge 0$. This point has objective value $\mathbf{E}[X]/a$, which recovers Markov's inequality.
We can also show that this inequality is tight; for example, one can find a feasible solution to the primal problem that attains the bound. However, Markov's inequality is usually a pretty terrible bound for practical purposes: just knowing the mean of a random variable isn't much information.
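The tightness claim can be checked with a concrete primal distribution: placing mass $\mu/a$ at $a$ and the rest at $0$ gives mean $\mu$ and attains Markov's bound exactly. A minimal numeric check (the values $a = 5$ and $\mu = 1$ here are illustrative assumptions, not from the post):

```python
# Sketch: a two-point distribution that attains Markov's bound mu/a exactly.
# The specific values a = 5, mu = 1 are illustrative assumptions.
a, mu = 5.0, 1.0
p_at_a = mu / a                                  # mass placed at x = a
probs = {0.0: 1 - p_at_a, a: p_at_a}             # remaining mass at x = 0
mean = sum(x * p for x, p in probs.items())      # E[X]
tail = sum(p for x, p in probs.items() if x >= a)  # P(X >= a)
print(mean, tail)  # 1.0 0.2 -> E[X] = mu and P(X >= a) = mu/a
```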
To get a better bound, we can incorporate additional primal information within this optimization-based framework. In fact, we can compute any bound of the form $\mathbf{P}(X \in S) \le \alpha$ given moment information $\mathbf{E}[f_i(X)] = \mu_i$, for arbitrary functions $f_1, \dots, f_m$, by simply solving a linear program. As in a previous post, we can extend these ideas to the continuous case with a little more care.
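As a sketch of what "solving a linear program" looks like here, the snippet below solves the primal problem over a discretized support: maximize the tail probability over all distributions on a grid with a fixed mean. The grid, threshold, and mean values are assumptions for illustration; with a single mean constraint, the optimum recovers Markov's bound.

```python
# Sketch: compute sup P(X >= a) over distributions on a grid with E[X] = mu,
# by solving the primal LP with scipy. Grid and values of a, mu are
# illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

a, mu = 5.0, 1.0
xs = np.linspace(0, 10, 101)            # discretized support for X
c = -(xs >= a).astype(float)            # maximize P(X >= a) => minimize -P
A_eq = np.vstack([np.ones_like(xs),     # sum_x p_x = 1   (p is a distribution)
                  xs])                  # sum_x x * p_x = mu  (mean constraint)
b_eq = [1.0, mu]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(-res.fun)  # ~0.2 = mu/a, matching Markov's bound
```

Adding more rows to `A_eq` (e.g., a second-moment constraint $\sum_x x^2 p_x = \mathbf{E}[X^2]$) tightens the bound, which is exactly the "additional primal information" described above.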
While a computational approach is not always appropriate, in many situations we can do better than the bounds given by the standard set of inequalities. Furthermore, computational bounds can scale to higher dimensions and more easily incorporate non-i.i.d. information.
Thank you to Kshitij Kulkarni for reviewing a draft of this post!