We consider a class of Markov Decision Processes frequently employed to model queueing and inventory control problems. For these problems, we investigate how changes in different system input parameters (transition rates, costs, discount rates, etc.) affect the optimal cost and the optimal policy when the state space of the problem is multidimensional. To address a large class of problems, we introduce two generic dynamic programming operators that model different types of controlled events. For these operators, we derive sufficient conditions under which monotonicity and supermodularity of the value function are propagated. These properties make it possible to predict how changes in system input parameters affect the optimal cost and the optimal policy. Finally, we explore the case in which several parameters are changed simultaneously.
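To illustrate the kind of analysis described above, the following is a minimal sketch (not the paper's actual model): a two-queue admission-control MDP on a truncated two-dimensional state space, with a controlled arrival operator (admit or pay a rejection penalty) and an uncontrolled departure operator. All rates, costs, and the rejection penalty are illustrative choices. Value iteration is run to near-convergence, and the monotonicity and supermodularity properties of the value function are then checked numerically.

```python
# Hypothetical two-queue admission-control example (all parameters illustrative).
N = 10                                        # per-queue buffer size (state-space truncation)
lam1, lam2, mu1, mu2 = 0.3, 0.2, 0.3, 0.2     # uniformized event probabilities (sum to 1)
alpha = 0.95                                  # discount factor
c1, c2 = 1.0, 2.0                             # per-customer holding costs
r = 5.0                                       # rejection penalty

states = [(i, j) for i in range(N + 1) for j in range(N + 1)]
V = {x: 0.0 for x in states}                  # initial value function

def holding(x):
    """Linear holding cost, nondecreasing in each coordinate."""
    return c1 * x[0] + c2 * x[1]

def T_arrival(V, x, i):
    """Controlled-event operator: admit the arrival to queue i, or reject it."""
    y = (x[0] + 1, x[1]) if i == 0 else (x[0], x[1] + 1)
    admit = V[y] if y in V else float('inf')  # full buffer forces rejection
    reject = r + V[x]
    return min(admit, reject)

def T_departure(V, x, i):
    """Uncontrolled-event operator: a service completion at queue i (no-op if empty)."""
    y = (max(x[0] - 1, 0), x[1]) if i == 0 else (x[0], max(x[1] - 1, 0))
    return V[y]

# Value iteration with the uniformized discounted Bellman operator.
for _ in range(500):
    V = {x: holding(x) + alpha * (lam1 * T_arrival(V, x, 0)
                                  + lam2 * T_arrival(V, x, 1)
                                  + mu1 * T_departure(V, x, 0)
                                  + mu2 * T_departure(V, x, 1))
         for x in states}

# Numerical check of the structural properties on the interior of the state space:
# monotonicity: V is nondecreasing in each coordinate;
# supermodularity: V(x + e1 + e2) + V(x) >= V(x + e1) + V(x + e2).
mono = all(V[(i + 1, j)] >= V[(i, j)] and V[(i, j + 1)] >= V[(i, j)]
           for i in range(N) for j in range(N))
supermod = all(V[(i + 1, j + 1)] + V[(i, j)] >= V[(i + 1, j)] + V[(i, j + 1)] - 1e-6
               for i in range(N) for j in range(N))
print(mono, supermod)
```

In this toy model the two queues are uncoupled, so the value function is additively separable and supermodularity holds with equality; coupled dynamics (e.g. a shared server) would make the inequality strict in general, which is where sufficient conditions on the operators become relevant.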