Abstract
This work proposes a combined strategy of proactive and reactive scheduling in which the proactive part is formulated as a stochastic dynamic program. The inherent computational complexity of stochastic dynamic optimization is addressed by an approximate dynamic programming approach, in which the optimal value function is approximated recursively using Monte Carlo simulation and newly observed data. First, a Markov decision process based on a mixed-integer linear programming formulation of the state-task network is constructed. Instead of price optimization, we reformulate the problem to minimize makespan under uncertainty, including machine breakdowns and demand. The decision epochs depend on the operational availability of each piece of equipment. The near-optimal value function constructed by approximate dynamic programming can deduce a proactive optimal policy for stochastic scheduling, and it remains applicable when an unpredicted event occurs, because such an event can be regarded as an outcome of a state transition. Moreover, an optimal policy can be produced in real time. This value function can also help to analyze feasibility under the current situation when unpredicted events occur.
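To make the recursive value-function approximation concrete, the sketch below illustrates the general pattern of forward Monte Carlo simulation followed by a backward smoothing update of the value estimates. The toy model (two machines with different speeds and breakdown probabilities, state given by the number of tasks remaining) and all names in it are hypothetical placeholders, not the paper's state-task-network formulation or the authors' implementation.

```python
# Minimal sketch of approximate dynamic programming with Monte Carlo simulation.
# Hypothetical toy model: pick one of two machines per task; the learned V(s)
# approximates the expected remaining makespan with s tasks left.
import random
from collections import defaultdict

N_TASKS = 5
MACHINES = {            # (processing time, breakdown probability, repair time)
    "fast_unreliable": (2.0, 0.30, 6.0),
    "slow_reliable":   (4.0, 0.05, 6.0),
}

def sample_cost(machine):
    """Sample the time to finish one task on a machine, including breakdowns."""
    duration, p_break, repair = MACHINES[machine]
    return duration + (repair if random.random() < p_break else 0.0)

def train_value_function(n_iterations=20000, alpha=0.05):
    """Approximate V(s) = expected remaining makespan with s tasks left."""
    V = defaultdict(float)
    for _ in range(n_iterations):
        state, trajectory = N_TASKS, []
        while state > 0:
            # Greedy decision against the current approximation
            # (one Monte Carlo sample per action, so decisions are noisy).
            costs = {m: sample_cost(m) + V[state - 1] for m in MACHINES}
            action = min(costs, key=costs.get)
            trajectory.append((state, costs[action] - V[state - 1]))
            state -= 1
        # Backward pass: smooth the newly observed costs into the approximation.
        v_hat = 0.0
        for s, cost in reversed(trajectory):
            v_hat += cost
            V[s] = (1 - alpha) * V[s] + alpha * v_hat
    return V

if __name__ == "__main__":
    V = train_value_function()
    print({s: round(v, 2) for s, v in sorted(V.items())})
```

Once such an approximation is available, a greedy lookup against it can be evaluated at any decision epoch, which is what allows a policy of this kind to be queried in real time after an unpredicted event changes the state.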