Dynamic scheduling for flexible job shop with new job insertions by deep reinforcement learning
Published in | Applied soft computing Vol. 91; p. 106208 |
---|---|
Main Author | Luo, Shu |
Format | Journal Article |
Language | English |
Published | Elsevier B.V, 01.06.2020 |
Abstract | In modern manufacturing industry, dynamic scheduling methods are urgently needed as uncertainty and complexity in production processes increase sharply. To this end, this paper addresses the dynamic flexible job shop scheduling problem (DFJSP) under new job insertions, aiming to minimize total tardiness. Without loss of generality, the DFJSP can be modeled as a Markov decision process (MDP) in which an intelligent agent successively determines which operation to process next and which machine to assign it to according to the production status at the current decision point, making the problem particularly amenable to reinforcement learning (RL) methods. To cope with continuous production states and learn the most suitable action (i.e., dispatching rule) at each rescheduling point, a deep Q-network (DQN) is developed. Six composite dispatching rules are proposed to simultaneously select an operation and assign it to a feasible machine every time an operation is completed or a new job arrives. Seven generic state features are extracted to represent the production status at a rescheduling point. By taking the continuous state features as input to the DQN, the state–action value (Q-value) of each dispatching rule can be obtained. The proposed DQN is trained using deep Q-learning (DQL) enhanced by two improvements, namely double DQN and soft target weight update. Moreover, a “softmax” action selection policy is utilized in the real implementation of the trained DQN so as to promote the rules with higher Q-values while maintaining the policy entropy. Numerical experiments are conducted on a large number of instances with different production configurations. The results confirm both the superiority and generality of the DQN compared to each composite rule, other well-known dispatching rules, and the standard Q-learning-based agent.
• A deep Q-network (DQN) is proposed to select the appropriate dispatching rules.
• Seven generic state features are extracted to represent the production status.
• Six composite dispatching rules are designed to minimize the total tardiness.
• The DQN is trained by deep reinforcement learning combined with two improvements.
• Numerical experiments have verified the effectiveness and generality of the DQN. |
---|---|
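The “softmax” action-selection policy mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper’s implementation: the temperature parameter and the rule count are assumptions.

```python
import math
import random

def softmax_policy(q_values, temperature=1.0):
    """Pick a dispatching-rule index with probability proportional to
    exp(Q / temperature): rules with higher Q-values are promoted while
    every rule keeps a nonzero chance, preserving policy entropy.
    (temperature is a hypothetical tuning knob, not a value from the paper.)"""
    # Subtract the max Q-value before exponentiating for numerical stability.
    m = max(q_values)
    exps = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = random.choices(range(len(q_values)), weights=probs, k=1)[0]
    return idx, probs
```

Lowering the temperature makes the policy greedier; raising it flattens the distribution toward uniform random rule selection.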
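The two training improvements named in the abstract, double DQN and soft target weight update, can be sketched as below. The tau and gamma defaults are illustrative assumptions, and the networks are represented as plain weight lists rather than the paper’s actual architecture.

```python
def soft_update(online_weights, target_weights, tau=0.01):
    """Soft target weight update: move each target-network weight a small
    step tau toward the corresponding online-network weight, instead of
    copying all weights at once."""
    return [tau * w + (1.0 - tau) * t
            for w, t in zip(online_weights, target_weights)]

def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.95,
                      done=False):
    """Double DQN target: the online network *selects* the greedy next
    action, the target network *evaluates* it, which reduces the Q-value
    overestimation of vanilla Q-learning."""
    if done:
        return reward
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[a_star]
```

Decoupling action selection from action evaluation is what distinguishes double DQN from the single-network target `reward + gamma * max(next_q_target)`.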
ArticleNumber | 106208 |
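A composite dispatching rule, as described in the abstract, jointly selects an operation and assigns it to a feasible machine at each rescheduling point. The sketch below is a hypothetical example of such a composition (earliest due date to pick the operation, earliest completion time to pick the machine); it is not one of the paper’s six rules, and the data layout is assumed.

```python
def composite_rule_edd_ect(waiting_ops, machine_release, now):
    """Hypothetical composite rule: choose the waiting operation whose job
    has the earliest due date, then assign it to the feasible machine that
    would finish it soonest.  Each operation is a tuple
    (job_due_date, {machine_id: processing_time}); machine_release maps
    machine_id to the time the machine next becomes free."""
    due, proc_times = min(waiting_ops, key=lambda op: op[0])
    # Completion time on machine m = max(release time, now) + processing time.
    best_machine = min(
        proc_times,
        key=lambda m: max(machine_release[m], now) + proc_times[m])
    return due, best_machine
```

In a DQN-driven scheduler, each such composite rule would be one discrete action, and the agent would pick among the rules by their Q-values at every rescheduling point.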
Author | Luo, Shu |
Author details | Shu Luo (ORCID 0000-0002-8251-865X), luos17@mails.tsinghua.edu.cn, National Engineering Research Center for Computer Integrated Manufacturing Systems, Department of Automation, Tsinghua University, Beijing 100084, China |
ContentType | Journal Article |
Copyright | 2020 Elsevier B.V. |
DOI | 10.1016/j.asoc.2020.106208 |
DatabaseName | CrossRef |
DatabaseTitle | CrossRef |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Computer Science |
EISSN | 1872-9681 |
ISSN | 1568-4946 |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Deep Q network New job insertion Deep reinforcement learning Dispatching rules Flexible job shop scheduling |
Language | English |
LinkModel | DirectLink |
ORCID | 0000-0002-8251-865X |
PublicationCentury | 2000 |
PublicationDate | June 2020 |
PublicationDateYYYYMMDD | 2020-06-01 |
PublicationDecade | 2020 |
PublicationTitle | Applied soft computing |
PublicationYear | 2020 |
Publisher | Elsevier B.V |
Publisher_xml | – name: Elsevier B.V |
– volume: 33 start-page: 169 issue: 2–3 year: 2000 ident: 10.1016/j.asoc.2020.106208_b10 article-title: Dynamic job-shop scheduling using reinforcement learning agents publication-title: Robot. Auton. Syst. doi: 10.1016/S0921-8890(00)00087-7 – volume: 110 start-page: 75 year: 2017 ident: 10.1016/j.asoc.2020.106208_b15 article-title: A reinforcement learning approach to parameter estimation in dynamic job shop scheduling publication-title: Comput. Ind. Eng. doi: 10.1016/j.cie.2017.05.026 – volume: 12 start-page: 417 issue: 4 year: 2009 ident: 10.1016/j.asoc.2020.106208_b3 article-title: A survey of dynamic scheduling in manufacturing systems publication-title: J. Sched. doi: 10.1007/s10951-008-0090-8 – year: 2018 ident: 10.1016/j.asoc.2020.106208_b8 – volume: 24 start-page: 763 issue: 4 year: 2013 ident: 10.1016/j.asoc.2020.106208_b31 article-title: A GEP-based reactive scheduling policies constructing approach for dynamic flexible job shop scheduling problem with job release dates publication-title: J. Intell. Manuf. doi: 10.1007/s10845-012-0626-9 – volume: 42 start-page: 7652 issue: 21 year: 2015 ident: 10.1016/j.asoc.2020.106208_b1 article-title: A two-stage artificial bee colony algorithm scheduling flexible job-shop scheduling problem with new job insertion publication-title: Expert Syst. Appl. doi: 10.1016/j.eswa.2015.06.004 – start-page: 2613 year: 2010 ident: 10.1016/j.asoc.2020.106208_b46 article-title: Double q-learning – ident: 10.1016/j.asoc.2020.106208_b9 – year: 2019 ident: 10.1016/j.asoc.2020.106208_b21 – volume: 104 start-page: 156 year: 2017 ident: 10.1016/j.asoc.2020.106208_b49 article-title: An effective multi-objective discrete virus optimization algorithm for flexible job-shop scheduling problem with controllable processing times publication-title: Comput. Ind. Eng. 
doi: 10.1016/j.cie.2016.12.020 – start-page: 6348 year: 2017 ident: 10.1016/j.asoc.2020.106208_b23 article-title: Learning combinatorial optimization algorithms over graphs – volume: 48 start-page: 2449 issue: 8 year: 2010 ident: 10.1016/j.asoc.2020.106208_b30 article-title: Dynamic job shop scheduling using variable neighbourhood search publication-title: Int. J. Prod. Res. doi: 10.1080/00207540802662896 – volume: 132 start-page: 279 issue: 2 year: 2011 ident: 10.1016/j.asoc.2020.106208_b40 article-title: Robust and stable flexible job shop scheduling with random machine breakdowns using a hybrid genetic algorithm publication-title: Int. J. Prod. Econ. doi: 10.1016/j.ijpe.2011.04.020 – volume: 1 start-page: 117 issue: 2 year: 1976 ident: 10.1016/j.asoc.2020.106208_b2 article-title: The complexity of flowshop and jobshop scheduling publication-title: Math. Oper. Res. doi: 10.1287/moor.1.2.117 – year: 2015 ident: 10.1016/j.asoc.2020.106208_b48 – volume: 518 start-page: 529 issue: 7540 year: 2015 ident: 10.1016/j.asoc.2020.106208_b45 article-title: Human-level control through deep reinforcement learning publication-title: Nature doi: 10.1038/nature14236 – volume: 55 start-page: 3308 issue: 11 year: 2017 ident: 10.1016/j.asoc.2020.106208_b34 article-title: Solving comprehensive dynamic job shop scheduling problem by using a GRASP-based approach publication-title: Int. J. Prod. Res. doi: 10.1080/00207543.2017.1306134 – volume: 50 start-page: 41 issue: 1 year: 2012 ident: 10.1016/j.asoc.2020.106208_b13 article-title: Distributed policy search reinforcement learning for job-shop scheduling tasks publication-title: Int. J. Prod. Res. doi: 10.1080/00207543.2011.571443 – volume: 16 start-page: 902 issue: 12 year: 2000 ident: 10.1016/j.asoc.2020.106208_b28 article-title: Machine selection rules in a dynamic job shop publication-title: Int. J. Adv. Manuf. Technol. 
doi: 10.1007/s001700070008 – volume: 14 start-page: 365 issue: 3 year: 1998 ident: 10.1016/j.asoc.2020.106208_b38 article-title: Predictable scheduling of a job shop subject to breakdowns publication-title: IEEE Trans. Robot. Autom. doi: 10.1109/70.678447 – year: 1960 ident: 10.1016/j.asoc.2020.106208_b7 – ident: 10.1016/j.asoc.2020.106208_b47 doi: 10.1609/aaai.v30i1.10295 – volume: 257 start-page: 13 issue: 1 year: 2017 ident: 10.1016/j.asoc.2020.106208_b32 article-title: A simulation-based study of dispatching rules in a dynamic job shop scheduling problem with batch release and extended technical precedence constraints publication-title: European J. Oper. Res. doi: 10.1016/j.ejor.2016.07.030 – volume: 298 start-page: 198 year: 2015 ident: 10.1016/j.asoc.2020.106208_b35 article-title: Mathematical modeling and multi-objective evolutionary algorithms applied to dynamic flexible job shop scheduling problems publication-title: Inform. Sci. doi: 10.1016/j.ins.2014.11.036 – volume: 59 start-page: 311 issue: 1–4 year: 2012 ident: 10.1016/j.asoc.2020.106208_b4 article-title: Multi-agent-based proactive–reactive scheduling for a job shop publication-title: Int. J. Adv. Manuf. Technol. doi: 10.1007/s00170-011-3482-4 – volume: 96 start-page: 31 year: 2016 ident: 10.1016/j.asoc.2020.106208_b5 article-title: Hybrid genetic algorithms for minimizing makespan in dynamic job shop scheduling problem publication-title: Comput. Ind. Eng. doi: 10.1016/j.cie.2016.03.011 – volume: 50 start-page: 15890 issue: 1 year: 2017 ident: 10.1016/j.asoc.2020.106208_b14 article-title: A distributed approach solving partially flexible job-shop scheduling problem with a Q-learning effect publication-title: IFAC-PapersOnLine doi: 10.1016/j.ifacol.2017.08.2354 – volume: 16 start-page: 81 year: 2018 ident: 10.1016/j.asoc.2020.106208_b18 article-title: Deep reinforcement learning: Algorithm, applications, and ultra-low-power implementation publication-title: Nano Commun. Netw. 
doi: 10.1016/j.nancom.2018.02.003 – volume: 14 start-page: 268 issue: 2 year: 2019 ident: 10.1016/j.asoc.2020.106208_b42 article-title: Robust and stable flexible job shop scheduling with random machine breakdowns: multi-objectives genetic algorithm approach publication-title: Int. J. Math. Oper. Res. doi: 10.1504/IJMOR.2019.097759 – volume: 9 start-page: 95 issue: 1 year: 1977 ident: 10.1016/j.asoc.2020.106208_b33 article-title: Centralized scheduling and priority implementation heuristics for a dynamic job shop model publication-title: AIIE Trans. doi: 10.1080/05695557708975127 – volume: 20 start-page: 553 issue: 6 year: 2004 ident: 10.1016/j.asoc.2020.106208_b11 article-title: Learning policies for single machine job dispatching publication-title: Robot. Comput.-Integr. Manuf. doi: 10.1016/j.rcim.2004.07.003 – volume: 12 start-page: 15 issue: 1 year: 1999 ident: 10.1016/j.asoc.2020.106208_b25 article-title: Predictable scheduling of a single machine subject to breakdowns publication-title: Int. J. Comput. Integr. Manuf. doi: 10.1080/095119299130443 – ident: 10.1016/j.asoc.2020.106208_b50 – start-page: 396 year: 2010 ident: 10.1016/j.asoc.2020.106208_b12 article-title: Rule driven multi objective dynamic scheduling by data envelopment analysis and reinforcement learning – volume: 51 start-page: 1275 issue: 11 year: 2018 ident: 10.1016/j.asoc.2020.106208_b37 article-title: Towards energy efficient scheduling and rescheduling for dynamic flexible job shop problem publication-title: IFAC-PapersOnLine doi: 10.1016/j.ifacol.2018.08.357 – start-page: 679 year: 1957 ident: 10.1016/j.asoc.2020.106208_b44 article-title: A Markovian decision process publication-title: J. Math. Mech. – year: 2018 ident: 10.1016/j.asoc.2020.106208_b17 article-title: Real-time scheduling for a smart factory using a reinforcement learning approach publication-title: Comput. Ind. Eng. 
doi: 10.1016/j.cie.2018.03.039 – volume: 72 start-page: 1264 issue: 1 year: 2018 ident: 10.1016/j.asoc.2020.106208_b22 article-title: Optimization of global production scheduling with deep reinforcement learning publication-title: Proc. CIRP doi: 10.1016/j.procir.2018.03.212 |
StartPage | 106208 |
SubjectTerms | Deep Q network; Deep reinforcement learning; Dispatching rules; Flexible job shop scheduling; New job insertion |
Title | Dynamic scheduling for flexible job shop with new job insertions by deep reinforcement learning |
URI | https://dx.doi.org/10.1016/j.asoc.2020.106208 |
Volume | 91 |
Author | Luo, Shu |
ISSN | 1568-4946 |
DOI | 10.1016/j.asoc.2020.106208 |