DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Hyun-Jung | ko |
dc.contributor.author | Lee, Jun-Ho | ko |
dc.date.accessioned | 2022-04-22T01:00:58Z | - |
dc.date.available | 2022-04-22T01:00:58Z | - |
dc.date.created | 2022-01-04 | - |
dc.date.issued | 2022-04 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, v.19, no.2, pp.1120 - 1136 | - |
dc.identifier.issn | 1545-5955 | - |
dc.identifier.uri | http://hdl.handle.net/10203/295838 | - |
dc.description.abstract | A dual-gripper robotic cell consists of multiple processing machines and one material handling robot, which can perform an unloading or a loading task only one at a time but can hold two parts simultaneously. We address a scheduling problem for such a robotic cell that determines the robot task sequence when two part types are processed on different sets of machines and all machines have variable processing times within given intervals. The objective is to minimize the makespan. This study proposes, for the first time, a learning-based method, namely a reinforcement learning (RL) approach, for the dual-gripper robotic cell scheduling problem. The problem is modeled with a Petri net, a graphical and mathematical modeling tool, which serves as the environment in RL. The states, actions, and rewards are defined using flow shop scheduling properties, features from the Petri net, and knowledge from previous studies on scheduling robotized tools. The RL approach is then compared with the first-in-first-out (FIFO) rule, which is generally used in practice; a swap sequence, which is widely used for cyclic scheduling of dual-gripper robotic cells; and a lower bound. Extensive experiments show that the proposed method outperforms both FIFO and the swap sequence, and that the gap between the makespan of the proposed method and the lower bound is small. | - |
dc.language | English | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Scheduling of Dual-Gripper Robotic Cells With Reinforcement Learning | - |
dc.type | Article | - |
dc.identifier.wosid | 000732917700001 | - |
dc.identifier.scopusid | 2-s2.0-85099733436 | - |
dc.type.rims | ART | - |
dc.citation.volume | 19 | - |
dc.citation.issue | 2 | - |
dc.citation.beginningpage | 1120 | - |
dc.citation.endingpage | 1136 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING | - |
dc.identifier.doi | 10.1109/TASE.2020.3047924 | - |
dc.contributor.localauthor | Kim, Hyun-Jung | - |
dc.contributor.nonIdAuthor | Lee, Jun-Ho | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article | - |
dc.subject.keywordAuthor | Robots | - |
dc.subject.keywordAuthor | Job shop scheduling | - |
dc.subject.keywordAuthor | Tools | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Manufacturing | - |
dc.subject.keywordAuthor | Service robots | - |
dc.subject.keywordAuthor | Mathematical model | - |
dc.subject.keywordAuthor | Dual-gripper robotic cell | - |
dc.subject.keywordAuthor | reinforcement learning (RL) | - |
dc.subject.keywordAuthor | scheduling | - |
dc.subject.keywordAuthor | time variations | - |
dc.subject.keywordPlus | ARMED CLUSTER TOOLS | - |
dc.subject.keywordPlus | TIME ANALYSIS | - |
dc.subject.keywordPlus | BOUND ALGORITHM | - |
dc.subject.keywordPlus | COMPLETION-TIME | - |
dc.subject.keywordPlus | HOIST | - |
dc.subject.keywordPlus | PARTS | - |
dc.subject.keywordPlus | OPTIMIZATION | - |
dc.subject.keywordPlus | CONSTRAINTS | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.