Focus and Shoot: Exploring Auto-Focus in RFID Tag Identification Towards a Specified Area
Published in: IEEE Transactions on Computers, Vol. 65, No. 3, pp. 888-901
Main Authors: , , ,
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2016
Summary: With the rapid proliferation of RFID technologies, RFID has been introduced into applications such as inventory and sampling inspection. Conventionally, the reader in an RFID system identifies all the tags in its interrogation region at maximum power. However, some applications only need to identify the tags in a specified area that is smaller than the reader's default interrogation region, e.g., identifying the tags inside a box while ignoring the tags outside it. In this paper, we present two solutions to identify the tags in the specified area. Their principle can be compared to the picture-taking process of an auto-focus camera, which first focuses on the target automatically and then takes the picture. Similarly, our solutions first focus on the specified area and then shoot the tags. The design of both solutions is based on an extensive empirical study of RFID tags. Experiment results in realistic settings show that our solutions reduce the execution time by 44 percent compared to the baseline solution, which identifies the tags at maximum power. Furthermore, we improve the proposed solutions to make them work well in more complex environments.
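The "focus, then shoot" principle described in the abstract can be read as a power-control loop: lower the reader's transmission power until the interrogation region just covers the specified area, then run an ordinary inventory round. The Python sketch below illustrates one such loop; the reader API (set_power, inventory) and the use of reference tags marking the area boundary are assumptions made for illustration, not the paper's actual protocol.

def focus_power(reader, boundary_tags, p_min=10, p_max=30):
    """Find the lowest power (dBm) at which every boundary reference tag
    is still read; assumes read range grows monotonically with power."""
    lo, hi = p_min, p_max
    while lo < hi:
        mid = (lo + hi) // 2
        reader.set_power(mid)            # hypothetical reader API
        seen = set(reader.inventory())   # EPCs read at this power
        if boundary_tags <= seen:        # specified area fully covered
            hi = mid                     # coverage holds: try lower power
        else:
            lo = mid + 1                 # a boundary tag missed: raise power
    return lo

def focus_and_shoot(reader, boundary_tags):
    """'Focus' on the specified area, then 'shoot' (identify) its tags."""
    reader.set_power(focus_power(reader, boundary_tags))
    return set(reader.inventory()) - boundary_tags   # drop reference tags

Confining the final inventory to a smaller interrogation region means fewer tags contend for the channel, which is the intuition behind the reported reduction in execution time relative to the maximum-power baseline.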
ISSN: 0018-9340
EISSN: 1557-9956
DOI: 10.1109/TC.2015.2435749