Boosting Parallel Applications Performance on Applying DIM Technique in a Multiprocessing Environment

Bibliographic Details
Published in International Journal of Reconfigurable Computing, Vol. 2011, no. 2011, pp. 278 - 290
Main Authors Rutzig, Mateus B., Beck, Antonio C. S., Madruga, Felipe, Alves, Marco A., Freitas, Henrique C., Maillard, Nicolas, Navaux, Philippe O. A., Carro, Luigi
Format Journal Article
Language English
Published Cairo, Egypt: Hindawi Publishing Corporation, 01.01.2011

Summary: Limits of instruction-level parallelism and higher transistor density sustain the increasing need for multiprocessor systems: they are rapidly taking over both general-purpose and embedded processor domains. Current multiprocessing systems are composed either of many homogeneous and simple cores or of complex superscalar, simultaneous multithreaded processing elements. As parallel applications become increasingly present in the embedded and general-purpose domains, and multiprocessing systems must handle a wide range of application classes, there is no consensus on which hardware solutions best exploit instruction-level parallelism (ILP) and thread-level parallelism (TLP) together. Therefore, in this work, we have expanded the DIM (dynamic instruction merging) technique to be used in a multiprocessing scenario, demonstrating the need for adaptable ILP exploitation even in TLP-oriented architectures. We have successfully coupled a dynamic reconfigurable system to a SPARC-based multiprocessor and obtained performance gains of up to 40%, even for applications that exhibit a high degree of thread-level parallelism.
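The summary argues that ILP remains worth exploiting inside each core even when a workload is already thread-parallel. As a rough illustration only (not code from the paper), the sketch below splits a kernel across POSIX threads; the mutually independent statements inside each loop iteration represent the per-core ILP that a dynamic mechanism such as DIM would target. All names, sizes, and the kernel itself are hypothetical.

```c
/* Hypothetical illustration (not from the paper): a thread-parallel kernel
 * whose inner loop still exposes instruction-level parallelism that a
 * per-core accelerator could exploit, while pthreads provide the TLP. */
#include <pthread.h>
#include <stdio.h>

#define N       4096
#define THREADS 4

static float a[N], b[N], c[N], d[N];

struct range { int begin; int end; };

static void *worker(void *arg)
{
    struct range *r = (struct range *)arg;
    for (int i = r->begin; i < r->end; i++) {
        /* x, y, and z are independent of each other, so an ILP-oriented
         * mechanism inside a single core can issue them together. */
        float x = a[i] * 2.0f;
        float y = b[i] + 3.0f;
        float z = a[i] - b[i];
        c[i] = x + y;
        d[i] = z * y;
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[THREADS];
    struct range r[THREADS];
    int chunk = N / THREADS;

    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = (float)(N - i); }

    /* Thread-level parallelism: each thread processes a disjoint chunk. */
    for (int t = 0; t < THREADS; t++) {
        r[t].begin = t * chunk;
        r[t].end   = (t == THREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, worker, &r[t]);
    }
    for (int t = 0; t < THREADS; t++)
        pthread_join(tid[t], NULL);

    printf("c[10]=%f d[10]=%f\n", c[10], d[10]);
    return 0;
}
```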
ISSN: 1687-7195
1687-7209
DOI: 10.1155/2011/546962