While cache-friendly operations have seen remarkable gains in performance efficiency over the past two decades, cache-unfriendly operations remain a persistent problem. During this same period, big data has become essential to modern industry. As big data applications increasingly demand large datasets with unpredictable access patterns, it grows ever more evident that modern computing needs a competent solution to this issue. The Emu system architecture offers a potential solution to the problem of performance efficiency in cache-unfriendly operations through the novel concept of migratory threads. Rather than relying on conventional memory buses to retrieve data, the system sends the executing thread to the location of the data. By doing so, the Emu system avoids the memory access latency incurred by most other architectures, giving it an advantage on cache-unfriendly operations. By porting pre-existing kernel code to this system, we can analyze its performance and measure its speedup over traditional architectures. Using this information, we can ultimately decide whether the Emu system architecture is a worthy investment for high-performance computing and the world of big data.
University of Illinois at Urbana-Champaign
Research Advisor: Dr. Volodymyr Kindratenko
Department:
Year of Publication: