UPC Tutorials
  • Unified Parallel C Tutorial at PGAS09
    • Date: October 5, 2009
    • Presenters: Tarek El-Ghazawi, The George Washington University

    • Tutorial Material
      • Download the tutorial in a PDF format

  • High Performance Parallel Programming with Unified Parallel C at SC05
    • Date: November 2005
    • Presenters:  Tarek El-Ghazawi, The George Washington University; Phil Merkey, Steve Seidel, Michigan Technological University
    • Abstract:
      Parallel programming paradigms have been designed around three models: message passing, data parallel, and shared memory. Shared memory can simplify programming, as it provides a memory view similar to that of uniprocessors. Practical experience has shown that when the programmer gets closer to the underlying hardware, higher-performance execution can be achieved. Thus, designing parallel programming languages around a distributed shared-memory model has the promise of ease of programming as well as efficiency, since programmers can exploit features such as memory locality in distributed-memory systems. Furthermore, the use of an abstract distributed shared-memory model can lead to program portability and allow efficient compiler implementations on other parallel architectures.
      This tutorial discusses the distributed shared-memory programming paradigm with emphasis on Unified Parallel C (UPC). The tutorial introduces users familiar with C programming, including those who have no experience with parallel programming languages, to the basic semantics of the UPC language through many UPC programs, examples, and experimental results.

    • Tutorial Material
      • Download the tutorial in a PDF format
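As a taste of the UPC semantics this SC05 tutorial covers, here is a minimal sketch (illustrative only, not taken from the tutorial material; it assumes a UPC compiler such as Berkeley UPC or GNU UPC, not plain C): each thread writes its own element of a shared array, and thread 0 then reads every element directly, as in a shared-memory program.

```c
/* hello_upc.c — illustrative UPC sketch; requires a UPC compiler. */
#include <upc.h>
#include <stdio.h>

shared int data[THREADS];   /* one element per thread, cyclic distribution */

int main(void) {
    data[MYTHREAD] = MYTHREAD * MYTHREAD;  /* each thread writes its own element */
    upc_barrier;                           /* wait until all writes are done */
    if (MYTHREAD == 0) {                   /* thread 0 reads all elements directly */
        for (int i = 0; i < THREADS; i++)
            printf("data[%d] = %d\n", i, data[i]);
    }
    return 0;
}
```

Built and launched with, e.g., `upcc hello_upc.c && upcrun -n 4 a.out` under Berkeley UPC, each of the four threads contributes one element; thread 0's read of `data[i]` may be a remote access, which is exactly the convenience (and the cost) the abstract describes.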

  • Programming in the Partitioned Global Address Space Model at SC2003
    • Date: November 2003
    • Presenters:  William Carlson, IDA Center for Computing Sciences; Tarek El-Ghazawi, The George Washington University;
      Bob Numrich, U.Minnesota; Kathy Yelick, University of California at Berkeley
    • Abstract:
      The partitioned global address space programming model, also known as the distributed shared address space model, has the potential to achieve a balance between ease of programming and performance. As in the shared-memory model, one thread may directly read and write memory allocated to another. At the same time, the model gives programmers control over features that are essential for performance, such as locality. The model is receiving rising attention, and there are now several compilers for languages based on this model. This tutorial presents the concepts associated with this model, including execution, synchronization, workload distribution, and memory consistency models. Three parallel programming language instances are introduced: Unified Parallel C, or UPC; Co-Array Fortran; and Titanium, a Java-based language. It will be shown through experimental studies that these paradigms can deliver performance comparable with message passing while maintaining the ease of programming of the shared-memory model.

    • Tutorial Material
      • Download the tutorial in a PDF format
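The workload-distribution and synchronization concepts listed in this SC2003 abstract map onto concrete UPC constructs; a hedged sketch (illustrative only, not from the tutorial material, and requiring a UPC compiler):

```c
/* vecadd.c — illustrative UPC sketch; requires a UPC compiler, not plain C. */
#include <upc.h>

#define N (100 * THREADS)           /* THREADS may appear once as a factor */
shared double a[N], b[N], c[N];     /* default cyclic distribution */

int main(void) {
    /* Workload distribution: the fourth clause of upc_forall is the
       affinity expression — iteration i executes on the thread that
       owns a[i], so each thread works on its own share of the data. */
    upc_forall (int i = 0; i < N; i++; &a[i]) {
        b[i] = i;
        c[i] = 2 * i;
        a[i] = b[i] + c[i];
    }
    /* Synchronization: no thread proceeds past the barrier until all
       threads have finished their share of the loop. */
    upc_barrier;
    return 0;
}
```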

  • Programming With the Distributed Shared-Memory Model at SC2001
    • Date: November 2001
    • Presenters:  William Carlson, IDA Center for Computing Sciences; Tarek El-Ghazawi, The George Washington University;
      Bob Numrich, U.Minnesota; Kathy Yelick, University of California at Berkeley
    • Abstract:
      The distributed shared-memory programming paradigm has been receiving rising attention. Recent developments have resulted in viable distributed shared-memory languages that are gaining vendor support, and several early compilers have been developed. This programming model has the potential of achieving a balance between ease of programming and performance. As in the shared-memory model, programmers need not explicitly specify data accesses. Meanwhile, programmers can exploit data locality using a model that enables the placement of data close to the threads that process them, reducing remote memory accesses.
      In this tutorial, we present the fundamental concepts associated with this programming model. These include execution models, synchronization, workload distribution, and memory consistency. We then introduce the syntax and semantics of three parallel programming language instances with growing interest. These are Unified Parallel C, or UPC, a parallel extension of ANSI C developed by a consortium of academia, industry, and government; Co-Array Fortran, developed at Cray; and Titanium, a Java-based language from UC Berkeley. It will be shown through experimental case studies that optimized distributed shared-memory programs can be competitive with message passing codes, without significant departure from the ease of programming of the shared-memory model.

    • Tutorial Material
      • Download the tutorial in a PDF format
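The data placement this SC2001 abstract emphasizes — putting data close to the threads that process them — is expressed in UPC through layout qualifiers; a hedged sketch (illustrative only, not from the tutorial material, and requiring a UPC compiler):

```c
/* blocked.c — illustrative UPC sketch of data placement; UPC compiler required. */
#include <upc.h>

#define BLK 100
/* Layout qualifier [BLK]: thread t owns the contiguous block
   x[t*BLK] .. x[t*BLK + BLK - 1], stored in t's local partition. */
shared [BLK] double x[BLK * THREADS];

int main(void) {
    /* Each thread touches only the block placed in its own partition,
       so every access below is local — no remote memory traffic. */
    for (int i = 0; i < BLK; i++)
        x[MYTHREAD * BLK + i] = 0.5 * i;
    upc_barrier;   /* make all blocks visible before any cross-thread reads */
    return 0;
}
```

The choice of `BLK` here is arbitrary for illustration; matching the block size to each thread's working set is precisely the locality control that, per the abstract, reduces remote memory accesses.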
