GSoC 2015 Project Ideas - STEllAR-GROUP/hpx GitHub Wiki
Table of Contents:
- Create an HPX backend for the ISPC Compiler
- Work on parallel algorithms for HPX
- Optimize the BlueGene/Q port of HPX
- Port HPX to iOS
- Implement a Map/Reduce Framework
- Implement a Plugin Mechanism for Thread schedulers
- Create parcelport based on websockets
- Script language bindings
- All to All Communications
- Distributed Component Placement
- A C++ Runtime Replacement
- A Free Resumable functions implementation
- Add a mechanism to integrate C++AMP with HPXCL
- Coroutine like Interface
- Bug Hunter
- Port the Graph500 Benchmark to HPX
- Port Mantevo Mini App(s) to HPX
- Create an HPX communicator for Trilinos project
- Implement an HPX debugger
- SIMD Wrapper for ARM NEON, Intel AVX512 & KNC
- Application/counter csv files
- Integration of the cuBLAS library within hpxcl
- Project Template
Welcome to the HPX home page for Google Summer of Code (GSoC). This page provides information about student projects, proposal submission templates, advice on writing good proposals, and links to information on getting started with HPX development. This page is also used to collect project ideas for the Google Summer of Code 2015. The STE||AR Group will apply as an organization, and our goal is to get at least two students funded.
We are looking to fund work on a number of different kinds of proposals (for more details about concrete project ideas, see below):
- extensions to existing library features,
- new distributed data structures and algorithms, and
- multiple competing proposals for the same project.
Students must submit a proposal. A template for the proposal can be found here. Hints for writing a good proposal can be found here.
We strongly suggest that students interested in developing a proposal for HPX discuss their ideas on the mailing list in order to help refine the requirements and goals. Students who actively discuss projects on the mailing list are also ranked above those who do not.
If the descriptions of these projects seem a little vague... Well, that's intentional. We are looking for students to develop requirements for their proposals by doing initial background research on the topic, and interacting with the community on the HPX mailing list ([email protected]) to help identify expectations.
Create an HPX backend for the ISPC Compiler
- Abstract: The Intel ISPC compiler is a compiler for a variant of the C programming language with extensions for "single program, multiple data" (SPMD) programming. The language follows a programming model similar to CUDA or OpenCL but is solely targeted at SIMD-capable CPUs. It uses clang as its frontend and therefore LLVM as the backend to generate code. One important feature of the language is the ability to spawn asynchronous tasks, and those asynchronous tasks align very well with the HPX programming model. Fortunately, the API that ISPC uses to invoke new asynchronous tasks is well documented (more information here). To leverage the possibilities provided by this compiler, the purpose of this project is to provide an implementation of that API using HPX functionality; a minimal sketch of this mapping follows the project details below. Moreover, we'd be interested in utilizing HPX's capabilities for distributed applications.
- Difficulty: Easy-Medium
- Expected result: The minimal expectation for this project is a functional HPX backend for ISPC. Benchmark results need to be presented showing the impact of the proposed backend.
- Knowledge Prerequisite: C++
- Mentor: Thomas Heller (thom.heller at gmail.com) and Hartmut Kaiser (hartmut.kaiser at gmail.com)
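The heart of the backend is a small set of C entry points that ISPC-generated code calls to launch and synchronize tasks. The sketch below only illustrates how these could be forwarded to HPX; the task-function signature is simplified here, and the authoritative declarations should be taken from ispc's example tasksys.cpp.

```cpp
// Illustrative only: simplified task-function signature; the real ISPC task
// system passes additional thread/task index arguments.
#include <hpx/hpx.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>
#include <vector>

using task_func_type = void (*)(void* data, int task_index, int task_count);

struct task_group                      // one group per ISPC launch handle
{
    std::vector<hpx::future<void>> tasks;
};

extern "C" void ISPCLaunch(void** handle_ptr, void* f, void* data, int count)
{
    if (*handle_ptr == nullptr)
        *handle_ptr = new task_group;
    task_group* tg = static_cast<task_group*>(*handle_ptr);

    task_func_type func = reinterpret_cast<task_func_type>(f);
    for (int i = 0; i != count; ++i)   // every launched task becomes an HPX thread
        tg->tasks.push_back(hpx::async(func, data, i, count));
}

extern "C" void ISPCSync(void* handle)
{
    task_group* tg = static_cast<task_group*>(handle);
    hpx::wait_all(tg->tasks);          // suspend the caller until all tasks finished
    delete tg;
}
```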
Work on parallel algorithms for HPX
- Abstract: N4310 is a C++ standardization proposal which is very likely to be accepted as a TS (technical specification) in 2015. It provides an abstraction for C++ parallel algorithms well aligned with the existing STL algorithms. HPX sorely misses a parallel and distributed algorithm module, and this project should implement such a module on top of HPX. Some work in this direction has already been finished (see #1141 for our progress). While there is still some work to be done for #1141, we're especially interested in extending those parallel algorithms to work with distributed data structures such as hpx::vector. Some minimal work has been done in this direction as well (see #1338), but we are looking for interested students to continue implementing the distributed parallel algorithms. A minimal usage sketch follows the project details below.
- Difficulty: Medium-Hard
- Expected result: Implementations of various parallel and distributed algorithms for HPX
- Knowledge Prerequisite: C++, STL
- Mentor: Hartmut Kaiser (hartmut.kaiser at gmail.com) and Thomas Heller (thom.heller at gmail.com)
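To give a feel for the interface #1141 targets, here is a minimal example using an N4310-style parallel algorithm as exposed by HPX; header and policy names are given to the best of our knowledge and may differ slightly between HPX versions.

```cpp
#include <hpx/hpx_main.hpp>                   // lifts main() onto the HPX runtime
#include <hpx/include/parallel_algorithm.hpp>
#include <vector>

int main()
{
    std::vector<int> v(1000, 1);

    // run the lambda in parallel on HPX worker threads; a distributed overload
    // would accept an hpx::vector<int> instead of plain iterators
    hpx::parallel::for_each(hpx::parallel::par, v.begin(), v.end(),
        [](int& i) { i *= 2; });

    return 0;
}
```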
Optimize the BlueGene/Q port of HPX
- Abstract: The BlueGene/Q (BG/Q) is a supercomputer architecture from IBM built around an embedded PowerPC CPU (SoC). One of the key features of this architecture is an in-order CPU design with a total of 64 hardware threads providing fast context switches between threads. In addition, IBM equipped the BG/Q with an on-chip network interface which comes with an Active Message library (PAMI). Active Messages and one-sided communication are key concepts within HPX. Fast context switches and a networking layer tailored towards the needs of HPX make this system a perfect match for HPX. In order to fully utilize such a machine, fast user-level context switching as well as a parcelport based on the PAMI library need to be provided. Access to a BG/Q will be provided.
- Difficulty: Medium-Hard
- Expected result: Provide benchmark results which show the benefits of the performance optimizations
- Knowledge Prerequisite: C++, Assembly (preferably experience with the PowerPC architecture)
- Mentor: Thomas Heller (thom.heller at gmail.com) and Hartmut Kaiser (hartmut.kaiser at gmail.com)
Port HPX to iOS
- Abstract: HPX has already proven to run efficiently on ARM-based systems; this has been demonstrated with an application written for Android tablet devices. A port to handheld devices running iOS would be the next logical step! In order to be able to run HPX efficiently there, we need to adapt our build system to cross-compile for iOS and add code to interface with the iOS GUI and other system services.
- Difficulty: Easy-Medium
- Expected result: Provide a prototype HPX application running on an iPhone or iPad
- Knowledge Prerequisite: C++, Objective-C, iOS
- Mentor: Hartmut Kaiser (hartmut.kaiser at gmail.com) and Thomas Heller (thom.heller at gmail.com)
Implement a Map/Reduce Framework
- Abstract: Map/Reduce frameworks are getting more and more popular for big data processing (for example Hadoop). By utilizing the unified and standards-conforming API of the HPX runtime system, we believe we can perfectly represent the Map/Reduce programming model. Many applications would benefit from direct support in HPX. This might include adding Hypertable or similar libraries to the mix to handle the large data sets Map/Reduce is usually used with. A toy sketch of the underlying pattern follows the project details below.
- Difficulty: Medium-Hard
- Expected result: A prototypical implementation running on the order of 1,000 compute nodes
- Knowledge Prerequisite: C++
- Mentor: Hartmut Kaiser (hartmut.kaiser at gmail.com), Andreas Schäfer (andreas.schaefer at fau.de), and Dylan Stark (dstark at sandia.gov)
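The pattern itself maps naturally onto HPX futures. The toy sketch below runs on a single locality only, and its chunking and word-count example are made up for illustration; a real framework would distribute the map tasks across localities and deal with data ingest.

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/async.hpp>
#include <functional>
#include <map>
#include <string>
#include <vector>

using counts = std::map<std::string, int>;

counts map_chunk(std::vector<std::string> const& words)    // "map" phase
{
    counts c;
    for (std::string const& w : words)
        ++c[w];
    return c;
}

counts reduce(counts lhs, counts const& rhs)               // "reduce" phase
{
    for (auto const& kv : rhs)
        lhs[kv.first] += kv.second;
    return lhs;
}

int main()
{
    std::vector<std::vector<std::string>> chunks = {
        {"the", "quick", "brown", "fox"}, {"the", "lazy", "dog"}};

    // launch one asynchronous map task per chunk ...
    std::vector<hpx::future<counts>> mapped;
    for (auto const& chunk : chunks)
        mapped.push_back(hpx::async(map_chunk, std::cref(chunk)));

    // ... and fold the partial results together as they become ready
    counts result;
    for (auto& f : mapped)
        result = reduce(std::move(result), f.get());

    return 0;
}
```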
Implement a Plugin Mechanism for Thread schedulers
- Abstract: Revise the thread scheduling subsystem of HPX such that it is more pluggable and allows for finer-grained control over which scheduler to use for the execution of a particular section of user code. The proposal to the C++ standards committee for work executors (see N3562) seems to provide a good starting point for a possible interface. Some initial work has been done already; however, more work is needed to get everything working in an acceptable manner. One of the big advantages of executors is the ability to express the locality of where tasks are supposed to run. Examples of this are interactions with GUI libraries like Qt and proper NUMA memory placement.
- Difficulty: Easy-Medium
- Expected result: All existing schedulers need to be converted to the plugin system, and at least one example code needs to show the advantages of executors.
- Knowledge Prerequisite: C++
- Mentor: Hartmut Kaiser (hartmut.kaiser at gmail.com) and Patricia Grubel (pagrubel at nmsu.edu)
Create parcelport based on websockets
- Abstract: Create a new parcelport which is based on WebSockets. The WebSocket++ library seems to be a perfect starting point to avoid having to dig into the WebSocket protocol too deeply.
- Difficulty: Medium-Hard
- Expected result: A proof of concept parcelport based on websockets with benchmark results
- Knowledge Prerequisite: C++, knowing websockets is a plus
- Mentor: Hartmut Kaiser (hartmut.kaiser at gmail.com) and Thomas Heller (thom.heller at gmail.com)
Script language bindings
- Abstract: Design and implement Python bindings for HPX, exposing all or parts of the HPX functionality with a 'Pythonic' API. This should be possible as Python has a much more dynamic type system than C++. Using Boost.Python seems to be a good choice for this. A similar thing could be done for Lua; we'd suggest basing the Lua bindings on LuaBind, which is very similar to Boost.Python. A minimal Boost.Python sketch follows the project details below.
- Difficulty: Medium
- Expected result: Demonstrate functioning bindings by implementing small example scripts for different simple use cases
- Knowledge Prerequisite: C++, Python or Lua
- Mentor: Hartmut Kaiser (hartmut.kaiser at gmail.com) and Adrian Serio (aserio at cct.lsu.edu)
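As a starting point, a Boost.Python module could expose thin synchronous wrappers around HPX tasks, as in the sketch below. The module name hpx_py and the exposed function are made up for illustration, and how to start and stop the HPX runtime from inside a Python process is one of the questions the project needs to answer.

```cpp
#include <boost/python.hpp>
#include <hpx/hpx.hpp>
#include <hpx/include/async.hpp>

// run a computation on an HPX thread and block until the result is ready --
// the simplest shape of API one can hand to Python
int square(int i)
{
    hpx::future<int> f = hpx::async([i]() { return i * i; });
    return f.get();
}

BOOST_PYTHON_MODULE(hpx_py)          // python: import hpx_py; hpx_py.square(21)
{
    boost::python::def("square", &square);
}
```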
All to All Communications
- Abstract: Design and implement efficient all-to-all communication LCOs. While MPI provides mechanisms for broadcasting, scattering, and gathering over all MPI processes in a communicator, HPX currently lacks this feature. It should be possible to exploit the Active Global Address Space to mimic global all-to-all communications without the need to actually communicate with every participating locality. Different strategies should be implemented and tested. A first and very basic implementation of broadcast already exists which tries to tackle the problem described above; however, more strategies for granularity control and locality exploitation need to be investigated and implemented. We also have a first version of a gather utility implemented. A sketch of the naive fan-out baseline follows the project details below.
- Difficulty: Medium-Hard
- Expected result: Implement benchmarks and provide performance results for the implemented algorithms
- Knowledge Prerequisite: C++
- Mentor: Thomas Heller (thom.heller at gmail.com) and Andreas Schäfer (andreas.schaefer at fau.de)
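For reference, the naive baseline is a flat fan-out from the root locality, sketched below with a plain action; the project would replace this O(N) pattern with tree-structured or AGAS-aware strategies behind proper LCO interfaces. Header names are given to the best of our knowledge.

```cpp
#include <hpx/hpx_main.hpp>
#include <hpx/include/actions.hpp>
#include <hpx/include/async.hpp>
#include <hpx/include/lcos.hpp>
#include <vector>

void receive_value(int v)
{
    // consume the broadcast value on the receiving locality
}
HPX_PLAIN_ACTION(receive_value, receive_value_action);

int main()
{
    std::vector<hpx::id_type> localities = hpx::find_all_localities();

    // flat fan-out: the root sends one message per participating locality
    std::vector<hpx::future<void>> sent;
    for (hpx::id_type const& loc : localities)
        sent.push_back(hpx::async<receive_value_action>(loc, 42));

    hpx::wait_all(sent);
    return 0;
}
```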
Distributed Component Placement
- Abstract: Implement an EDSL to specify placement policies for components. This could be done similarly to [Chapel's Domain Maps](http://chapel.cray.com/tutorials/SC12/SC12-6-DomainMaps.pdf). In addition, allocators can be built on top of those domain maps for use with C++ standard library containers. This is one of the key features needed to allow users to write efficient parallel algorithms without having to worry too much about the initial placement of their distributed objects in the global address space. A purely hypothetical interface sketch follows the project details below.
- Difficulty: Medium-Hard
- Expected result: Provide at least one policy which automatically creates components in the global address space
- Knowledge Prerequisite: C++
- Mentor: Thomas Heller (thom.heller at gmail.com) and Hartmut Kaiser (hartmut.kaiser at gmail.com)
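Purely as an illustration of the kind of EDSL we have in mind (none of these names exist in HPX today), a placement policy could be as small as a mapping from a global element index to a locality, which distributed containers and allocators would then consult:

```cpp
#include <cstddef>

// hypothetical policy: block-distribute 'size' elements over a set of localities
struct block_policy
{
    std::size_t num_localities;
    std::size_t size;

    // which locality owns global element i?
    std::size_t locality_of(std::size_t i) const
    {
        std::size_t block = (size + num_localities - 1) / num_localities;
        return i / block;
    }
};

// a distributed container would accept such a policy on construction, e.g.
//   hpx::vector<double> v(size, block_policy{num_localities, size});
// and an allocator built on top of it would place each element accordingly.
```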
A C++ Runtime Replacement
- Abstract: Turn HPX into a replacement for the C++ runtime. We currently need to manually "lift" regular functions to HPX threads in order to have all the information needed for user-level threading available (see the boilerplate sketch after the project details below). This project should research the steps that need to be taken to implement an HPX-based C++ runtime replacement and provide a first proof-of-concept implementation for a platform of choice.
- Difficulty: Easy-Medium
- Expected result: A proof-of-concept implementation and documentation on how to run an HPX application without the need for an hpx_main
- Knowledge Prerequisite: C++, dynamic linkers, your favorite OS's ABI for starting programs and linking executables
- Mentor: Hartmut Kaiser (hartmut.kaiser at gmail.com) and Thomas Heller (thom.heller at gmail.com)
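For context, this is the boilerplate every HPX application currently needs in order to lift its entry point onto the runtime; the project's goal is to make an unmodified main() behave the way hpx_main does here.

```cpp
#include <hpx/hpx_init.hpp>

int hpx_main(int argc, char* argv[])
{
    // everything called from here already runs on an HPX user-level thread
    return hpx::finalize();              // tell the runtime to shut down
}

int main(int argc, char* argv[])
{
    return hpx::init(argc, argv);        // start the runtime, which invokes hpx_main
}
```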
A Free Resumable functions implementation
- Abstract: Implement resumable functions in either g++ or clang. This should be based on the corresponding proposal to the C++ standardization committee (see N4286). While this is not a project directly related to HPX, having resumable functions available and integrated with hpx::future would allow us to improve the performance and readability of asynchronous code (an illustration follows the project details below). This project sounds huge, but it actually shouldn't be too difficult to realize.
- Difficulty: Medium-Hard
- Expected result: Demonstrating the await functionality with appropriate tests
- Knowledge Prerequisite: C++; knowledge of how to extend clang or gcc is clearly advantageous
- Mentor: Agustín Bergé (k at fusionfenix.com) and Hartmut Kaiser (hartmut.kaiser at gmail.com)
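To illustrate what is at stake, the two functions below are meant to be equivalent: the first uses today's explicit continuations on hpx::future, the second shows the await syntax proposed in N4286 (commented out, since it needs exactly the compiler support this project would implement).

```cpp
#include <hpx/hpx.hpp>
#include <hpx/include/async.hpp>

hpx::future<int> compute();              // some asynchronous HPX computation, defined elsewhere

// today: chain work explicitly through a continuation
hpx::future<int> twice()
{
    return compute().then(
        [](hpx::future<int> f) { return 2 * f.get(); });
}

// with N4286 resumable functions: reads like straight-line code
// hpx::future<int> twice()
// {
//     int i = await compute();          // suspends this task until the result is ready
//     return i * 2;                     // implicitly wrapped into the returned future
// }
```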
Add a mechanism to integrate C++AMP with HPXCL
- Abstract: The HPXCL project strives to build an infrastructure integrating GPGPUs (CUDA and OpenCL based) into HPX by allowing those tasks to be managed transparently together with 'normal' HPX thread asynchrony. We would like to do the same (or similar) with C++AMP. There are two implementations of C++AMP available: the original Microsoft implementation in Visual C++ and the one supported by the HSA Foundation (as announced here). A minimal C++AMP kernel is sketched after the project details below.
- Difficulty: Medium-Hard
- Expected result: Demonstrating a nicely integrated C++AMP kernel with a simple HPX program
- Knowledge Prerequisite: C++, C++AMP, GPGPUs (CUDA or OpenCL experience might be helpful)
- Mentor: Agustín Bergé (k at fusionfenix.com) and Thomas Heller (thom.heller at gmail.com)
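For reference, this is what a minimal C++AMP kernel looks like (using Microsoft's implementation); the open part of the project is tying the kernel's completion into HPX's future-based asynchrony, which is not shown here.

```cpp
#include <amp.h>
#include <vector>

void scale(std::vector<float>& data, float factor)
{
    // wrap the host data so the accelerator can see it
    concurrency::array_view<float, 1> av(static_cast<int>(data.size()), data);

    // run one GPU thread per element
    concurrency::parallel_for_each(av.extent,
        [=](concurrency::index<1> idx) restrict(amp)
        {
            av[idx] *= factor;
        });

    av.synchronize();                    // copy the results back to 'data'
}
```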
Coroutine like Interface
- Abstract: HPX is an excellent runtime system for task-based parallelism. In its current form, however, results of tasks can only be expressed in terms of returning from a function. There are scenarios where this is not sufficient; one example would be lazy ranges of integers (for example the Fibonacci sequence, the integers from 0 to n, etc.). For those, a generator/yield construct would be perfect! A hypothetical interface sketch follows the project details below.
- Difficulty: Easy-Medium
- Expected result: Implement yield and demonstrate on at least one example
- Knowledge Prerequisite: C++
- Mentor: Hartmut Kaiser (hartmut.kaiser at gmail.com) and Thomas Heller (thom.heller at gmail.com)
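The kind of interface we have in mind could look like the (entirely hypothetical) sketch below; hpx::generator and hpx::yield do not exist yet and would be designed as part of the project.

```cpp
// hypothetical usage -- this does not compile against current HPX
//
// hpx::generator<int> fibonacci()
// {
//     int a = 0, b = 1;
//     while (true)
//     {
//         hpx::yield(a);                // suspend, handing 'a' to the consumer
//         int next = a + b;
//         a = b;
//         b = next;
//     }
// }
//
// for (int f : fibonacci())             // lazily yields 0, 1, 1, 2, 3, 5, ...
// {
//     if (f > 100)
//         break;
// }
```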
Bug Hunter
- Abstract: In addition to our extensive ideas list, there are several active tickets in our issue tracker which are worth tackling as a separate project. Feel free to talk to us if you find something which is interesting to you. A prospective student should pick at least one ticket of medium to hard difficulty and discuss how it could be solved.
- Difficulty: Medium-Hard
- Expected result: The selected issues need to be fixed
- Knowledge Prerequisite: C++
- Mentor: Thomas Heller (thom.heller at gmail.com)
Port the Graph500 Benchmark to HPX
- Abstract: Implement Graph500 using the HPX runtime system. Graph500 is the benchmark used by the HPC industry to model important factors of many modern parallel analytical workloads. The Graph500 list is a performance list of systems using the benchmark and was designed to augment the Top 500 list. The current Graph500 benchmarks are implemented using OpenMP and MPI. HPX is well suited for the fine-grained and irregular workloads of graph applications. Porting Graph500 to HPX would require replacing the inherent barrier synchronization with HPX's asynchronous communication, producing a new benchmark for the HPC community as well as an addition to the HPX benchmark suite. See http://www.graph500.org/ for information on the present Graph500 implementations.
- Difficulty: Medium
- Expected result: New implementation of the Graph500 benchmark.
- Knowledge Prerequisite: C++
- Mentor: Patricia Grubel (pagrubel at nmsu.edu), Thomas Heller (thom.heller at gmail.com), and Dylan Stark (dstark at sandia.gov)
Port Mantevo Mini App(s) to HPX
- Abstract: Implement a version of one or more mini apps from the Mantevo project (http://mantevo.org/) using the HPX runtime system. We are interested in mini applications ported to HPX that have irregular workloads. Some of these are under development and we will have access to them in addition to those listed on the site; of the published ones, MiniFE and phdMESH would be good additions to the HPX benchmark suites. Porting the mini apps would require porting them from C to C++ and replacing the inherent barrier synchronization with HPX's asynchronous communication. This project would be a great addition to the HPX benchmark suite and the HPC community.
- Difficulty: Medium
- Expected result: New implementation of a Mantevo mini app or apps.
- Knowledge Prerequisite: C, C++
- Mentor: Patricia Grubel (pagrubel at nmsu.edu), Thomas Heller (thom.heller at gmail.com), and Dylan Stark (dstark at sandia.gov)
Create an HPX communicator for Trilinos project
- Abstract: The Trilinos project (http://trilinos.org/) consists of many libraries for HPC applications in several capability areas (http://trilinos.org/capability-areas/). Communication between parallel processes is handled by an abstract communication API (http://trilinos.org/docs/dev/packages/teuchos/doc/html/index.html#TeuchosComm_src) which currently has implementations for MPI and serial execution only. Extending the implementation with an HPX backend would permit any of the Teuchos-enabled Trilinos libraries to run in parallel using HPX in place of MPI. Of particular interest is the mesh partitioning library Zoltan2 (http://trilinos.org/packages/zoltan2/), which would be used as a test case for the new communication interface. Note that some new collective HPX algorithms may be required to fulfill the API requirements (see the all-to-all communications project above).
- Difficulty: Medium-Hard
- Expected result: A demo application for partitioning meshes using HPX and Zoltan.
- Knowledge Prerequisite: C, C++, (MPI)
- Mentor: John Biddiscombe (biddisco at cscs.ch) and Thomas Heller (thom.heller at gmail.com)
Implement an HPX debugger
- Abstract: It is currently unreasonably hard to debug large-scale HPX applications. This is mainly due to the fact that debuggers don't understand user-level threads; in addition, extracting useful information from the current runtime state proves to be incredibly hard with gdb or lldb. To remedy this shortcoming, HPX users need 1) the ability to easily attach a debugger to a running large-scale application on a supercomputer and 2) pretty printers and intrinsic runtime capabilities that help debug running HPX applications. A first implementation of this is available at https://github.com/STEllAR-GROUP/hpx/tree/master/tools/gdb.
- Difficulty: Easy-Medium
- Expected result: Major improvement of the debugging experience of HPX applications.
- Knowledge Prerequisite: C++, Python, gdb
- Mentor: Thomas Heller (thom.heller at gmail.com) and Zach Byerly (zbyerly at gmail.com)
SIMD Wrapper for ARM NEON, Intel AVX512 & KNC
- Abstract: Vectorization is imperative for writing highly efficient numerical kernels. The goal of this project is to extend the already existing SIMD wrappers in LibFlatArray (https://github.com/STEllAR-GROUP/libflatarray/blob/master/src/short_vec.hpp) to further architectures (e.g. ARM NEON, Intel AVX512, Intel IMCI, CUDA, etc.) and/or to extend the capabilities of these wrappers. A rough NEON sketch follows the project details below.
- Difficulty: Easy-Medium
- Expected result: Implementation of the short_vec class for ARM NEON as well as Intel AVX512 and IMCI
- Knowledge Prerequisite: C++, SSE/AVX
- Mentor: Andreas Schäfer (andreas.schaefer at fau.de)
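To give an idea of the scope, a NEON specialization in the spirit of short_vec might look like the fragment below; the real LibFlatArray class template has a much richer interface, and the class name here is only illustrative.

```cpp
#include <arm_neon.h>

// illustrative 4-wide float wrapper; only load, add and store are shown
class neon_float4
{
public:
    explicit neon_float4(float const* data)
      : val(vld1q_f32(data))             // load four packed floats
    {}

    neon_float4(float32x4_t v)
      : val(v)
    {}

    neon_float4 operator+(neon_float4 const& other) const
    {
        return neon_float4(vaddq_f32(val, other.val));
    }

    void store(float* data) const
    {
        vst1q_f32(data, val);            // write four packed floats back
    }

private:
    float32x4_t val;
};
```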
Application/counter csv files
- Abstract: Create CSV files combining user-defined application parameters with HPX performance counters, and provide statistics and plotting capabilities on top of them.
- Difficulty: Easy
- Expected result: Changes to HPX that add user-defined parameters to the counter destination file. These could include input parameters as well as outputs such as execution time or other pertinent results from the application or the runtime (e.g. number of threads). From this, create a counter destination .csv file whose header contains short counter labels, such that results of multiple runs with multiple input parameters can be logged together with the counters. Then write Python/pandas or R scripts to do statistical processing and/or plotting of any chosen counter or parameter. Database ideas are welcome.
- Knowledge Prerequisite: Familiarity and willingness to work with C++, Python, and pandas
- Mentor: Patricia Grubel (pagrubel at nmsu.edu), Hartmut Kaiser (hartmut.kaiser at gmail.com)
Integration of the cuBLAS library within hpxcl
- Abstract: The cuBLAS library is the GPU-accelerated version of the Basic Linear Algebra Subroutines (BLAS), e.g. for solving systems of linear equations. These routines are of high importance in many applications for numerical simulation. During this project the student has the opportunity to gain insight into basic linear algebra and into the integration of accelerator cards needed to utilize the full capability of modern supercomputers. The main task of the project is to wrap the cuBLAS subroutines inside HPX actions in order to integrate them into the asynchronous execution graph of HPX (a minimal sketch follows the project details below). Some research will go into the applicability of the existing cuBLAS data structures to the serialization part; depending on the result, additional wrapping functionality for the data structures may need to be provided.
- Difficulty: Easy-Medium
- Expected result: Tight integration of the cuBLAS functionality with hpx::future for asynchronous integration into the HPX execution graph
- Knowledge Prerequisite: C++ and CUDA
- Mentor: Patrick Diehl (diehl at ins.uni-bonn.de)
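A minimal sketch of the wrapping idea, assuming device buffers are already allocated and filled: running the cuBLAS call from an HPX task makes its completion visible as an hpx::future, which is what lets it participate in HPX's execution graph. Stream handling, error checking, and the distributed (action-based) variant are left out.

```cpp
#include <hpx/hpx.hpp>
#include <hpx/include/async.hpp>
#include <cublas_v2.h>
#include <cuda_runtime.h>

// C = A * B for n x n column-major matrices already resident on the device
hpx::future<void> async_sgemm(cublasHandle_t handle, int n,
    float const* dev_a, float const* dev_b, float* dev_c)
{
    return hpx::async([=]()
    {
        float const alpha = 1.0f;
        float const beta  = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
            &alpha, dev_a, n, dev_b, n, &beta, dev_c, n);

        // make the future's readiness mean "the result is available on the device"
        cudaDeviceSynchronize();
    });
}
```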
Project Template
- Abstract:
- Difficulty:
- Expected result:
- Knowledge Prerequisite:
- Mentor: