Posters promoting the presentation of the Bachelor’s thesis, “Distributed Computing Package for PyTorch Framework.” The static version is designed for print, while the animated one is for social media. PyTorch is a framework that offers developers and researchers a way to train neural networks. The abstract shapes therefore echo the visual representation of these networks—circles and rectangles stand for neurons and the links between them. PyTorch Distributed is an extension that enables the distribution of the workload across a cluster of machines. Each machine is responsible for a small part of the workload, which is then synchronized to produce the final result. To convey this idea, circular shapes depict different machines working on a shared goal, while rectangles represent the communication channels that enable the exchange of data between the machines.
Abstract → The rapid development of machine learning has reached the point where researchers and developers are becoming limited by the resources of a single machine and want to utilize whole clusters. To address this issue, the authors have created a distributed computing package for the PyTorch framework. The package aims for extensibility and attempts to strike a balance between performance and ease of use. To accomplish that, it can operate in two modes. The first adopts an interface identical to that of the PyTorch CUDA package, offering a gentle learning curve but sacrificing scalability. The other mode is based on MPI and is designed for experienced engineers who wish to develop custom solutions to achieve peak performance. The core logic is implemented as a C++ library and is exposed to Python through a thin wrapper around its C interface. It has no external dependencies, which makes it easy to build in a variety of environments, while a modular architecture allows users to swap and adjust functionalities to their needs.
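The synchronization idea described above—each machine computing a partial result that is then combined into one shared answer—can be sketched in plain Python. This is a conceptual illustration of an all-reduce-style averaging step, not the package’s actual API; the function and variable names here are invented for the example.

```python
# Conceptual sketch of all-reduce-style synchronization: each "machine"
# holds a partial gradient, and after the exchange every machine ends up
# with the same averaged result. Names are illustrative only.

def all_reduce_mean(partials):
    """Combine per-worker partial results into their element-wise mean."""
    n = len(partials)
    totals = [sum(values) for values in zip(*partials)]
    return [t / n for t in totals]

# Three workers, each holding a partial gradient for a 2-parameter model.
worker_grads = [
    [0.9, 0.3],
    [1.1, 0.5],
    [1.0, 0.4],
]

synced = all_reduce_mean(worker_grads)
print(synced)  # every worker would now continue training with this average
```

In the real system, the exchange happens over communication channels between machines (MPI collectives in the advanced mode) rather than in a single process, but the invariant is the same: after synchronization, all workers hold an identical combined result.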

↳ PyTorch 0.2.0 Release
↳ PyTorch Homepage
