OpenGL pipeline with Map Reduce and MPI
Hubble in a Bottle! was a scientific visualization tool for TIPSY files with numerous optimizations (3DNow! for AMD and SSE for Intel processors) and the ability to run on large supercomputer clusters using the MPI library. The initial releases were developed by Tiziano in 2003 and included a complete graphics pipeline and a first parallel version based on MPI. Thomas Kühne made impressive improvements, from both a theoretical and a practical point of view, published his diploma thesis, and brought the code to production stage in 2005 at the Institute for Theoretical Physics, University of Zürich, on the zBox supercomputer built by Joachim Stadel. Thomas now works as a professor of Theoretical Chemistry in Paderborn.
The utility of Hubble in a Bottle has since been diminished by the latest advances in General-Purpose Graphics Processing Units (GPGPUs). Click here to visit the project site on SourceForge.
Hubble in a Bottle's model can be rotated intuitively and in real time using quaternions, with a quick mode for fast interaction. Three plot modes are available: Maximum Density, Nearest Particle, and Line of Sight Approximation (used when the density file is missing). There are ten color maps and an edge detection filter to highlight contours. The project is Open Source, released under the GNU General Public License.
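The quaternion-based rotation mentioned above can be sketched in a few lines. This is a minimal illustrative example in pure Python (not code from Hubble in a Bottle), assuming the standard Hamilton product and the usual axis/angle construction of a unit quaternion:

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate(v, axis, angle):
    # Rotate vector v by `angle` radians around `axis` via q * v * q^-1.
    n = math.sqrt(sum(c * c for c in axis))
    s = math.sin(angle / 2) / n
    q = (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)
    qc = (q[0], -q[1], -q[2], -q[3])  # conjugate = inverse for a unit quaternion
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), qc)
    return (x, y, z)

# Example: rotate the x axis 90 degrees around z.
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```

Composing two rotations is just multiplying their quaternions, which is why quaternions give the smooth, gimbal-lock-free interactive rotation described above.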
Finite Element Methods with the Trilinos Framework
In the winter semester of 2004, Tiziano worked together with other software developers on porting the FEMAXX code by Roman Geus from Python to the C++ Trilinos framework by Sandia National Labs. The group then published the scientific paper "On a parallel multilevel preconditioned Maxwell eigensolver". While copying routines from Epetra_CrsMatrix.cpp to LocalCrsMatrix.h for some tests, Tiziano noticed a blocking bug in Trilinos: the multivector transpose solve method for lower triangular matrices was incorrect. A simple patch was worked out and sent to the Trilinos team. The bug was then confirmed and fixed here.
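The operation at issue can be illustrated with a small sketch: solving L^T x = b for a lower triangular L is back-substitution on an upper triangular system, since (L^T)[i][j] = L[j][i]. The following pure-Python version only illustrates the mathematical kernel; it is not the Trilinos/Epetra code:

```python
def solve_lower_transpose(L, b):
    # Solve L^T x = b, where L is lower triangular (so L^T is upper
    # triangular): back-substitute from the last unknown upward.
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # Row i of L^T uses column i of L: entries L[j][i] for j >= i.
        s = b[i] - sum(L[j][i] * x[j] for j in range(i + 1, n))
        x[i] = s / L[i][i]
    return x

# Example: L = [[2, 0], [1, 3]], so L^T = [[2, 1], [0, 3]].
print(solve_lower_transpose([[2.0, 0.0], [1.0, 3.0]], [4.0, 6.0]))
```

A multivector solve, as in Epetra, applies the same recurrence to each right-hand-side column; the subtlety is indexing the transposed matrix correctly, which is where a bug of this kind can hide.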
ETH Work Attestation
Introduction to Parallel Computing
A good introduction to parallel computing is the book by Professor Wesley P. Petersen.
Simple neural networks were already available in the 1980s. Developments of the last decade, both in backpropagation algorithms to train larger networks and in hardware with the advent of General-Purpose Graphics Processing Units (GPGPUs), have made the operation of bigger networks possible. These bigger networks are the foundation of the Deep Learning research field.
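As a toy illustration of the backpropagation mentioned above (a generic sketch, not tied to any particular framework or to our cluster), here is a minimal two-layer network trained on XOR with plain stochastic gradient descent; all weights, the seed, and the learning rate are arbitrary choices for the example:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR truth table: not linearly separable, so a hidden layer is required.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

random.seed(1)
# 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 1.0

def forward(x):
    h = [sigmoid(W1[i][0] * x[0] + W1[i][1] * x[1] + b1[i]) for i in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Output delta: gradient of 0.5 * (y - t)^2 through the sigmoid.
        dy = (y - t) * y * (1 - y)
        # Chain rule: propagate the delta back through the hidden layer.
        dh = [dy * W2[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            W2[i] -= lr * dy * h[i]
            W1[i][0] -= lr * dh[i] * x[0]
            W1[i][1] -= lr * dh[i] * x[1]
            b1[i] -= lr * dh[i]
        b2 -= lr * dy

print("loss before:", loss_before, "after:", total_loss())
```

Deep learning scales this same chain-rule scheme to many layers and millions of parameters, which is exactly the workload GPGPUs accelerate.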
The Computational Cluster of deep space computing AG is ideally suited for training and operating Deep Learning neural networks.
Here you can find the presentation "Einführung ins Deep Learning" ("Introduction to Deep Learning"), which we gave at the Asset Optimization Day 2019 in Bern.