About Us


We are a team of researchers and engineers from the fields of HPC, compiler technology, computer arithmetic, and hardware design. We challenge the status quo of existing computing platforms. We believe that existing compiler technologies for multi-core platforms are not suited to modern, highly complex big-data processing applications, because they are built on a sequential execution model. The physical world is parallel, not sequential. We believe that 90% of the hardware resources should be allocated to the computation itself, not the other way around.
Designing parallel computing platforms is hard, because the human (engineer's) mind is not parallel: try thinking about the Eiffel Tower and your plans for the day at the same time. To achieve our mission, we are designing fully automated compiler technologies capable of taking an algorithm described sequentially in a C-like language and transforming it into a data-aware parallel computing platform. We can perform extremely powerful transformations on the algorithm to achieve close to 100% utilization of the computational resources, which translates into 40x better GFLOPS/W (on existing FPGAs) compared to modern multi-core and many-core supercomputing approaches. Our technology scales to thousands of FPGA chips without significant performance penalties.
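To make the idea concrete, here is a sketch of the kind of sequentially written C kernel such a toolchain takes as input (the function and its names are hypothetical, not our actual API). In this simple 1D 3-point stencil, every iteration of the loop reads only its neighbours in the input array and writes a distinct output element, so a parallelizing compiler can prove the iterations independent and map them onto concurrent hardware units:

```c
#include <stddef.h>

/* Sequential 1D 3-point stencil: out[i] depends only on in[i-1..i+1],
 * never on other elements of out, so all iterations of the i-loop are
 * independent and can be executed in parallel. */
void stencil1d(const float *in, float *out, size_t n) {
    for (size_t i = 1; i + 1 < n; i++) {
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
    }
}
```

The programmer writes only this sequential loop; extracting the parallelism, scheduling the operations, and sizing the hardware is the compiler's job.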
Our disruptive technology builds on more than two decades of R&D at the LIP laboratory at ENS Lyon and at Inria, and rests on powerful mathematical concepts that represent and optimize an algorithm in parallel form. It can accelerate the computational performance of the most complex algorithms, such as FD3D stencils and Cholesky factorization.
If you have an application whose complex algorithms perform computations on big data, please join us in our quest!