# fastops

**Repository Path**: ant_code/fastops

## Basic Information

- **Project Name**: fastops
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-04-27
- **Last Updated**: 2020-12-19

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

vector operations library
=================================

This small library accelerates bulk calls of certain math functions on AVX and AVX2 hardware. Currently supported operations are exp, log, sigmoid and tanh.

The library is designed with extensibility in mind. Optimized helper functions are found in `fastops/core/FastIntrinsics.h`, and you are welcome to contribute your own.

The library itself implements operations using AVX and AVX2, but will work on any hardware with at least SSE2 support. The `fastops/fastops.h` header provides an interface to the best versions of the functions via runtime CPU dispatch. The pre-AVX implementation uses the fmath library, which works reasonably well with SSE.

All functions are approximate, yet quite precise. The accuracy of each operation is detailed below along with the operation description. All implementation architectures (SSE, AVX or AVX2) share the same accuracy while increasing performance.

The core implementation (`fastops/core/FastIntrinsics.h`) also contains versions for fixed-size arrays that produce completely unrolled code for uncompromised performance. These may slow down compilation and thus are currently not exposed via the high-level dispatched interfaces. Be careful when using these versions: long fixed-size arrays may lead to extreme code bloat. The regular versions perform on par with them once your arrays are larger than 512 bytes.

A quote from Mikhail Parakhin, the Yandex CTO and library creator:

> In its spirit the library is aimed to aid vectorization of any compute. Just code up the performer class similar to the existing ones and let it fly. Current performers include not only compute, but also memset/memcopy/memmove operations that are always inlined and so work much faster for short arrays. On short strings the gain may be as high as 2x.

How to build
=================================

The library requires a C++ compiler with C++17 support. The library itself only depends on fmath, which is a single-header library. The supporting code (the `eval` and `benchmark` programs) depends on the TCLAP option-parsing library. These dependencies are placed into contrib/libs inside the root fastops directory.

The library is built using cmake, and the build process is simple and straightforward:

1. $ mkdir build install
2. $ cd build
3. $ cmake ../
4. $ make
5. $ make install

Tools
=================================

Two tools are provided along with the library:

* tools/eval - lets you check the accuracy of the operations under different conditions.
* tools/benchmark - compares the performance of the AVX/AVX2 optimized versions against the baseline fmath implementation.

Use `--help` for the set of supported options.

Please note that running these tools on pre-AVX hardware makes little sense:

* tools/benchmark will refuse to run on it and won't call AVX2 functions on pure AVX hardware.
* tools/eval allows selecting the instruction set via the command line and does not perform any checks. It will simply crash if run on incompatible hardware.

Functions
=================================

The dispatched interfaces are available via the `fastops/fastops.h` header file. There are single and double precision versions of each operation. Template parameters control speed/accuracy and alignment:

* The speed/accuracy parameter is a bool that lets you choose between a faster and a more precise version of the algorithm.
* The alignment parameter selects whether the output array is 32-byte aligned. Unaligned versions are commonly believed to perform slower, but we have not specifically measured this for our functions. Choose this parameter according to the actual alignment of your array: the aligned AVX operations on unaligned data may crash.
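The exact spelling of the dispatched interface is defined in `fastops/fastops.h` and is not reproduced here; the snippet below is only a minimal sketch of what a bulk single-precision call might look like. The `NFastOps` namespace, the `Exp` function name, the `(input, count, output)` argument order and the order and polarity of the two boolean template parameters are assumptions made for illustration, so check the header for the real signatures.

```cpp
#include <cstddef>
#include <vector>

#include "fastops/fastops.h"  // dispatched interface (assumed include path)

int main() {
    std::vector<float> in(1024, 0.5f);   // single-precision input
    std::vector<float> out(in.size());   // destination buffer

    // Hypothetical dispatched call. The two bool template parameters stand
    // for the speed/accuracy switch and the output-alignment flag described
    // above; their order and meaning here are assumptions. std::vector gives
    // no 32-byte alignment guarantee, so the unaligned variant is requested.
    NFastOps::Exp<false, false>(in.data(), in.size(), out.data());

    return 0;
}
```

Double-precision buffers are served by the corresponding double-precision versions mentioned above; whether these are overloads or separately named functions should likewise be checked in the header.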
All the library functionality is also directly available via the `fastops/core/FastIntrinsics.h` header, but then you have to take care of hardware compatibility yourself. A tiny AVX and AVX2 hardware detection utility is available via `fastops/core/avx_id.h`.

Below we use the following terms: