Designing and optimizing computing solutions has become extremely challenging due to the ever-changing SW/HW stack, an exploding number of available choices, and their interactions. Unfortunately, limited understanding of trade-offs, combined with cost and time-to-market pressures, means that few design and optimization choices are ever explored. This often results in over-provisioned (expensive) and under-performing (uncompetitive) products.
Over the past 15 years we have led several highly influential research projects on machine-learning-based program optimization and run-time adaptation, which enabled the world's first machine-learning-based compiler and received multiple international awards.
At the same time, we suffered from all of the above problems ourselves, and eventually decided to develop a scientific research methodology, a common workflow framework, and a public repository of knowledge (Collective Knowledge).
The Collective Knowledge framework (CK) is a cross-platform open research SDK developed in collaboration with academic and industrial partners to share artifacts as reusable and customizable components with a unified Python JSON API (see our open repository of AI artifacts); assemble portable and customizable experimental workflows (such as multi-objective AI/SW/HW autotuning and co-design); automate package installation across diverse hardware and environments; crowdsource and reproduce experiments across diverse platforms, from IoT devices to supercomputers; and unify predictive analytics and enable interactive articles. It helps our partners reinvent computer engineering, enable sustainable and portable research software while adapting to the Cambrian AI/SW/HW chaos, and accelerate AI research.
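The unified Python JSON API mentioned above follows a simple convention: every action takes a Python dictionary as input and returns a dictionary whose "return" key is 0 on success, or a non-zero error code together with an "error" message. Below is a minimal sketch of that calling convention using a self-contained stub (the stub's "version" action and its output values are illustrative assumptions, not the real ck.kernel module):

```python
# Illustrative stub of CK's JSON-in/JSON-out calling convention.
# In the real framework one would write:
#     import ck.kernel as ck
#     r = ck.access({'action': ..., 'module_uoa': ...})
# and check r['return'] in the same way as below.

def access(i):
    """Dispatch a CK-style action: dict in, dict out."""
    action = i.get('action', '')
    if action == 'version':
        # Hypothetical action for demonstration purposes only.
        return {'return': 0, 'version': '1.0.0'}
    # Unified error reporting: non-zero 'return' plus an 'error' string.
    return {'return': 1, 'error': 'action not found: ' + action}

# Standard usage pattern: call, check the return code, then use the payload.
r = access({'action': 'version'})
if r['return'] > 0:
    raise RuntimeError(r.get('error', 'unknown error'))
print(r['version'])
```

Because every component speaks this same dict-in/dict-out protocol, workflows can be assembled by chaining actions without bespoke glue code for each tool.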
We successfully used our approach to help Fortune 50 companies and SMEs achieve 2-20x performance increases, 30% energy reductions, 20% code size reductions, and automatic detection of software and hardware bugs for their business-critical use cases.
We now use our novel methodology and CK technology to help our partners initiate and lead groundbreaking, interdisciplinary, and collaborative research projects on fair and reproducible benchmarking, optimization, and co-design of emerging workloads such as AI across diverse hardware and inputs, while reducing time to market and R&D costs by several orders of magnitude.
We also actively support Artifact Evaluation Initiatives at the leading computer systems conferences (CGO, PPoPP, PACT, RTSS, SC) to validate experimental results from published papers and improve artifact sharing and reuse!