Google engineers working on the TensorFlow machine learning (ML) framework have launched a subproject, Multi-Level Intermediate Representation (MLIR), which is intended to serve as a common intermediate language for ML frameworks.
MLIR is an open source project, and a specification is available for those who want to implement it.
MLIR will enable projects that use TensorFlow and other ML libraries to be compiled to more efficient code, taking maximum advantage of the underlying hardware. In addition, MLIR can be used by compilers generally, extending its optimisation advantages beyond ML projects.
MLIR is not a language like Python or C++; it represents an intermediate compilation stage between higher-level languages and machine code. The LLVM compiler infrastructure project uses an intermediate representation of its own, and MLIR was co-created by Chris Lattner, one of LLVM's originators.
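To make the idea of an intermediate representation concrete, here is a minimal, hypothetical sketch in Python: it lowers a small arithmetic expression into a toy three-address IR, loosely in the spirit of LLVM-style IR. The function name and IR format are invented for illustration and are not part of MLIR.

```python
import ast

def to_ir(expr: str) -> list[str]:
    """Lower a simple arithmetic expression into a toy three-address IR.

    This is an illustrative sketch, not MLIR's actual format.
    """
    tree = ast.parse(expr, mode="eval").body
    instructions, counter = [], 0

    def lower(node):
        nonlocal counter
        if isinstance(node, ast.Constant):
            return str(node.value)
        if isinstance(node, ast.BinOp):
            lhs, rhs = lower(node.left), lower(node.right)
            op = {ast.Add: "add", ast.Mult: "mul"}[type(node.op)]
            counter += 1
            tmp = f"%{counter}"
            instructions.append(f"{tmp} = {op} {lhs}, {rhs}")
            return tmp
        raise ValueError("unsupported expression")

    lower(tree)
    return instructions

print(to_ir("2 * 3 + 4"))
# → ['%1 = mul 2, 3', '%2 = add %1, 4']
```

An optimiser or code generator can then work on this flat instruction list without knowing which source language produced it, which is the key property an intermediate representation provides.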
Offering a single, standard IR
Earlier this month, Lattner and Googler Tatiana Shpeisman described how TensorFlow already creates multiple intermediate representations internally. However, these disparate intermediate representations cannot benefit from one another. MLIR offers a single, standard intermediate representation for all of TensorFlow's subsystems, and TensorFlow is currently migrating to use MLIR internally.
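The benefit of a shared representation is that one optimisation pass can serve every subsystem that emits it. The hypothetical sketch below lowers a nested-tuple expression into a toy IR and then runs a constant-folding pass over it; all names and the IR format are invented for illustration.

```python
# Hypothetical sketch: a front end lowers to a shared toy IR, and a
# single optimisation pass works on that IR regardless of which front
# end produced it (the idea behind a common representation).

def lower_tuple(node, out):
    """Lower nested tuples like ("add", 1, 2) into the toy IR."""
    if not isinstance(node, tuple):
        return str(node)
    op, lhs, rhs = node
    lhs, rhs = lower_tuple(lhs, out), lower_tuple(rhs, out)
    tmp = f"%{len(out) + 1}"
    out.append(f"{tmp} = {op} {lhs}, {rhs}")
    return tmp

def constant_fold(ir):
    """Shared pass: fold instructions whose operands are all literals."""
    folded = []
    for inst in ir:
        tmp, _, op, args = inst.split(maxsplit=3)
        a, b = args.split(", ")
        if a.lstrip("-").isdigit() and b.lstrip("-").isdigit():
            value = {"add": int(a) + int(b), "mul": int(a) * int(b)}[op]
            folded.append(f"{tmp} = const {value}")
        else:
            folded.append(inst)
    return folded

ir = []
lower_tuple(("add", ("mul", 2, 3), 4), ir)
print(constant_fold(ir))
# → ['%1 = const 6', '%2 = add %1, 4']
```

Any other front end that emits the same toy IR would get constant folding for free, which is why converging TensorFlow's subsystems on one representation avoids duplicating such passes.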
In addition, MLIR can offer parallelised compilation. The subproject is designed to let a compiler work on different segments of code in parallel, allowing ML models and other such applications to be pushed to production more quickly.
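As a rough illustration of the principle, independent functions that share no state can be compiled concurrently. This sketch uses Python's standard thread pool with an invented stand-in for the lowering step; it is not MLIR's actual machinery.

```python
# Hypothetical sketch: compiling independent code segments in parallel.
from concurrent.futures import ThreadPoolExecutor

def compile_function(name: str) -> str:
    """Stand-in for lowering one function to machine code."""
    return f"{name}: compiled"

# Independent functions share no state, so a compiler can process
# them on separate workers at the same time.
functions = ["matmul", "softmax", "conv2d"]

with ThreadPoolExecutor() as pool:
    artifacts = list(pool.map(compile_function, functions))

print(artifacts)
# → ['matmul: compiled', 'softmax: compiled', 'conv2d: compiled']
```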
Apart from ML, MLIR can also benefit other frameworks and languages.