Intro and NNEF 1.0 Finalization
NNEF, OpenXR, glTF, OpenCL Next, and Educators Program.
SIGGRAPH 2018 is a huge computer graphics expo that moves between host cities around North America from year to year. (Asia has a sister event, SIGGRAPH Asia, which likewise shuffles around.) Over the last twenty years, the North American SIGGRAPH has favored Los Angeles, which hosted the event nine times in that period, but Vancouver won out this year. As you would expect, the Khronos Group, maintainers of OpenGL and Vulkan, are there, and they have a lot to talk about.
- NNEF 1.0 has been finalized and released!
- The first public demo of OpenXR is available and on the show floor.
- glTF Texture Transmission Extension is being discussed.
- OpenCL Ecosystem Roadmap is being discussed.
- Khronos Educators Program has launched.
I will go through each of these points. Feel free to skip around between the sections that interest you!
NNEF 1.0 Has Been Finalized and Released!
Machine learning is a big topic these days, but we’re going to focus on a tiny, technical part of it for today. The premise is that you feed an algorithm a large amount of training data, labeled with the outcomes you want (such as “good” or “bad”), and later present it with real data to see how well it applies what it learned. This splits the problem into two phases: “training” and “inference”.
Training requires a lot of hardware, but inference can be done at runtime on multi-core CPUs, GPUs, or even FPGAs, DSPs, and custom ASICs. The problem that we’re discussing today is getting the data from trained networks into those inference runtimes. As you can probably guess, things get messy when you consider how many combinations of frameworks and runtimes there can be.
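To make the training/inference split concrete, here is a deliberately toy sketch (not a real ML algorithm, and not related to any NNEF tooling): a “training” step runs once, offline, to produce weights, and a cheap “inference” step applies those weights to new data at runtime.

```python
# Toy illustration of the training/inference split.
# Training is done once, ahead of time; inference just applies the
# resulting weights, so it can run on much weaker hardware.

def train(samples):
    """'Training': average the vectors labeled 'good' into a weight vector."""
    good = [x for x, label in samples if label == "good"]
    return [sum(col) / len(good) for col in zip(*good)]

def infer(weights, x):
    """'Inference': score new data as a dot product with the trained weights."""
    return sum(w * v for w, v in zip(weights, x))

# Offline phase: produce weights from labeled training data.
weights = train([([1.0, 0.0], "good"),
                 ([0.0, 1.0], "bad"),
                 ([1.0, 1.0], "good")])

# Runtime phase: apply the weights to unlabeled, real data.
score = infer(weights, [1.0, 0.0])
```

In practice the weights come from a framework like TensorFlow or Caffe, and the inference step runs inside a separate engine, which is exactly the hand-off NNEF standardizes.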
This is where the Khronos Group jumps in with the new Neural Network Exchange Format (NNEF).
In their briefing, they explain NNEF in terms of Facebook’s ONNX format. ONNX is an open-source project that makes data portable between frameworks, and it can be used to move trained networks into inference runtimes. The Khronos Group created NNEF, however, because ONNX is quite heavyweight and yet inflexible in certain ways; it was designed to move data between training frameworks, not so much from a training framework to an inference runtime.
Three areas where NNEF excels over ONNX for training-to-inference deployment are:
- It’s more strictly defined.
  - If you’re designing an ASIC, you’re stuck with whatever you implemented once you fabricate it, so a tightly defined format matters.
- It allows flexible precision.
  - ONNX demands 32-bit floating-point values.
- It allows both flat and compound operations.
  - ONNX requires flat operations.
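For a sense of what the format looks like, below is a small, hypothetical network description in NNEF’s human-readable textual syntax (the names `toy_net`, `conv1_filter`, and the shapes are made up for illustration; consult the NNEF 1.0 specification for the exact grammar):

```
version 1.0;

# A toy graph: one convolution followed by a ReLU activation.
graph toy_net( input ) -> ( output )
{
    input  = external(shape = [1, 3, 224, 224]);
    filter = variable(shape = [16, 3, 3, 3], label = "conv1_filter");
    conv1  = conv(input, filter);
    output = relu(conv1);
}
```

The flat-versus-compound distinction mentioned above is about whether an exporter must break everything down into primitive operations like these, or may describe larger building blocks as single compound operations that the consumer expands; NNEF permits both styles.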
The provisional specification has been out since December, but the finalized format was just released! To illustrate how recent this decision was: the vote took place sometime between Monday, August 6th and Wednesday, August 8th.
Check out the press release below or see the rest of the news on Page 2.
Khronos Group Releases Final NNEF 1.0 Standard For Deployment of Trained Neural Networks
The Neural Network Exchange Format 1.0 specification is now available, with adoption underway from Khronos members, supported by an ecosystem of tools.
SIGGRAPH, VANCOUVER – The Khronos™ Group, an open consortium of leading hardware and software companies creating advanced acceleration standards, announces the ratification and the public release of the Neural Network Exchange Format (NNEF™) 1.0. After gathering feedback from the provisional specification, Khronos releases NNEF 1.0 as a stable, flexible, and extendable open standard for hardware manufacturers to reliably enable widespread deployment of neural networks onto edge devices. With this release, an ecosystem of new tools is now also available on GitHub including an NNEF parser and converters from Tensorflow and Caffe. Importers into popular inferencing environments including Android’s Neural Network API are also being developed.
NNEF was created to reduce fragmentation by facilitating the exchange of neural networks among training frameworks and inference engines, increasing the freedom for users to mix and match the inference engines and training frameworks of their choice. The standard, the result of collaboration between industry-leading members of the Khronos Working Group, guarantees the stability that hardware and software companies can rely on for product deployment while maintaining the flexibility required to respond to the needs of rapidly-moving industries.
NNEF 1.0 will allow reliable export from the rapidly growing number of frameworks, including Torch, Chainer, Theano, PyTorch, and MXNet in addition to those for which tools already exist. A complete workflow from training, through optimization, to deployment is possible using NNEF as the standard transfer format between tools. Open source tools are available to allow easy creation of importers into custom and embedded edge-inferencing environments. For example, open source efforts are already underway to produce importers into more general-purpose environments such as NN API and CoreML.
NNEF was designed to accommodate a wide range of use-cases and network types. Its basic format is designed to be both human readable and easy to parse, whereas its extensions allow for human editing of large networks. It is designed to accommodate rapid evolution of frameworks without the need to change the base standard, and provides an extension mechanism to handle specific issues such as custom data formats for trained network weights.
At launch, the standard will be supported by two open source TensorFlow converters, both for network descriptions based on protobuf and Python code, and a converter for Caffe. A Caffe2 open-source converter in development by Au-Zone Technologies will be available in Q3 2018. Various tools from member companies AImotive and AMD will be available, and an importer to the Android NN API is under development by a team from National Tsing-Hua University of Taiwan.
“Khronos recognized a growing format compatibility problem for companies wanting to deploy trained neural networks onto edge devices. We set out to build the first standard platform for engineers to easily deploy networks from training frameworks to inference engines. Today, we are proud to release NNEF 1.0 as a stable, open specification that will allow existing hardware to work but continue to evolve through flexible extension mechanisms to keep up with this fast changing field of machine learning,” said Khronos NNEF Working Group Chair Peter McGuinness. “In December 2017 we released the developer preview of NNEF and made an open call for industry feedback. Community response has been tremendous, confirming the demand for this standard and enabling us to achieve a responsive and complete NNEF 1.0 specification.”