57. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 2/2

Published: June 25, 2024, 7:57 a.m.

b"

Hello and welcome back to the AAIP.

This is the second part of my interview with Eldar Kurtic about his research on how to optimize inference of deep neural networks.

In the first part of the interview, we focused on sparsity and how high unstructured sparsity can be achieved without losing model accuracy on CPUs and, in part, on GPUs.

In this second part of the interview, we focus on quantization. Quantization reduces model size by representing the model in numeric formats with lower precision while retaining model performance. For example, a model trained in a standard 32-bit floating-point representation can be converted through post-training quantization to a representation that uses only 8 bits, reducing the model size to one fourth.
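
To make this concrete, here is a minimal sketch of symmetric post-training quantization of a weight tensor to 8-bit integers. It is an illustration only, not the specific method discussed in the episode, and the single per-tensor scale is a simplification; practical schemes typically use per-channel or per-group scales.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric post-training quantization of float32 weights to int8."""
    # One scale for the whole tensor, chosen so the largest weight maps to 127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Approximate reconstruction of the original float32 weights.
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes / q.nbytes)  # 4.0 -- int8 storage is one fourth of float32
```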

We discuss how current quantization methods can quantize model weights down to 4 bits while retaining most of the model's performance, and why doing the same with the model's activations is much trickier.
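
One common reason activations are harder, shown here as a general illustration rather than a claim from the interview: activations often contain large outliers, and with a single shared scale the outlier dictates that scale, leaving almost no resolution for the typical small values.

```python
import numpy as np

# Quantize synthetic activations that contain one large outlier.
acts = np.random.randn(10_000).astype(np.float32)
acts[0] = 100.0  # a single outlier

scale = np.abs(acts).max() / 127.0           # the outlier dictates the scale
q = np.clip(np.round(acts / scale), -127, 127)
err = np.abs(acts - q * scale)

print(f"scale = {scale:.3f}")                # ~0.787 instead of ~0.03 without the outlier
print(f"mean abs error = {err.mean():.4f}")  # large relative to typical |act| ~ 0.8
```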

Eldar explains how current GPU architectures create two different types of bottlenecks: memory-bound and compute-bound scenarios. In memory-bound situations, most of the inference time is spent transferring model weights from memory. It is exactly in these situations that quantization has its biggest impact, since reducing the model's size directly accelerates inference.
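
A back-of-the-envelope sketch of this memory-bound limit, with assumed numbers (a hypothetical 7B-parameter model and 1 TB/s of GPU memory bandwidth):

```python
# Rough lower bound on per-token decoding latency in the memory-bound regime:
# every generated token must stream all model weights through the GPU once.
params = 7e9        # assumed: 7B-parameter model
bandwidth = 1e12    # assumed: 1 TB/s GPU memory bandwidth

for name, bytes_per_weight in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    weight_bytes = params * bytes_per_weight
    ms_per_token = weight_bytes / bandwidth * 1e3
    print(f"{name}: {weight_bytes / 1e9:5.1f} GB -> at least {ms_per_token:4.1f} ms/token")
```

Halving the bytes per weight halves the transfer time, which is why quantization pays off most in this regime.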

Enjoy.

## AAIP Community

Join our Discord server to ask guests directly or discuss related topics with the community.

https://discord.gg/5Pj446VKNU

### References

Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

Neural Magic: https://neuralmagic.com/

IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/

"