Transformer models have achieved remarkable results across a wide range of applications. However, their scalability is limited by the quadratic time and memory complexity of the self-attention mechanism with respect to sequence length.
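To make the quadratic cost concrete, here is a minimal NumPy sketch of scaled dot-product self-attention (not taken from the article; all names and sizes are illustrative). The point of interest is the score matrix, whose shape is (n, n) for a length-n sequence, so both its computation and its storage grow quadratically with n.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a length-n sequence.

    The `scores` matrix below has shape (n, n), which is the source
    of the quadratic time and memory cost in sequence length.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv           # each (n, d)
    scores = q @ k.T / np.sqrt(k.shape[-1])    # (n, n): quadratic in n
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # row-wise softmax
    return weights @ v                         # (n, d)

# Illustrative sizes: doubling n quadruples the score matrix.
n, d = 1024, 64
rng = np.random.default_rng(0)
x = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)  # materializes a 1024x1024 score matrix
```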