Tensors are something of a wild west of applied math these days: a brave new territory, and a somewhat dangerous one when it comes to procedures one might be tempted to borrow from matrices. In particular, as one well-known paper title puts it, “most tensor problems are NP-hard”. Despite this pessimistic result, people are starting to exploit the additional structure available in tensor decompositions for great gains in machine learning, and in many-particle quantum mechanics and quantum computation they are almost unavoidable. A recent arXiv preprint argues that the power of deep neural networks can be understood in terms of the expressiveness of tensor decompositions. The paper, titled “On the Expressive Power of Deep Learning: A Tensor Analysis”, is in my opinion full of great insights, along with tools from measure theory that sharpen the way we think about both neural networks and tensors in general. It is well worth a read if these topics interest you.
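To make the phrase “tensor decomposition” a little more concrete, here is a minimal numpy sketch (not from the paper) of fitting a CP (rank) decomposition of a small 3-way tensor by alternating least squares. The function names `cp_als` and `khatri_rao` are my own, and ALS is only a heuristic: actually computing the exact CP rank is one of those NP-hard tensor problems mentioned above.

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (m, R) and Y (n, R) -> (m*n, R)."""
    m, R = X.shape
    n, _ = Y.shape
    return (X[:, None, :] * Y[None, :, :]).reshape(m * n, R)

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit a rank-`rank` CP decomposition of a 3-way tensor T by alternating
    least squares, returning factors A, B, C with
    T[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings (C-order reshapes, consistent with khatri_rao above).
    T0 = T.reshape(I, J * K)                      # rows indexed by mode 0
    T1 = np.moveaxis(T, 1, 0).reshape(J, I * K)   # rows indexed by mode 1
    T2 = np.moveaxis(T, 2, 0).reshape(K, I * J)   # rows indexed by mode 2
    for _ in range(n_iter):
        # Each factor update is an ordinary (matrix) least-squares solve.
        A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Build a random rank-3 tensor and see how closely ALS reconstructs it.
rng = np.random.default_rng(42)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```

Each inner update is just a matrix least-squares problem, which is the sense in which a decomposition exposes extra low-rank structure that a raw multi-way array does not.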