Trying to suggest to Lispers a way to omit parentheses is
like trying to suggest to Haskellers a way to omit monads.

Typed lambda-terms are a nice device that produces readable and comparatively short terms. Computer scientists and logicians know this and therefore use them; not so the rest of mathematics. In analysis, I still mostly see notation like f:U\rightarrow\mathbb{C},\ f(x)=x^2. As a typed lambda-term, one could just write f=(\lambda x^U.x^2)^{U\rightarrow\mathbb{C}}, or simply f=\lambda x^U.x^2 (since the image type is clear anyway). In this case, of course, it's neither shorter nor clearer than the notation before.
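To make the parallel to programming concrete: in Haskell, the same function can be written as a named, type-annotated lambda term. This is a minimal sketch; the name f and the choice of Double for both U and the image type are illustrative assumptions.

```haskell
-- f = \x -> x^2, with a type signature playing the role of the
-- annotation U -> C (here both types are assumed to be Double).
f :: Double -> Double
f = \x -> x ^ 2

main :: IO ()
main = print (f 3)  -- 9.0
```

The type signature can often be omitted, just as the superscript annotations can, because it is inferred from context.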

But in a recent lecture I took, we were talking about curves that are homotopic to the one-pointed curve. The one-pointed curve at a point A had to be defined as f:\,]0;1[\,\rightarrow\mathbb{C},\ f(x)=A\ \forall x\in\,]0;1[, which is syntactically wrong, because universal quantifiers have to be written in front of the expression they quantify - but at least the mistake is purely syntactical; there are people who really write "for all" every time. In lambda notation, this would just have been f=(\lambda x^{]0;1[}.A)^{]0;1[\rightarrow\mathbb{C}} with full type annotations, but in fact we can omit these and just write f=\lambda x.A. Even further, we can use \lambda x.A inline instead of naming this function at all. When dealing with a lot of functions that are not needed often, inlining can be a very nice utility.
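The constant curve and the inlining idea can both be sketched in Haskell. The value 5 standing in for the point A, and the name constCurve, are hypothetical choices for illustration.

```haskell
-- The one-pointed curve at A, as a lambda term: \x -> A.
-- Here A is assumed to be the number 5.
constCurve :: Double -> Double
constCurve = \_ -> 5

main :: IO ()
main = do
  print (constCurve 0.3)                    -- 5.0
  -- Inlining: the lambda is used directly, without a name.
  print (map (\x -> 5 + 0 * x) [0.1, 0.9])  -- [5.0,5.0]
```

Anonymous lambdas passed directly to other functions, as in the map line, are exactly the "use \lambda x.A without naming it" move from the text.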

Well, one has to get used to this kind of notation. And one may ask why one should write \lambda x.A instead of x\mapsto A (the main reason could be that lambda notation is already widespread). But I think it would make huge parts of analysis (and linear algebra) much more enjoyable.