I would like to start this essay with an epigraph by a (probably eastern-European) mathematician on this subject, basically agreeing with me: complaining that newer mathematicians are less likely to say "the ball is in a line on the table" than to produce the mathese rendition of the same. But, unfortunately, I cannot find it.

Preamble: My disdain for mathematics

My disdain for mathematics is well known, and I am in the unique position of being the person among my acquaintances with the highest level of mathematical education and yet a disdain for mathematics. This is probably mostly due to my unique personality traits. I'm highly intellectual, yet easily bored; particular about knowing the purposes of everything I do, yet disdaining most purposes; and, most importantly, smart enough to be good enough at math, but not smart enough for it to be easy for me.

Also, I'm very credulous, and at each level of mathematical education I would find the process tedious and pointless, but people would say to me "oh, that isn't REAL math, you do REAL math the next level up", continuously tricking me into continuing. (For anyone tempted along the same route: it turns out there ARE high(ish)-level insights in mathematics, but you still have to deal with all the regular math crap while you're getting to them, so it's not worth it unless you already like the math crap.)

But I think at least some of the blame goes to the modern practice of mathematics itself. For the rest of this essay, I will refer to "the contemporary practice of high-level mathematics, especially as taught in upper-level college courses (though not strictly limited to this)" by the metonym "math". I think there are several reasons I find math frustrating, and I will deal with the rest of them here to give a broad view of the subject, before I proceed to the main subject of this essay, mathese.

1.
Math does indeed include some interesting insights into the world, which makes it tempting to engage with (or, in other cases, the practical results one needs to investigate are held hostage in the edifice of mathematics).

1b. Similarly, basic arithmetic, algebra, and even calculus are quite useful, suggesting that higher math might be useful too. Tempting, tempting...

2. Actually doing math is incredibly tedious, probably because there are so many abstract details to keep track of. Also, the process of computation is slow, and one has to be attentive to make sure the operations are carried out correctly. This is why I'm a computer science guy instead of a math guy; I want the computer to run the numbers for me. Many of the great mathematicians of the past seem to have been fantastically fast at working out arithmetic, so I don't think this bothered them as much. Some of them even seem to have actively enjoyed the part of this activity I consider tedious.

3. Most math really is pointless.

4. Most of the concepts that enthrall laymen about math, and thus saturate popular math writing, are abstract claims that seem to have a lot more earth-shattering relevance to everything than they actually do. Infinity is a good example of this. Occasionally someone will try to prove something about how many types of infinity there are, as though that means something. Whereas, in fact, it's mostly an exercise in what odd conclusions you can draw from odd premises. (Answer: arbitrarily, infinitely, many.) The last thing in mathematics that produced interesting consequences and involved infinity was calculus, and the original calculus worked by ignoring that infinity was an ineffable construct according to mathematicians and jolly well effing it anyway.
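(For concreteness, the rigorous device mathematicians eventually invented to paper over the infinitesimals, the epsilon-delta definition of a limit, can be stated in one line. This is the standard textbook formulation, in my transcription, not drawn from any particular source:)

```latex
% The epsilon-delta definition of a limit (standard formulation):
\lim_{x \to a} f(x) = L
\quad\text{means}\quad
\forall \varepsilon > 0,\ \exists \delta > 0,\ \forall x:\
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.
```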
This so disturbed mathematicians that, dissatisfied with merely getting the right answers, they invented the epsilon-delta definition of a limit so they could finally be satisfied with what they were doing, as though they were really doing anything. Well, they were probably using it to figure out a more complicated differential of some kind. Well, nevertheless. Not really a problem with math per se, but with the temptations of the popular press.

5. Many ye olde philosopher guys thought math was super cool and going to solve all our problems. They were kind of right, in that Newton would eventually invent calculus, Einstein would invent relativity, etc., but they predicted this back in an age when math was underabundant. Now math is plentiful and most of it is useless, but sometimes I still read a philosopher who urges me to study math. Again, not a problem of math but of my temptations.

Section 1: Introducing mathese and my problems with it

In my critique of mathese here I will try to stay away from such trite and trivial observations as "X, x, Χ, χ, and × are all identical-looking mathematical terms lol" (that's uppercase x, lowercase x, uppercase chi, lowercase chi, and the multiplication cross, by the way), as careful choice of symbols or penmanship can solve that problem. Of course, most people choose not to solve that problem, but anyhow.

Math papers and communications are typically written in what I will here call "mathese", a combination of English words and mathematical notation. I will refer to the opposite of mathese as "englishese", as I find that amusing. One could imagine coining "englishese" to mean the particular type of dead-fish writing American children learn in their English classes, but in this essay I just mean normal writing.
Googling "mathese" reveals that Ohio State University professor of linguistics Carl Pollard has already used this term to describe, well, mathese, and he has written a book chapter https://www.asc.ohio-state.edu/pollard.4/680/chapters/mathese.pdf and a presentation https://www.asc.ohio-state.edu/pollard.4/680/slides/mathesesl.pdf about it. I have skimmed these, and they seem to be congruent with my definition of mathese, so I encourage you to read them if you want to know more about the subject than I have said or am about to say.

I take issue with the common, dare I say naive, assertion that mathematicians communicate in mathese because it is more precise than more natural ways of expressing oneself. In truth, I think there is no difference in precision between a well-constructed mathese sentence and a well-constructed englishese sentence about math, although I grant that one can easily construct imprecise sentences in either language. I think this is a naive assertion because I suspect that if you asked a working mathematician whether mathese was more precise, they would say no.

In fact, one of the facts about working mathematicians most surprising to naive observers is that mathematicians do not require or even value precision as much as naives assume. There's probably a whole essay in this topic on its own, probably better written by a more knowledgeable person, but I'll just note an assertion I have no source for: a large number of proofs in high mathematics are incorrect or incomplete. The author produces a correct mathematical result, but their proof—which naives often think of as an incontrovertible train of logic leading to a result—either leaves out steps that other mathematicians are expected to just assume would turn out OK if filled in, or is actually just wrong. When mathematicians are communicating before they write up their results, I am led to believe they are even messier.
Anyway, I think mathematicians write mathese because it's what they were taught, and now they are used to it and it doesn't bother them. Perhaps the translation to intuitive concepts happens so quickly in their heads that they no longer notice any speed bump.

Please note that I am not opposed to notation, abbreviation, or jargon per se. These things can be helpful. "i∈ℤ" is very brief, and such brevity is often good. (I've read old math texts and they're also hilariously bad, because they laboriously have to talk about e.g. "the root of an equation where the half is the third part of the blah blah blah" when they mean e.g. sqrt(3).) It just so happens that these conventions are often employed in unnatural and unhelpful ways that hinder understanding rather than aid it. I am also not opposed to close reading of mathematical text. That seems to me unavoidable, as condensed notation must be closely considered to be illuminating.

But the problem with mathese as it is commonly written is that it is extremely roundabout, inviting delay and confusion. For instance, for some reason, mathematicians love talking about properties of things as set membership. "i∈ℤ", as it is usually notated, means "i is an integer". But, actually, it means "i is an element of the set of integers". This isn't too bad on its own, because you can just look at all the ∈s in a text (clearly derived from the E of "element", as ⊂ is from the (proper) subset relation) and loudly say "is a" to yourself. But when you have a problem that begins with elements drawn from some sets, proceeds through constraints that are drawn from some sets, and ends by asking whether your result is in some other set, you're bound to get a little more confused than you should have to be.
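The property-versus-membership distinction shows up in programming too, and a toy sketch may make it concrete. This is my own construction (no standard library models ℤ this way): the same fact about i, phrased once as a property ("i is an integer") and once as membership in a set-like object.

```python
# Two readings of "i ∈ ℤ", sketched in Python (my toy construction).

def is_an_integer(i):
    """Property reading: 'i is an integer'."""
    return isinstance(i, int)

class Integers:
    """Set-membership reading: ℤ as an object you test membership in."""
    def __contains__(self, i):
        return isinstance(i, int)

Z = Integers()

print(is_an_integer(4))  # True
print(4 in Z)            # True -- the same fact, phrased as set membership
print("x" in Z)          # False
```

Note that ℤ is infinite, so the "set" here cannot be materialized at all; it only exists as a membership test, which is to say, as the property it was standing in for.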
I was interested in category theory because, according to some breathy reports, it promises a "different grounding" for mathematics than set theory, which naively suggests that mathematicians might use it to talk about math in a more intuitive way. Alas, this is not so. Mathematicians are trapped in the castle of mathese.

Another thing mathematicians love doing is giving things arbitrary (usually one-letter) names instead of using pronouns or phrases to refer back to them. They will often write "consider a function f where..." and then refer to f constantly, instead of "the function" or "it". This is very jarring, since no other language works like this. When you have a significant chunk of one or more alphabets in play, you begin to wish the author would refer simply to "the first function" or "the higher-valued function" instead.

Erring in the other direction, mathematicians love treating complex constructions as though they were sensible subjects. For instance, sometimes in math you might be asked to reason about something as mixed up as h(g(f(x))), and various statements will be made about h∘g∘f^-1 or similar. Often, if this compound must be considered, it would make more sense to define a new variable to capture this object of consideration. As a computer science guy, I would suggest a multi-letter variable, because I am a heathen. In fact, I would suggest referring to h∘g∘f^-1 as "the inverse combined function", in just so many words. The best of both worlds could perhaps be achieved with the phrase "the inverse combined function h∘g∘f^-1". Also a usual pitfall of mathese: discussion will be made of h∘g∘f^-1, and then, unceremoniously, discussion will change to h∘g∘f.

Another pitfall of mathese is the misplacement of constraints.
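The multi-letter-variable suggestion above can be sketched directly. The functions f, g, and h here are arbitrary toys of my own choosing, just to make the compound concrete:

```python
# Naming the compound h∘g∘f⁻¹ descriptively, per the essay's suggestion.
# f, g, h are toy stand-ins (my own), chosen so f has an easy inverse.

def f_inverse(x):   # if f(x) = x - 1, then f⁻¹(x) = x + 1
    return x + 1

def g(x):
    return 2 * x

def h(x):
    return x ** 2

def inverse_combined(x):
    """The 'inverse combined function' h∘g∘f⁻¹, in just so many words."""
    return h(g(f_inverse(x)))

print(inverse_combined(3))  # h(g(4)) = h(8) = 64
```

The payoff is exactly the one claimed: later statements can be made about `inverse_combined` by name, and a silent switch to the non-inverse compound would at least be visible in the spelling.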
If one needed to imagine an integer between 3 and 5, for example, this would often be written "i∈ℤ∋3<i<5", with the constraint trailing after the set-membership clause, rather than simply "an integer between 3 and 5". For a more extended example, compare the following passage of mathese (adapted from a real paper) with my rendition of it in englishese:

> This paper deals with the problem of designing a Turing machine which, when confronted by the number pair (m, r), computes as efficiently as possible a function g(m, r) such that f_m(g(m,r))=r

> This paper deals with the problem of designing a second Turing machine which, when given as input the first Turing machine and a target number, computes as efficiently as possible an input value that will cause the partial function to produce the target number.

I highlight these passages not because they are exactly the same, but precisely because they are different! The astute and indefatigable reader will note that the first passage discusses designing a Turing machine to use a number pair to compute a function such that something or other, and the second passage discusses designing a Turing machine to take a Turing machine and a number to produce a number. However, the first passage relies on various unspecified mathematical abstractions to get around to saying what the second passage says, in exactly as vague, yet more roundabout, a way. If there is no definitive way to number Turing machines, what's the benefit of saying the second Turing machine must take a number, thus implying the reader must suppose a correspondence between numbers and Turing machines, instead of saying the second Turing machine must take a Turing machine, thus leaving that correspondence out of it entirely? (I wanted also to elide the distinction between partial functions and Turing machines, but I didn't, as it would be too radical an adaptation of the text.)
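What both passages describe, stripped of notation, is an inversion search: given a machine (or its index m) and a target r, find an input the machine maps to r. A toy rendition, entirely my own (real Turing machines replaced by a small family of total functions, and a search bound imposed because the genuine problem need not halt):

```python
# A toy sketch of the quoted task: find n such that f_m(n) = r.
# The "numbering" m ↦ f_m is a stand-in of my own invention.

def make_f(m):
    """Stand-in for the numbering m ↦ f_m: machine m adds m to its input."""
    return lambda n: n + m

def g(m, r, bound=10_000):
    """Search for an input that machine m maps to the target r."""
    f_m = make_f(m)
    for n in range(bound):
        if f_m(n) == r:
            return n
    return None  # gave up -- inversion is undecidable in general

m, r = 7, 19
n = g(m, r)
print(n, make_f(m)(n))  # 12 19 -- so f_m(g(m, r)) = r, as required
```

Note that the englishese version of the program is just as easy to write against: `g` could equally take the function object itself rather than its index, which is precisely the rhetorical point about the number-versus-machine phrasing.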