Mathematical Notation: Past and Future (2000)
Most mathematical notation now in use is between one and five hundred years old. I will review how it developed, what existed in ancient and medieval times, which notations were introduced by Leibniz, Euler, Peano and others, and how they came into widespread use in the nineteenth and twentieth centuries. I will consider how much mathematical notation has in common with ordinary human languages. I will describe the main principles that have been discovered for ordinary human languages, and which of them apply to mathematical notation and which do not.
|
Mathematical Notation: Past and Future (2000)
Most mathematical notation now in use is
between one and five hundred years old. I will review how it
developed, with precursors in antiquity and the Middle Ages,
through its definition at the hands of Leibniz, Euler, Peano and
others, to its widespread use in the nineteenth and twentieth
centuries. I will discuss the extent to which mathematical
notation is like ordinary human language—albeit international in
scope. I will show that some general principles that have been
discovered for ordinary human language and its history apply to
mathematical notation, while others do not.
Ordinary language involves strings of text; mathematical
notation often also involves two-dimensional structures. I will
discuss how mathematical notation might make use of more general
structures, and whether human cognitive abilities would be up to
such things.
|
And basically the way to let people interact with something as sophisticated as this is to use some kind of language. |
And basically the way that people seem to communicate anything sophisticated that they want to communicate is by using some kind of language. Normally, languages arise through some sort of gradual historical process of consensus. But computer languages have historically been different. The good ones tend to get invented pretty much all at once, normally by just one person. |
In a sense we were building on English, since the names of those chunks are based on ordinary English words. That means that someone who simply knows English can already understand something of what is written in Mathematica. |
In a sense, we were leveraging on English in a crucial way in doing this because the names we used for those chunks were based on ordinary English words. And that meant that just by knowing English, one could get at least somewhere in understanding something written in Mathematica. But of course the Mathematica language isn't English—it's a tremendously stylized fragment of English, optimized for explaining computations to Mathematica. One might think that perhaps it might be nice if one could just talk full English to Mathematica. After all, we already know English so we wouldn't have to learn anything new to talk to Mathematica. But I think there are some good reasons why we're better off thinking at a basic level in Mathematica than in English when we think about the kinds of computations that Mathematica does. But quite independent of this we all know that having computers understand full natural language has turned out to be very hard.
|
I think mathematical notation is quite an interesting field of study for linguistics. |
And, in fact, I think mathematical notation is a pretty interesting example for the field of linguistics. You see, what's mostly been studied in linguistics has actually been spoken languages. Even things like punctuation marks have barely been looked at. And so far as I know, no serious linguistic study of mathematical notation has been made at all in the past. Normally in linguistics there are several big directions that one can take. One can see how languages evolved historically. One can see what happens when individual people learn to speak languages. And one can try to make empirical models of the structures that languages end up having.
|
And if one looks back, one can see three basic traditions from which mathematics as we now know it emerged: arithmetic, geometry, and logic. |
And if one traces things back, there seem to be three basic traditions from which essentially all of mathematics as we know it emerged: arithmetic, geometry, and logic. Arithmetic comes from Babylonian times, geometry perhaps from then but certainly from Egyptian times, and logic from Greek times. And what we'll see is that the development of mathematical notation—the language of mathematics—had a lot to do with the interplay of these traditions, particularly arithmetic and logic.
|
Arithmetic presumably arose from the needs of commerce, for things like counting money, and was later picked up by astrology and astronomy. Geometry apparently arose from land surveying and similar problems. And logic, as we know, was born from attempts to codify arguments made in natural language. |
Arithmetic presumably came from commerce and from doing things like counting money, though it later got pulled into astrology and astronomy. Geometry presumably came from land surveying and things like that. And logic we pretty much know came from trying to codify arguments made in natural language. |
And in fact, one of the biggest issues with numbers, which I suspect will come up many more times, is how close the correspondence should be between ordinary natural language and the language of mathematics. |
And actually one of the biggest issues with numbers, I suspect, had to do with a theme that'll show up many more times: just how much correspondence should there really be between ordinary natural language and mathematical language? Here's the issue: it has to do with reusing digits, and with the idea of positional notation. You see, in natural language one tends to have a word for "ten", another word for "a hundred", another for "a thousand", "a million" and so on. But we know that mathematically we can represent ten as "one zero" (10), a hundred as "one zero zero" (100), a thousand as "one zero zero zero" (1000), and so on. We can reuse that 1 digit, and make it mean different things depending on what position it appears in the number. Well, this is a tricky idea, and it took thousands of years for people generally to really grok it. And their failure to grok it had big effects on the notation that they used, both for numbers and other things. |
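To make the positional idea concrete, here is a minimal sketch in Python (the helper name place_values is my own illustrative choice, not something from the talk) showing how one digit is reused to mean different amounts depending on where it sits:

```python
def place_values(n, base=10):
    """Return (digit, digit * base**position) pairs, most significant first."""
    digits = [int(d) for d in str(n)]
    positions = range(len(digits) - 1, -1, -1)
    return [(d, d * base ** p) for d, p in zip(digits, positions)]

# The same digit 1 stands for one thousand, one hundred, ten and one in turn:
print(place_values(1111))                      # [(1, 1000), (1, 100), (1, 10), (1, 1)]
# Adding up the place values recovers the number itself:
print(sum(v for _, v in place_values(1994)))   # 1994
```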
In general, this failure to see that one could introduce names for numerical variables is an interesting case of a language or notation limiting one's thinking. That is something that is certainly discussed in ordinary linguistics. In its most popular formulation, the idea is known as the Sapir-Whorf hypothesis (the hypothesis of linguistic relativity). |
Generally, this failure to see that one could name numerical variables is sort of an interesting case of the language or notation one uses preventing a certain kind of thinking. That's something that's certainly discussed in ordinary linguistics. In its popular versions, it's often called the Sapir-Whorf hypothesis. And of course those of us who've spent some part of our lives designing computer languages definitely care about this phenomenon. I mean, I certainly know that if I think in Mathematica, there are concepts that are easy for me to understand in that language, and I'm quite sure they wouldn't be if I wasn't operating in that language structure. |
Newton was not particularly interested in notation.
|
Newton was not a great notation enthusiast. But Leibniz was a different story. Leibniz was an extremely serious notation buff. Actually, he thought that having the right notation was somehow the secret to a lot of issues of human affairs in general. He made his living as a kind of diplomat-analyst, shuttling between various countries, with all their different languages, and so on. And he had the idea that if one could only create a universal logical language, then everyone would be able to understand each other, and figure out anything. There were other people who had thought about similar kinds of things, mostly from the point of view of ordinary human language and of logic. A typical one was a rather peculiar character called Ramon Lull, who lived around 1300, and who claimed to have developed various kinds of logic wheels—that were like circular slide rules—that would figure out answers to arbitrary problems about the world. |
Generally speaking, in the 1880s Peano developed something very close to the notation used for most modern set-theoretic concepts. |
Actually, in the 1880s Peano ended up inventing things that are pretty close to the standard notations we use for most of the set-theoretical concepts. But, a little like Leibniz, he wasn't satisfied with just inventing a universal notation for math. He wanted to have a universal language for everything. So he came up with what he called Interlingua, which was a language based on simplified Latin. And he ended up writing a kind of summary of mathematics—called Formulario Mathematico—which was based on his notation for formulas, and written in this derivative of Latin that he called Interlingua. Interlingua, like Esperanto—which came along about the same time—didn't catch on as an actual human language. But Peano's notation eventually did. At first nobody much knew about it. But then Whitehead and Russell wrote their Principia Mathematica, and in it they used Peano's notation. I think Whitehead and Russell probably win the prize for the most notation-intensive non-machine-generated piece of work that's ever been done. |
There is one point that comes up again and again in this history: notation, like ordinary language, dramatically divides people. I mean that there is a large barrier between those who understand a particular notation and those who do not. It ends up seeming rather mystical, reminiscent of the alchemists and occultists: mathematical notation is full of signs and symbols that people do not use in ordinary life, and most people do not understand them. |
There's one thing that comes through in a lot of this history, by the way: notation, like ordinary language, is a dramatic divider of people. I mean, there's somehow a big division between those who read a particular notation and those who don't. It ends up seeming rather mystical. It's like the alchemists and the occultists: mathematical notation is full of signs and symbols that people don't normally use, and that most people don't understand. |
Computers
Now the question is: can computers be made to understand that notation?
The grammar of ordinary spoken languages has developed over centuries. No doubt many Roman and Greek philosophers and orators paid a great deal of attention to it. And in fact, already around 500 BC, Panini wrote out a remarkably thorough and clear grammar for Sanskrit. Indeed, Panini's grammar was remarkably similar in structure to the Backus-Naur form specifications of production rules for computer languages that are used today. |
Computers
But now the question is: can computers be set up to understand that notation? For ordinary human language, people have been making grammars for ages. Certainly lots of Greek and Roman philosophers and orators talked about them a lot. And in fact, already from around 500 BC, there's a remarkably clean grammar for Sanskrit written by a person called Panini. In fact, Panini's grammar is set up remarkably like the kind of BNF specifications of production rules that we use now for computer languages. And not only have there been grammars for language; in the last century or so, there have been endless scholarly works on proper language usage and so on. But despite all this activity about ordinary language, essentially absolutely nothing has been done for mathematical language and mathematical notation. It's really quite strange.
There have even been mathematicians who've worked on grammars for ordinary language. An early example was John Wallis—who made up Wallis' product formula for pi—who wrote a grammar for English in 1658. Wallis was also the character who started the whole fuss about when one should use "will" and when one should use "shall." In the early 1900s mathematical logicians talked quite a bit about different layers in well-formed mathematical expressions: variables inside functions inside predicates inside functions inside connectives inside quantifiers. But not really about what this meant for the notation for the expressions.
Things got a little more definite in the 1950s, when Chomsky and Backus, essentially independently, invented the idea of context-free languages. The idea came out of work on production systems in mathematical logic, particularly by Emil Post in the 1920s. But, curiously, both Chomsky and Backus came up with the same basic idea in the 1950s. Backus applied it to computer languages: first Fortran, then ALGOL. And he certainly noticed that algebraic expressions could be represented by context-free grammars. Chomsky applied the idea to ordinary human language. And he pointed out that to some approximation ordinary human languages can be represented by context-free grammars too. Of course, linguists—including Chomsky—have spent years showing how much that isn't really true. But the thing that I always find remarkable, and scientifically the most important, is that to a first approximation it is true that ordinary human languages are context-free.
So Chomsky studied ordinary language, and Backus studied things like ALGOL. But neither seems to have looked at more advanced kinds of math than simple algebraic language. And, so far as I can tell, nor has almost anyone else since then. But if you want to see if you can interpret mathematical notation, you have to know what kind of grammar it uses. |
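As a rough illustration of what a context-free grammar for simple algebraic expressions looks like, here is a minimal sketch in Python (the productions and all the function names are my own illustrative choices, not anything from the talk): a few BNF-style rules, and a recursive-descent parser that follows them directly.

```python
import re

# BNF-style productions for simple algebraic expressions (illustrative only):
#   expr   ::= term   (("+" | "-") term)*
#   term   ::= factor (("*" | "/") factor)*
#   factor ::= number | name | "(" expr ")"

TOKEN = re.compile(r"\s*(?:(\d+)|([A-Za-z]\w*)|(.))")

def tokenize(text):
    """Split the input into numbers, names and single-character operators."""
    return [num or name or op for num, name, op in TOKEN.findall(text)]

def parse(tokens):
    """Recursive-descent parser: one function per production rule."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def expr():
        node = term()
        while peek() in ("+", "-"):
            node = (take(), node, term())
        return node

    def term():
        node = factor()
        while peek() in ("*", "/"):
            node = (take(), node, factor())
        return node

    def factor():
        tok = take()
        if tok == "(":
            node = expr()
            take()  # consume the closing ")"
            return node
        return tok

    return expr()

# Each level of nesting in the result corresponds to one production being used:
print(parse(tokenize("x + 2*(y - 3)")))  # ('+', 'x', ('*', '2', ('-', 'y', '3')))
```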
And this possibility is quite exciting. Because for legacy text in ordinary natural language there is no way to convert it into something meaningful. But with mathematics there is such a possibility. |
It's kind of exciting that it's possible to do this. Because if one was thinking of legacy ordinary language text, there's just no way one can expect to convert it to something understandable. But with math there is. |
...a language, or a notation, limits the space of what we are able to think about. |
...language—or a notation—can limit what one manages to think about. So what does that mean for mathematics? Well, I haven't completely solved the problem. But what I've found, at least in many cases, is that there are pictorial or graphical representations that really work much better than any ordinary language-like notation. Actually, bringing us almost back to the beginning of this talk, it's a bit like what's happened for thousands of years in geometry. In geometry we know how to say things with diagrams. That's been done since the Babylonians. And a little more than a hundred years ago, it became clear how to formulate geometrical questions in algebraic terms. But actually we still don't know a clean simple way to represent things like geometrical diagrams in a kind of language-like notation. And my guess is that actually of all the math-like stuff out there, only a comparatively small fraction can actually be represented well with language-like notation. But we as humans really only grok easily this language-like notation. So the things that can be represented that way are the things we tend to study. Of course, those may not be the things that happen to be relevant in nature and the universe. But that'd be a whole other talk, or much more. So I'd better stop here. |
In the study of ordinary natural language, various empirical historical laws have been discovered. An example is Grimm's Law, which describes shifts of consonants in the Indo-European languages. I have been curious whether similar empirical historical laws can be found for mathematical notation. |
In the study of ordinary natural language there are various empirical historical laws that have been discovered. An example is Grimm's Law, which describes general historical shifts in consonants in Indo-European languages. I have been curious whether empirical historical laws can be found for mathematical notation. |