Section I.IV: Sentences#
A Sentence is a Limitation of Words over a Phrase in the Language’s Lexicon for any value of \(n \geq 1\).
Warning
This statement should not be interpreted as a schema for generating grammatical sentences. In general, Limitations are not grammatical. However, all grammatical sentences are Limitations.
In other words, this statement should be interpreted as a necessary syntactic precondition a Sentence must satisfy before it may be assigned semantic content.
The Corpus is the aggregate of all Sentences.
Note
The value of \(n\) in the preceding equation will be further specified after several definitions and theorems. It will be shown to be directly and necessarily related to the Word structure of \(\zeta\).
The full semantic hierarchy has now been formalized. The hierarchy is summarized as follows,
Strings: \(\iota, \alpha, \zeta\)
Sets: \(\Sigma, L, C\)
Character Membership: \(\iota \in \Sigma\)
Word Membership: \(\alpha \in L\)
Sentence Membership: \(\zeta \in C\)
These observations can be rendered into English,
All Characters, Words and Sentences are Strings.
The Alphabet, Languages and Corpus are sets of Strings.
All non-Empty Characters belong to an Alphabet.
All Words belong to the Language.
All Sentences belong to the Corpus.
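As a concrete illustration, the membership relations above can be sketched in Python with ordinary sets and strings. All particular names and contents below are illustrative, not part of the formal system:

```python
# A miniature, illustrative model of the semantic hierarchy.
alphabet = set("abcdefghijklmnopqrstuvwxyz")        # the Alphabet, Sigma
language = {"we", "are", "the", "stuffed", "men"}   # a Language, L
corpus = {"we are the stuffed men"}                 # the Corpus, C

# Every Character of every Word belongs to the Alphabet.
assert all(c in alphabet for word in language for c in word)
# Every Word of every Sentence belongs to the Language.
assert all(word in language for z in corpus for word in z.split(" "))
```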
Word Length#
Important
The Induction clause of Word Length relies on the Discovery Axiom and the Measurable Axiom to ensure, for any String \(u \in L\), that \(\neg(\sigma \subset_s u)\) and \(u \neq \varepsilon\).
Important
While Word Length will be primarily used on \(\zeta \in C\), it is important to note it is defined over all \(s \in S\). In other words, Word Length is a property of Strings, as can be seen in the example, “blargafaful buttons”.
Example Let \(ᚠ = \text{truth is beauty}\).
Let \(u_1 = \text{truth}\) and \(v_1 = \text{is beauty}\). Then \(u_1 \in L_{\text{english}}\) and \(ᚠ = (u_1)(\sigma)(v_1)\). Apply the Induction clause of Word Length,
Let \(u_2 = \text{is}\) and \(v_2 = \text{beauty}\).
Important
A selection of \(u_2 = \text{i}\) or \(u_2 = \text{is be}\) would not satisfy the condition \(s = (u)(\sigma)(v)\) in the Induction clause, which requires \(u\) and \(v\) to be delimited with \(\sigma\).
Then \(u_2 \in L_{\text{english}}\) and \(v_1 = (u_2)(\sigma)(v_2)\). Apply the Induction clause of Word Length,
Finally, note \(v_2 \in L_{\text{english}}\) and apply the Basis clause to \(v_2\),
Putting the recursion together,
∎
Example Let \(ᚠ = \text{palindromes vorpal semiordinlap}\).
Let \(u_1 = \text{palindromes}\) and \(v_1 = \text{vorpal semiordinlap}\). Then \(u_1 \in L_{\text{english}}\) and \(ᚠ = (u_1)(\sigma)(v_1)\). Apply the Induction clause of Word Length,
Let \(u_2 = \text{vorpal}\) and \(v_2 = \text{semiordinlap}\). Then \(u_2 \notin L_{\text{english}}\) and \(v_1 = (u_2)(\sigma)(v_2)\). Apply the Induction clause of Word Length,
Finally, note \(v_2 \in L_{\text{english}}\) and apply the Basis clause to \(v_2\),
Putting the recursion together,
∎
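The recursion traced in these two examples can be sketched in Python. Tokenization by the Basis and Induction clauses is approximated with ordinary string splitting, and the convention that a non-Word token contributes nothing to the count is an assumption inferred from the second example:

```python
def word_length(s, lexicon, delim=" "):
    """Word Length of a String relative to a lexicon (a sketch)."""
    if delim not in s:
        # Basis clause: a lone token counts only if it is a Word.
        return 1 if s in lexicon else 0
    u, v = s.split(delim, 1)  # u contains no delimiter, as the clause requires
    # Induction clause: count u if it is a Word, then recurse on v.
    return (1 if u in lexicon else 0) + word_length(v, lexicon, delim)

lexicon = {"truth", "is", "beauty", "palindromes", "semiordinlap"}
assert word_length("truth is beauty", lexicon) == 3
assert word_length("palindromes vorpal semiordinlap", lexicon) == 2
```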
Important
As these examples demonstrate, the Word Length of a String is always relative to a given Language. A subscript will be used to denote when a Word Length is taken relative to a particular Language,
Whereas,
Example Let \(L = L_{\text{english}}\). Let \(ᚠ = \text{observe how system into system runs}\). Consider \(ᚠ[[3]]\).
Let \(u_1 = \text{observe}\) and \(v_1 = \text{how system into system runs}\). Then \(ᚠ = (u_1)(\sigma)(v_1)\), \(u_1 \in L\) and \(3 > 1\). Therefore, by the Induction clause of Word Indices,
At the next step, let \(u_2 = \text{how}\) and \(v_2 = \text{system into system runs}\). Then \(v_1 = (u_2)(\sigma)(v_2)\), \(u_2 \in L\) and \(2 > 1\),
At the next step, let \(u_3 = \text{system}\) and \(v_3 = \text{into system runs}\). Then \(v_2 = (u_3)(\sigma)(v_3)\), \(u_3 \in L\) but \(1 = 1\), therefore,
∎
Example Let \(ᚠ = \text{the gobberwarts with my blurglecruncheon}\). Consider \(ᚠ[[2]]\).
Let \(u_1 = \text{the}\) and \(v_1 = \text{gobberwarts with my blurglecruncheon}\). Then \(ᚠ = (u_1)(\sigma)(v_1)\), \(u_1 \in L\) and \(2 > 1\). Therefore, by the Induction clause of Word Indices,
At the next step, let \(u_2 = \text{gobberwarts}\) and \(v_2 = \text{with my blurglecruncheon}\). Then \(v_1 = (u_2)(\sigma)(v_2)\) but \(u_2 \notin L\) and \(1 = 1\), so by the first condition of the Induction clause,
At the next step, let \(u_3 = \text{with}\) and \(v_3 = \text{my blurglecruncheon}\). Then \(v_2 = (u_3)(\sigma)(v_3)\), \(u_3 \in L\) and \(1 = 1\). So, by the second condition of the Induction clause,
∎
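The two traces above can be mirrored in Python. The handling of non-Word tokens (skip without decrementing the index) follows the first condition of the Induction clause as used in the second example; returning `None` on a failed lookup is an assumption of this sketch:

```python
def word_index(s, n, lexicon, delim=" "):
    """The n-th Word of a String relative to a lexicon (a sketch)."""
    if delim not in s:
        # Basis clause (assumed): a lone token is the 1st Word, if a Word at all.
        return s if n == 1 and s in lexicon else None
    u, v = s.split(delim, 1)
    if u not in lexicon:
        return word_index(v, n, lexicon, delim)      # skip non-Words
    if n > 1:
        return word_index(v, n - 1, lexicon, delim)  # one Word consumed
    return u                                         # n == 1: the Word is found

lexicon = {"observe", "how", "system", "into", "runs", "the", "with", "my"}
assert word_index("observe how system into system runs", 3, lexicon) == "system"
assert word_index("the gobberwarts with my blurglecruncheon", 2, lexicon) == "with"
```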
Note
The next theorems will not be required for the final postulates, but they are given to indicate the type of results that may be established regarding the concept of Word Length. For the curious reader, the details can be found in Appendix I.II, Omitted Proofs.
Note
Theorem 1.4.1 and Theorem 1.4.2 demonstrate Word Length is fundamentally different from String Length with respect to the operation of concatenation. In Theorem 1.2.1, it was shown String Length sums over concatenation. Theorem 1.4.1 shows the corresponding property is not necessarily true for Word Length. This is an artifact of the potential destruction of semantic content that may occur upon concatenation. (The edge case of compound Words (e.g. daylight) makes the proof of Theorem 1.4.2 particularly interesting.)
Indeed, most algebraic properties of String Length do not extend up the semantic hierarchy. For example, it is easily seen, in general,
However, there is a special class of sentences where this property does hold, as will be seen in Theorem 1.4.12.
Theorems#
The first theorem demonstrates the relationship between a Limitation and Word Length that was pointed out in the introduction of this subsection.
Proof Let \(\zeta \in C\). By definition of a Sentence, there exists \(n \in \mathbb{N}\) and \(p \in L_n\) such that
So that, applying the definition of Limitations,
By the definition of Word Length, with \(u = \alpha_1\) and \(v = \Pi_{i=1}^{n-1} \alpha_{i+1}\)
Since there are \(n\) words in \(p\), it follows the definition of Word Length will be applied \(n\) times, resulting in,
Now, apply the definition of Word Indices to (1). The Basis clause yields,
Using the Induction clause with \(u = \alpha_1\) and \(v = \Pi_{i=1}^{n-1} \alpha_{i+1}\),
Where the last equality follows from the Basis clause applied to \(v\). Continuing this process, it is found,
Therefore, since the Words in the Sentence correspond index by index to the Words in the Phrase, and the Word Length of the Sentence is equal to the Word Length of the Phrase, it follows,
∎
Note
The next theorem can be seen as a specialization of Theorem 1.2.5 for the subdomain of the Corpus.
Proof Let \(\zeta \in C\). Let \(n = \Lambda(\zeta)\). Let \(s\),
Consider \(s^{-1}\),
From String Inversion and the fact \(l(\sigma) = 1\), it follows \(\sigma^{-1} = \sigma\). Using this fact, the application of Theorem 1.2.10 \(n\) times yields,
Reindex the terms on the RHS to match Limitation with \(j = n - i + 1\). Then, as \(i\) goes from \(1 \to n\), \(j\) goes \(n \to 1\) and vice versa,
Combining (1) and (2) and generalizing,
∎
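The content of this theorem is easy to spot-check computationally: since \(\sigma^{-1} = \sigma\), inverting a delimited String reverses the order of its tokens and inverts each token. A minimal sketch, with an illustrative sample String:

```python
def invert(s):
    """String Inversion: reverse the Character order."""
    return s[::-1]

z = "ton saw trap"
# Inverting the whole String equals inverting each token, in reverse order.
expected = " ".join(invert(w) for w in reversed(z.split(" ")))
assert invert(z) == expected
```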
Proof Let \(s, t \in D\). That is, assume, for some \(n, m \in \mathbb{N}\),
where \(n = \Lambda(s)\) and \(m = \Lambda(t)\).
The proof proceeds by induction on \(n\).
Basis: Assume \(n = 1\).
Then, by the Basis clause of Limitations, \(s = \alpha\) for some \(\alpha \in L\). By the Discovery Axiom, \(\neg(\sigma \subset_s \alpha)\).
Consider \(u = (\alpha)(\sigma)(t)\). By the Basis clause of Word Length,
Induction: Assume for any \(u \in D\) with \(\Lambda(u) = n\),
Let \(s \in D\) such that \(\Lambda(s) = n + 1\). By the Induction clauses of Dialects and Limitations,
By the Induction clause of Word Length,
From this and \(\Lambda(s) = n + 1\), it is concluded \(\Lambda(v) = n\), and therefore \(v\) satisfies the inductive hypothesis.
Consider \(\Lambda((s)(\sigma)(t))\).
But from (1), this reduces to,
Therefore, putting everything together, the Induction is complete,
Summarizing and generalizing,
∎
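The additivity just established can be spot-checked with a small Word Length sketch (token counting via string splitting approximates the formal definition; the lexicon is illustrative). Since every token of a Dialect String is a Word, both sides count the same tokens:

```python
def word_length(s, lexicon, delim=" "):
    """Word Length relative to a lexicon (a sketch)."""
    if delim not in s:
        return 1 if s in lexicon else 0
    u, v = s.split(delim, 1)
    return (1 if u in lexicon else 0) + word_length(v, lexicon, delim)

lexicon = {"we", "are", "the", "stuffed", "men"}
s, t = "we are", "the stuffed men"  # both are delimited Strings over the lexicon
# Word Length sums over delimited concatenation of Dialect Strings.
assert word_length(s + " " + t, lexicon) == word_length(s, lexicon) + word_length(t, lexicon)
```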
Important
Theorem 1.4.5 only applies to Strings quantified over the Dialect. If the theorem were quantified over the Corpus, i.e. semantic Sentences, then the inductive hypothesis would fail at the step where the induced String is decomposed,
To see this, note that when a Sentence has its first Word partitioned from it, there is no guarantee the result will also be a semantic Sentence, e.g. “we are the stuffed men” is a Sentence, but “are the stuffed men” is not a Sentence. Therefore, the induction must be carried out over the Dialect.
This may seem a strong restriction, but as the next two theorems establish, this result still applies to the Corpus.
Proof Let \(\zeta \in C\). By definition of a Sentence,
By the definition of a Dialect, \(\zeta \in D\).
Therefore, \(\zeta \in C \implies \zeta \in D\). This is exactly the definition of a subset,
∎
Proof Let \(\zeta, \xi \in C\).
By Theorem 1.4.6, \(C \subseteq D\). By definition of subsets,
Therefore, by Theorem 1.4.5,
∎
Proof Let \(\zeta \in C\). By Theorem 1.4.3,
By the definition of Sentences and Canonization Axiom,
By the definition of Canonization,
By the definition of Limitation, \(\Pi\) produces Strings through Concatenation. By Theorem 1.2.6, the Canon is closed over Concatenation. From this, it must be the case \(\zeta \in \mathbb{S}\). Therefore,
This is exactly the definition of subsets,
∎
Classes#
Note
Invertible Words are sometimes called semiordinlaps in other fields of study. However, the term “semiordinlap” will be given a more precise and formal explication with the introduction and definition of Subvertible Sentences in the next section.
Proof Let \(\zeta \in C\).
(\(\rightarrow\)) Assume \(\zeta \in J\). By the definition of Invertible Sentences,
By Theorem 1.2.9,
By assumption, \(\zeta \in C\), therefore, by the definition of Invertible Sentences,
(\(\leftarrow\)) Assume \({\zeta}^{-1} \in J\), which implies \({\zeta}^{-1} \in C\). By assumption \(\zeta \in C\). Therefore, by the definition of Invertible Sentences,
Summarizing and generalizing,
∎
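The biconditional leans on Theorem 1.2.9, i.e. that Inversion is an involution. With reversal standing in for Inversion, that fact is immediate to verify:

```python
def invert(s):
    """String Inversion: reverse the Character order."""
    return s[::-1]

z = "trap not"
# Inversion is an involution: inverting twice recovers the original String.
assert invert(invert(z)) == z
```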
Proof Let \(\zeta \in J\). By the definition of Invertible Sentences,
By Theorem 1.4.4, this can be written,
where,
By definition of a Sentence and \({\zeta}^{-1} \in C\),
From this, it can be concluded every \({\zeta[[i]]}^{-1}\) in \(p\) must belong to \(L\), and each of those Words has an inverse that is also in \(L\).
By the definition of Invertible Words, the inverse of a Word belongs to a Language if and only if the Word is Invertible.
Therefore,
By Theorem 1.3.1,
Generalizing,
∎
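A computational reading of this result, with reversal standing in for Inversion (the lexicon below is illustrative). Note the condition is necessary, not sufficient: Invertibility of a Sentence also requires the inverse to be a grammatical Sentence, which a lexicon check alone cannot establish:

```python
def is_invertible_word(w, lexicon):
    """A Word whose inverse is also a Word of the lexicon."""
    return w in lexicon and w[::-1] in lexicon

def words_all_invertible(z, lexicon, delim=" "):
    """Necessary condition from the theorem: every Word of an
    Invertible Sentence must itself be an Invertible Word."""
    return all(is_invertible_word(w, lexicon) for w in z.split(delim))

lexicon = {"trap", "part", "ton", "not", "men"}
assert words_all_invertible("trap not", lexicon)
assert not words_all_invertible("stuffed men", lexicon)
```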
Proof Let \(\zeta \in J\), let \(n = \Lambda(\zeta)\) and let \(i \in N_n\).
By Theorem 1.4.6 and assumption,
By Theorem 1.3.1,
Consider,
By Theorem 1.4.4,
And by definition of Sentences and Limitations,
Therefore,
By Theorem 1.4.8, \(C \subset \mathbb{S}\). By Theorem 1.3.3, Limitations are unique over the Canon, thus the only way two Limitations that belong to the Corpus can be equal to \(\zeta^{-1}\) is when,
Summarizing and generalizing,
∎
Proof Let \(\zeta \in J\) with \(n = \Lambda(\zeta)\).
By Theorem 1.4.10,
By definition of Invertible Words,
By Theorem 1.4.11,
Note that for \(1 \leq i \leq n\), the index \(n - i + 1\) decreases consecutively from \(n\) to \(1\). Therefore, every Word of the Inverse belongs to the Language and, by the definition of Word Length,
Summarizing and generalizing,
∎
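The conclusion can be checked concretely under the same reversal and token-counting approximations used above (sample lexicon illustrative): inverting an Invertible Sentence preserves its Word Length:

```python
def word_length(s, lexicon, delim=" "):
    """Word Length relative to a lexicon (a sketch)."""
    if delim not in s:
        return 1 if s in lexicon else 0
    u, v = s.split(delim, 1)
    return (1 if u in lexicon else 0) + word_length(v, lexicon, delim)

lexicon = {"trap", "part", "ton", "not"}
z = "trap not"
# The Inverse of an Invertible Sentence has the same Word Length.
assert word_length(z[::-1], lexicon) == word_length(z, lexicon) == 2
```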
Partial Sentences#
Example Let \(ᚠ = \text{form is the possibility of structure}\). Then \(l(ᚠ) = 36\).
Consider \(ᚠ[:4]\). Applying the definition of Partial String, the Left Partial Sentence is given by,
Thus \(ᚠ[:4] = \text{form}\).
Consider \(ᚠ[10:]\). The Right Partial Sentence is given by,
Thus \(ᚠ[10:] = \text{he possibility of structure}\).
∎
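The Partial String notation here is 1-indexed and inclusive, so it translates to Python's 0-indexed, half-open slices with a small offset. A sketch using the example above (the helper names are this sketch's own):

```python
def left_partial(s, i):
    """s[:i] in the document's notation: Characters 1 through i."""
    return s[:i]

def right_partial(s, i):
    """s[i:] in the document's notation: Characters i through l(s)."""
    return s[i - 1:]

f = "form is the possibility of structure"
assert len(f) == 36
assert left_partial(f, 4) == "form"
assert right_partial(f, 10) == "he possibility of structure"
```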
Important
Note that Partial Sentences are not part of the Corpus.
Proof Let \(s \in S\). Let \(n = l(s)\). Let \(i \in N_n\). The proof proceeds by induction on \(i\).
Basis: \(i = 1\)
By definition of Left Partial Strings,
Since \(s[i]\) is a single Character, it follows from the definition of String Length,
Induction: Assume for a fixed \(1 < i \leq n - 1\), \(l(s[:i]) = i\).
Since \(i\) is at most \(n - 1\), \(i + 1\) is at most \(n\). Therefore, \(s[:i+1]\) is defined. By the Induction clause of Left Partial Strings,
By Theorem 1.2.1,
The first term on the RHS is \(i\) by inductive hypothesis and the second term is \(1\) by definition of String Length,
The Induction is thus established for \(i \in N_n\). Summarizing and generalizing,
∎
Proof Let \(s \in S\). Let \(n = l(s)\). Consider \(s[i:]\) with \(i \in N_n\). Let
Then \(j \in N_n\), since \(i = 1 \implies j = n\) and \(i = n \implies j = 1\). The proof proceeds by induction on \(j\).
Basis: \(j = 1\). Then \(i = n\).
Induction: Assume for a fixed \(1 < j \leq n - 1\), \(l(s[i:]) = l(s) - i + 1\).
Now,
From which follows,
Therefore, \(s[i-1:]\) is defined. By the Induction clause of Right Partial Strings,
Therefore,
The first term on the RHS is \(l(s) - i + 1\) by inductive hypothesis,
Rewriting to make the induction obvious,
The induction is established. Summarizing and generalizing,
∎
Note
The proofs of Theorem 1.4.15 and Theorem 1.4.16 are similar to the proofs of Theorem 1.4.13 and Theorem 1.4.14. The proofs are omitted and can be found in Appendix I.II, Omitted Proofs.
Proof Let \(s \in S\) with \(n = l(s)\).
If \(n = 1\), then \(s[i+1:]\) is undefined, so the proof proceeds by induction on String Length, starting at \(l(s) = 2\).
Basis: \(n = 2\). Then, by definition of Character Indices,
By definition of Partial Strings,
Thus,
Induction: Assume, for some fixed \(m\), that \(s = (s[:i])(s[i+1:])\) for all \(s \in S\) with \(l(s) = m\) and all \(i\) for which the Partial Strings are defined,
Let \(t \in S\) such that \(l(t) = m + 1\).
∎
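The decomposition being proved can likewise be checked exhaustively. In Python's 0-indexed slices, the document's \(s = (s[:i])(s[i+1:])\) becomes `s[:i] + s[i:] == s`:

```python
s = "form is the possibility of structure"
for i in range(1, len(s)):
    # Document notation: s = (s[:i])(s[i+1:]).
    assert s[:i] + s[i:] == s
```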