index.json
{
"version": "https://jsonfeed.org/version/1",
"title": "Agente Aleatório",
"home_page_url": "https://rcalsaverini.github.io/blog",
"feed_url": "https://rcalsaverini.github.io/blog/index.json",
"description": "Constantemente aumentando a entropia do universo.",
"items": [
{
"id": "https://rcalsaverini.github.io/blog/preserve-harmonies-swapping-3rds-7ths.html",
"url": "https://rcalsaverini.github.io/blog/preserve-harmonies-swapping-3rds-7ths.html",
"title": "Does swapping 3rds and 7ths preserve harmonic feel?",
"content_html": "<p>The ii-V-I chord changes are arguably the most basic stable of jazz harmony, and exploring what's so compelling about this chord is always a good way to diving into what's harmony in jazz is all about. Recently the youtuber <a href=\"https://www.youtube.com/channel/UC4PIiYewI1YGyiZvgNlJNrA\">Charles Cornell</a> published a <a href=\"https://www.youtube.com/watch?v=alsQOE0vuoc\">nice video</a> in his channel exploring how ii-V-I changes in major have a very nice pattern of leading tones involving thirds and sevenths, and how this drives the harmony forward in such a compelling way, by stacking compelling resolutions that continuously release the tension built by moving the chords around.</p>\n<p>To drive the explanation home, Cornell made a strong assertion: most or all of the actual harmonic juice of a chord, at least in the context of jazz harmony is contained in the third and seventh. To drive this home he shows live in the piano how, if you throw away the root and fifth and only play thirds and sevenths, most of the harmonic feel of the ii-V-I changes are retained <sup class=\"footnote-ref\"><a href=\"#fn-footnote1\" id=\"fnref-footnote1\" data-footnote-ref>1</a></sup>.</p>\n<p>Then, further in the video <sup class=\"footnote-ref\"><a href=\"#fn-footnote2\" id=\"fnref-footnote2\" data-footnote-ref>2</a></sup>, Cornell points out that the resolution from ii to V and from V to I work by inverting the resolving the seventh down a half step into the third of the new chord. This also makes the third of the previous chord, that's kept as a kind of pedal point in the new chord, to become the new seventh. In summary: thirds becomes sevenths and sevenths resolve into thirds. And that's the basic mechanism of the most widespread harmonic language of jazz.</p>\n<h2><a href=\"#thirds-and-sevenths\" aria-hidden=\"true\" class=\"anchor\" id=\"thirds-and-sevenths\"></a>Thirds and Sevenths</h2>\n<p>That's not something super new and surprising but I think Cornell makes a good job in simplifying it and making it very pedagogic. But his explanation kind of lead me to ask myself some questions. In particular, forgetting about the resolutions and leading tones a bit, if most of the taste and color of a chord is contained in the thirds and sevenths, what happens if I change the other notes around it?</p>\n<p>I could start by taking a chord, stripping the root and fifth, and tacking on two other random notes there and see what happens. Right? Let's try.</p>\n<p>The first measure in the following embedded sheet is a ii-V-I in C major. The chords are, of course, Dmin7 ➞ G7 ➞ Cmaj7. On the second measure I kept all thirds and sevenths constant and randomly moved the other notes. Of course, after randomly moving notes, the new chord names are kind of non-sensical, but the best I can do to name them is this<sup class=\"footnote-ref\"><a href=\"#fn-footnotechordnames\" id=\"fnref-footnotechordnames\" data-footnote-ref>3</a></sup>: G7(add4) ➞ Csus4maj7 ➞ Asus2sus4</p>\n<p>There's some weirdness because of the random bass notes, but the functions are still kind of recognizable, aren't they? 
They still sound like tension and release.</p>\n<center>\n <iframe\n src=\"https://flat.io/embed/622e575518f76000120ed90e?sharingKey=a033810480c2d5941c8061e90ecddf15a5d540e19ef724590df11e2034e187e4fe2dd9d7b6a93e87a09b5adaef4f8c638108e361fb88370e2db4e5f7c1dab37e\"\n height=\"275\"\n width=\"50%\"\n frameBorder=\"0\">\n </iframe>\n</center>\n<h2><a href=\"#chord-transformations\" aria-hidden=\"true\" class=\"anchor\" id=\"chord-transformations\"></a>Chord transformations</h2>\n<p>I was asking myself what more systematic chord transformations we could do that would preserve this "3rd + 7th" structure, and the best I could do was to state it as follows: we want to find reasonable chords (whatever that means) for which the former third and seventh are still third and seventh in the new chord in some meaningful way. Let's investigate this.</p>\n<p>There are a few transformations we can do that preserve the same set of third and seventh:</p>\n<ul>\n<li>\n<p>We could change the "quality" of the third and seventh (major to minor, etc.) by moving the other pitches. For example, we could turn a Cmaj7 (C E G B) into a C♯min7 (C♯ E G♯ B). That would turn the E from major to minor third and the B from major to minor seventh. That's kind of boring though, right? It works, but it doesn't generate anything incredibly new.</p>\n<p>This would turn our original ii-V-I Dmin7 ➞ G7 ➞ Cmaj7 into C♯maj7 ➞ A♭dim7 ➞ C♯min7. You can hear how that sounds in the sheet below. This looks interesting. The new chords certainly feel like they work together and convey similar harmonic movement. But it also feels a bit like simple modal borrowing instead of any super weird transformation. It merits more analysis, though. Perhaps I'll come back to this later.</p>\n<center>\n <iframe\n src=\"https://flat.io/embed/622e7e4718f760001211887d?sharingKey=b8882a97e994188a69b7e00faaac06bb1789100935284cbf63335b2784c4747959c90b7009ca43cd393f5d8f43898366c4e19c0eeb2ead105d6e8fa420f38db1\"\n height=\"275\"\n width=\"50%\"\n frameBorder=\"0\">\n </iframe>\n</center>\n</li>\n<li>\n<p>We could also use the same two notes in other functions of the chord (the third as root, the seventh as fifth, etc.), but that would kind of defeat the purpose of using the third and seventh as the core of the harmonic taste and color of a chord. So I won't consider this here.</p>\n</li>\n<li>\n<p>We could "swap" the third with the seventh. For that to be possible, the interval between the original third and seventh must be such that, if we invert that interval, we could still have a "plausible third" (minor or major, or perhaps a sus2/sus4 chord at most) and a "plausible seventh" (minor or major, or perhaps a major sixth). <sup class=\"footnote-ref\"><a href=\"#fn-footnote3\" id=\"fnref-footnote3\" data-footnote-ref>4</a></sup></p>\n</li>\n</ul>\n<p>Let's explore that last idea further. 
Here's a table with all "plausible thirds" and "plausible sevenths" with the respect intervals between them<sup class=\"footnote-ref\"><a href=\"#fn-footnote4\" id=\"fnref-footnote4\" data-footnote-ref>5</a></sup>:</p>\n<table>\n<thead>\n<tr>\n<th align=\"center\"></th>\n<th align=\"center\">M2 (sus2)</th>\n<th align=\"center\">m3 (minor)</th>\n<th align=\"center\">M3 (major)</th>\n<th align=\"center\">P4 (sus4)</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td align=\"center\">M7</td>\n<td align=\"center\">M6</td>\n<td align=\"center\">m6</td>\n<td align=\"center\">P5</td>\n<td align=\"center\">TT</td>\n</tr>\n<tr>\n<td align=\"center\">m7</td>\n<td align=\"center\">m6</td>\n<td align=\"center\">P5</td>\n<td align=\"center\">TT</td>\n<td align=\"center\">P4</td>\n</tr>\n<tr>\n<td align=\"center\">M6</td>\n<td align=\"center\">P5</td>\n<td align=\"center\">TT</td>\n<td align=\"center\">P4</td>\n<td align=\"center\">M3</td>\n</tr>\n</tbody>\n</table>\n<p>After inversion:</p>\n<table>\n<thead>\n<tr>\n<th align=\"center\"></th>\n<th align=\"center\">M2 (sus2)</th>\n<th align=\"center\">m3 (minor)</th>\n<th align=\"center\">M3 (major)</th>\n<th align=\"center\">P4 (sus4)</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td align=\"center\">M7</td>\n<td align=\"center\">m3</td>\n<td align=\"center\">M3</td>\n<td align=\"center\">P4</td>\n<td align=\"center\">TT</td>\n</tr>\n<tr>\n<td align=\"center\">m7</td>\n<td align=\"center\">M3</td>\n<td align=\"center\">P4</td>\n<td align=\"center\">TT</td>\n<td align=\"center\">P5</td>\n</tr>\n<tr>\n<td align=\"center\">M6</td>\n<td align=\"center\">P4</td>\n<td align=\"center\">TT</td>\n<td align=\"center\">P5</td>\n<td align=\"center\">m6</td>\n</tr>\n</tbody>\n</table>\n<p>Comparing those tables it's clear that: all "plausible" pair of "third and seventh" are between M3 and M6 in distance. So, only inversions that will be in this range can be rationalized. So X6sus4 and Xmaj7sus2 chords are excluded (thank god, because those are some extra silly names for chords). But all other chords in the that table can be transformed into something "plausible".</p>\n<p>Let's try to understand the mechanism. Consider the Cmaj7 chord. It's major third is an E and it's major seventh is a B (an interval of a perfect fifth). After inverting their roles, we would need a chord in which B functions as a third and E as a seventh. This would imply an inverted interval of a perfect fourth between them. There are only two options of that interval between a 3rd and a 7th: M3 to M6 – a X6 chord – and P4 to m7 – a X7sus4" chord. To find those chords, we need to find which roots would have an B as major third or perfect fourth. It turns out to be G and F♯ respectively. So, finally the possible transformations are:</p>\n<ul>\n<li>turn Cmaj7 into G6</li>\n<li>turn Cmaj7 into F♯7sus4</li>\n</ul>\n<p>Now that's quite interesting.</p>\n<p>Below I built a table with all transformations from which we can lift a few new transformed ii-V-I changes to listen and see what they sound like. One I think is particularly compelling is the following:</p>\n<ul>\n<li>Dmin7 becomes A♭6</li>\n<li>G7 becomes Dmin6</li>\n<li>Cmaj7 becomes G6</li>\n</ul>\n<p>What I find so compeling is that there are two major 6th chords doing completely different jobs in this progression. 
Listen to this:</p>\n<center>\n <iframe\n src=\"https://flat.io/embed/6230e77b4382750012abd0f4?sharingKey=8fba0d961681a045887174b3ac4de922ccbe02162baf318a99edc883a11b2ff9d6f5e4b99e664ff349a056a511e63b74d7dd1f060b9b88d58f97ccfa31d06a0c\"\n height=\"275\"\n width=\"50%\"\n frameBorder=\"0\">\n </iframe>\n</center>\n<h2><a href=\"#table-with-all-possible-transformations\" aria-hidden=\"true\" class=\"anchor\" id=\"table-with-all-possible-transformations\"></a>Table with all possible transformations</h2>\n<p>Let's try to build a table for the transformations.</p>\n<p>All chords that can be transformed are all combinations in the 3rds and 7ths table above except the ones excluded above. So (we're going to use C as root)<sup class=\"footnote-ref\"><a href=\"#fn-footnotesus46\" id=\"fnref-footnotesus46\" data-footnote-ref>6</a></sup>:</p>\n<table>\n<thead>\n<tr>\n<th></th>\n<th>M2</th>\n<th>m3</th>\n<th>M3</th>\n<th>P4</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>M7</td>\n<td>-</td>\n<td>Cminmaj7</td>\n<td>Cmaj7</td>\n<td>Cmaj7sus4</td>\n</tr>\n<tr>\n<td>m7</td>\n<td>C7sus2</td>\n<td>Cmin7</td>\n<td>C7</td>\n<td>C7sus4</td>\n</tr>\n<tr>\n<td>M6</td>\n<td>C6sus2</td>\n<td>Cmin6</td>\n<td>C6</td>\n<td>-</td>\n</tr>\n</tbody>\n</table>\n<p>The logic to build all tranformations is the following:</p>\n<ul>\n<li>Start from the original chord.</li>\n<li>Find out the interval between it's 3rd and 7th.</li>\n<li>Invert this interval and find all pair of 3rd and 7th compatible with the new inverted interval.</li>\n<li>Choose one of those pairs.</li>\n<li>Assign the pitch of the old 7th to the new 3rd, and find what must be the root such that this pitch is the 3rd.</li>\n<li>Add the 5th from that root and build the chord.</li>\n</ul>\n<p>Here are the results <sup class=\"footnote-ref\"><a href=\"#fn-footnotelazy\" id=\"fnref-footnotelazy\" data-footnote-ref>7</a></sup>.</p>\n<table>\n<thead>\n<tr>\n<th>old (3,7)</th>\n<th>old chord</th>\n<th>old interval</th>\n<th>new interval</th>\n<th>new (3,7)</th>\n<th>new root</th>\n<th>new chord</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>(M3, M7) = (E, B)</td>\n<td>Cmaj7</td>\n<td>P5</td>\n<td>P4</td>\n<td>(M3, M6) = (B, E)</td>\n<td>G</td>\n<td>G6</td>\n</tr>\n<tr>\n<td>(M3, M7) = (E, B)</td>\n<td>Cmaj7</td>\n<td>P5</td>\n<td>P4</td>\n<td>(P4, m7) = (B, E)</td>\n<td>F♯</td>\n<td>F♯7sus4</td>\n</tr>\n<tr>\n<td>(M3, m7) = (E, Bb)</td>\n<td>C7</td>\n<td>TT</td>\n<td>TT</td>\n<td>(P4, M7) = (Bb, E)</td>\n<td>F</td>\n<td>Fmaj7sus4</td>\n</tr>\n<tr>\n<td>(M3, m7) = (E, Bb)</td>\n<td>C7</td>\n<td>TT</td>\n<td>TT</td>\n<td>(m3, M6) = (Bb, E)</td>\n<td>G</td>\n<td>Gmin6</td>\n</tr>\n<tr>\n<td>(M3, M6) = (E, A)</td>\n<td>C6</td>\n<td>P4</td>\n<td>P5</td>\n<td>(M3, M7) = (A, E)</td>\n<td>F</td>\n<td>Fmaj7</td>\n</tr>\n<tr>\n<td>(M3, M6) = (E, A)</td>\n<td>C6</td>\n<td>P4</td>\n<td>P5</td>\n<td>(m3, m7) = (A, E)</td>\n<td>F♯</td>\n<td>F♯min7</td>\n</tr>\n<tr>\n<td>(M3, M6) = (E, A)</td>\n<td>C6</td>\n<td>P4</td>\n<td>P5</td>\n<td>(M2, M6) = (A, E)</td>\n<td>G</td>\n<td>G6sus2</td>\n</tr>\n<tr>\n<td>(m3, M7) = (Eb, B)</td>\n<td>Cminmaj7</td>\n<td>m6</td>\n<td>M3</td>\n<td>(P4, M6) = (B, Eb)</td>\n<td>F♯</td>\n<td>F♯6sus4</td>\n</tr>\n<tr>\n<td>(m3, m7) = (Eb, Bb)</td>\n<td>Cmin7</td>\n<td>P5</td>\n<td>P4</td>\n<td>(P4, M7) = (Bb, E)</td>\n<td>F</td>\n<td>Fmaj7sus4</td>\n</tr>\n<tr>\n<td>(m3, m7) = (Eb, Bb)</td>\n<td>Cmin7</td>\n<td>P5</td>\n<td>P4</td>\n<td>(M3, M6) = (Bb, E)</td>\n<td>F♯</td>\n<td>F♯6</td>\n</tr>\n<tr>\n<td>(m3, M6) = (Eb, 
A)</td>\n<td>Cmin6</td>\n<td>TT</td>\n<td>TT</td>\n<td>(P4, M7) = (A, Eb)</td>\n<td>E</td>\n<td>Emaj7sus4</td>\n</tr>\n<tr>\n<td>(m3, M6) = (Eb, A)</td>\n<td>Cmin6</td>\n<td>TT</td>\n<td>TT</td>\n<td>(M3, m7) = (A, Eb)</td>\n<td>F</td>\n<td>F7</td>\n</tr>\n<tr>\n<td>(m3, M6) = (Eb, A)</td>\n<td>Cmin6</td>\n<td>TT</td>\n<td>TT</td>\n<td>(m3, M6) = (A, Eb)</td>\n<td>F♯</td>\n<td>F♯min6</td>\n</tr>\n</tbody>\n</table>\n<h3><a href=\"#footnotes\" aria-hidden=\"true\" class=\"anchor\" id=\"footnotes\"></a>Footnotes</h3>\n<table>\n<thead>\n<tr>\n<th align=\"center\">Semitones</th>\n<th align=\"center\">Name</th>\n<th align=\"center\">Symbol</th>\n<th align=\"center\">Inversion</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td align=\"center\">0</td>\n<td align=\"center\">unison</td>\n<td align=\"center\">–</td>\n<td align=\"center\">8va</td>\n</tr>\n<tr>\n<td align=\"center\">1</td>\n<td align=\"center\">minor second</td>\n<td align=\"center\">m2</td>\n<td align=\"center\">M7</td>\n</tr>\n<tr>\n<td align=\"center\">2</td>\n<td align=\"center\">major second</td>\n<td align=\"center\">M2</td>\n<td align=\"center\">m7</td>\n</tr>\n<tr>\n<td align=\"center\">3</td>\n<td align=\"center\">minor third</td>\n<td align=\"center\">m3</td>\n<td align=\"center\">M6</td>\n</tr>\n<tr>\n<td align=\"center\">4</td>\n<td align=\"center\">major third</td>\n<td align=\"center\">M3</td>\n<td align=\"center\">m6</td>\n</tr>\n<tr>\n<td align=\"center\">5</td>\n<td align=\"center\">perfect fourth</td>\n<td align=\"center\">P4</td>\n<td align=\"center\">P5</td>\n</tr>\n<tr>\n<td align=\"center\">6</td>\n<td align=\"center\">tritone</td>\n<td align=\"center\">TT</td>\n<td align=\"center\">TT</td>\n</tr>\n<tr>\n<td align=\"center\">7</td>\n<td align=\"center\">perfect fifth</td>\n<td align=\"center\">P5</td>\n<td align=\"center\">P4</td>\n</tr>\n<tr>\n<td align=\"center\">8</td>\n<td align=\"center\">minor sixth</td>\n<td align=\"center\">m6</td>\n<td align=\"center\">M3</td>\n</tr>\n<tr>\n<td align=\"center\">9</td>\n<td align=\"center\">major sixth</td>\n<td align=\"center\">M6</td>\n<td align=\"center\">m3</td>\n</tr>\n<tr>\n<td align=\"center\">10</td>\n<td align=\"center\">minor seventh</td>\n<td align=\"center\">m7</td>\n<td align=\"center\">M2</td>\n</tr>\n<tr>\n<td align=\"center\">11</td>\n<td align=\"center\">major seventh</td>\n<td align=\"center\">M7</td>\n<td align=\"center\">m2</td>\n</tr>\n<tr>\n<td align=\"center\">12</td>\n<td align=\"center\">perfect octave</td>\n<td align=\"center\">8va</td>\n<td align=\"center\">–</td>\n</tr>\n</tbody>\n</table>\n<section class=\"footnotes\" data-footnotes>\n<ol>\n<li id=\"fn-footnote1\">\n<p>See the demonstration on the first 1:30 minutes of the video. <a href=\"#fnref-footnote1\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"1\" aria-label=\"Back to reference 1\">↩</a></p>\n</li>\n<li id=\"fn-footnote2\">\n<p>See the explanation beggining in 6:29 minutes into the video. <a href=\"#fnref-footnote2\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"2\" aria-label=\"Back to reference 2\">↩</a></p>\n</li>\n<li id=\"fn-footnotechordnames\">\n<p>I'm also aware some of the chord names are quite ridiculous. I'm just using them to facilitate the identification of the "thirds and sevenths". 
<a href=\"#fnref-footnotechordnames\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"3\" aria-label=\"Back to reference 3\">↩</a></p>\n</li>\n<li id=\"fn-footnote3\">\n<p>To be clear: I'm kind of abusing a lot from the language here by calling describing suspensions as "thirds" and sixths as "sevenths". I'm cognizant of that. <a href=\"#fnref-footnote3\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"4\" aria-label=\"Back to reference 4\">↩</a></p>\n</li>\n<li id=\"fn-footnote4\">\n<p>I'm using the following nomenclature and symbols for intervals: <a href=\"#fnref-footnote4\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"5\" aria-label=\"Back to reference 5\">↩</a></p>\n</li>\n<li id=\"fn-footnotesus46\">\n<p>Yeah, just to be fair, this are more reasonable names for some of those chords. Of course this would depend a lot on the context:</p>\n<table>\n<thead>\n<tr>\n<th>Weird name I'm using</th>\n<th>Tentative better name</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>Cmaj7sus2</td>\n<td>G/C</td>\n</tr>\n<tr>\n<td>Cmaj7sus4</td>\n<td>G7(no 5)/C</td>\n</tr>\n<tr>\n<td>C7sus2</td>\n<td>Gmin/C</td>\n</tr>\n<tr>\n<td>C7sus4</td>\n<td>Gmin7(no 5)/C</td>\n</tr>\n<tr>\n<td>C6sus2</td>\n<td>Gsus2/C</td>\n</tr>\n<tr>\n<td>C6sus4</td>\n<td>Dmin(add 4)/C with an omitted root?</td>\n</tr>\n</tbody>\n</table>\n<a href=\"#fnref-footnotesus46\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"6\" aria-label=\"Back to reference 6\">↩</a>\n</li>\n<li id=\"fn-footnotelazy\">\n<p>I got lazy in the middle of filling this up by hand. Later I'll implement some code to run the algorithm and finish the table. In the meantime, I leave this as an exercise to the reader. <a href=\"#fnref-footnotelazy\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"7\" aria-label=\"Back to reference 7\">↩</a></p>\n</li>\n</ol>\n</section>\n",
"summary": "\"Do chords that swapp thirds and sevenths retain the same \\\"harmonic feel\\\"\"",
"date_published": "2022-03-13T00:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael S. Calsaverini",
"url": "",
"avatar": ""
}
],
"tags": [
"Music Theory",
"Chord transformations"
],
"language": "en"
},
{
"id": "https://rcalsaverini.github.io/blog/neural-nets-for-symbolic-optimization.html",
"url": "https://rcalsaverini.github.io/blog/neural-nets-for-symbolic-optimization.html",
"title": "Could Neural Nets be used for Symbolic Optimization? Maybe.",
"content_html": "<h3><a href=\"#whats-symbolic-optimization\" aria-hidden=\"true\" class=\"anchor\" id=\"whats-symbolic-optimization\"></a>What's Symbolic Optimization</h3>\n<p>A while ago I was entertaining problems in the intersection of symbolic manipulation of expressions and Deep Learning. In particular I was interested in finding "optimal" expressions in some way. So, imagine you have some grammar $G$ that describe a set of expressions, let's call it $\\mathrm{Exp}_G$, and suppose we have a real-valued function that takes an expression and maps into a number $f: \\mathrm{Exp}_G \\to \\mathbb{R}$. The\nproblem I want to discuss is finding expressions that minimize that function:</p>\n<p>$$\ne^{\\star} = \\arg \\min_{e \\in \\mathrm{Exp}_G} f(e)\n$$</p>\n<h3><a href=\"#a-toy-version-of-the-problem\" aria-hidden=\"true\" class=\"anchor\" id=\"a-toy-version-of-the-problem\"></a>A toy version of the problem</h3>\n<p>Let's discuss one particular toy version of this problem. It might look like a silly problem, but it's simple enough to illustrate the concept and non-trivial enough to avoid simple solutions.</p>\n<p>Suppose we have a training set that consist of a sample of strings. We hold the hypothesis that there's an underlying logic or grammar that generated those strings and we want to describe commmon patterns in these strings by finding a regular expression that would match as many of those strings as possible. We'd like to use this regular expression to check if future out-of-sample strings we come across display the same patterns and belong to this group.</p>\n<p>As an example, suppose our training set consists of the names "Rafael", "Gabriel" and "Manoel". Of course we have infinite regular expressions that match all those names. In particular we have the really trivial option <code>/Rafael|Gabriel|Manoel/</code> that would certainly match all strings but would probably fail to match out-of-sample items that do match interesting patterns in the training set – for example "Emanuel". In a way, this looks a lot like overfitting. Another failure mode would be underfitting and choosing the regex <code>/.\\*/</code>, that certainly fits all in-sample items but would also accept all out-of-sample items, even those that don't match any interesting patterns in the training set – for example "Pedro". Probably what we want is something like <code>/.+el/</code> (or perhaps something with a bit more structure if this is not sufficiently constrained and we want to avoid matching the string "chapel").</p>\n<p>So, we have to impose additional requirements and constraints or, in ML parlance, additional regularizations that would make our regular expression regularize better. It could be interesting, for example, to find the regular expression that fits as many items as possible but also fail to match a set of randomly chosen strings (a negative sample). Or to find the regular expression that generates a reasonably sized match set but still match as many items as possible. That can be perhaps stated as finding the regular expression $e$ that minimizes a function:</p>\n<p>$$\nf(e) = − \\sum_{s \\in \\mathrm{D}} \\mathtt{Match}_e(s) + \\lambda \\Lambda(e)\n$$</p>\n<p>Where:</p>\n<ul>\n<li>$\\mathtt{Match}_e(s)$ is 1 if the regular expression e matches the string s and 0 otherwise; and</li>\n<li>$\\Lambda(e)$ is some regularization that penalizes regular expressions that are either too "broad" or too "restricted". 
For example we could penalizing matching random strings and penalize expressions that have a finite (or\ntoo small) match set.</li>\n</ul>\n<h3><a href=\"#what-other-things-we-could-solve-if-we-learn-how-to-do-this\" aria-hidden=\"true\" class=\"anchor\" id=\"what-other-things-we-could-solve-if-we-learn-how-to-do-this\"></a>What other things we could solve if we learn how to do this?</h3>\n<p>This particular problem of symbolic optimization have been bugging me for a while. There seems to be a lot of things we could do if we could solve this problem. Here's a non-exhaustive list.</p>\n<ul>\n<li><strong>Symbolic Regression</strong>: we could be interested in finding the best analytical function to represent a set of data points. In this case the grammar $G$ would be a simple grammar of mathematical expressions containing things nodes (numerical values, input variables), unary operations (like $\\sin$, $\\cos$, $\\exp$, etc), binary operators (like $+$, $−$, $×$, etc) and so on. An expression in this grammar would be a mathematical expression like $y = \\sin(\\pi × x_1) × exp(−4.6 × x_2)$. The function we want to minimize would be, for example, the least-squares error of fitting this expression over a set of points:\n$$f(e) = \\sum_{k=1}^{N} (y_k − \\mathtt{Eval}_e(\\mathbf{x}_k))^2$$\nWhere $\\mathtt{Eval}_e(\\mathbf{x})$ is a procedure that evaluates the expression $e$ at the input $\\mathbf{x}$.</li>\n<li><strong>Learning to Parse</strong>: another intriguing problem is learning how to parse strings in a certain domain. For example, imagine we have a bunch of strings representing addresses in a culture or systems we're not familiar with and we want to learn how to extract parts. How would we go about learning what structures are interesting and what parts deserve a name? We could optimize through grammars that parse those strings into parts that are reusable and build an objective function that rewards reusability of parts. That's a very handwavy example, but it can be made more concrete.</li>\n<li><strong>Enhancing code-completion by leveraging test cases</strong>: there are many ML-based code completion products today (like <a href=\"https://www.tabnine.com/\">Tabnine</a>, <a href=\"https://www.kite.com/\">kite</a>, the new <a href=\"https://devblogs.microsoft.com/visualstudio/the-making-of-intellicodes-first-deep-learning-model-a-research-journey/\">Visual Studio Intellisense</a> and <a href=\"https://copilot.github.com/\">Github's Copilot</a>) but all of them look only into the immediate vicinity of the code and offer suggestions based on the environment of the code. Perhaps one could rank order the code suggestions by how well it passes a set of test cases? That would probably help the programmer narrow down the correct code completing even better. Such a procedure could be framed as symbolic optimization<sup class=\"footnote-ref\"><a href=\"#fn-idris\" id=\"fnref-idris\" data-footnote-ref>1</a></sup>.</li>\n</ul>\n<h3><a href=\"#why-is-it-hard\" aria-hidden=\"true\" class=\"anchor\" id=\"why-is-it-hard\"></a>Why is it hard?</h3>\n<p>Now, this looks interesting but it actually seems very intractable. First of all, the function\n$f$ might be expensive to calculate. Even if it isn't, the space of possible expressions is combinatorially large. Probably factorially large in the size of the grammar G and depth of the expressions being considered. Pure enumeration is unfeasible and techniques like genetic programming don't deal well with high dimensional inputs. 
Something like Simulated Annealing can deal better with big combinatorial spaces, but convergence is slow.</p>\n<p>Wouldn't it be nice if we could somehow turn this into gradient descent over a continuous space of analytical functions? That's the easiest and most tractable type of optimization we know. If we can map somehow, even approximately, a combinatorial optimization into a continous, $\\mathbb{R}^n$ valued optimization, that would the best deal we could get, wouldn't it?</p>\n<h4><a href=\"#how-to-make-it-easier\" aria-hidden=\"true\" class=\"anchor\" id=\"how-to-make-it-easier\"></a>How to make it easier?</h4>\n<p>So, here's an idea that ocurred to me a while ago, when I was reading the Neural Architecture Optimization paper<sup class=\"footnote-ref\"><a href=\"#fn-naopaper\" id=\"fnref-naopaper\" data-footnote-ref>2</a></sup>. Suppose we had a pair of function, let's call it an Encoder/Decoder pair, that map our expressions into and onto real valued vectors. That is, we a pair of functions:</p>\n<p>$$E: \\mathrm{Exp}_G \\to \\mathbb{R}^k$$</p>\n<p>and</p>\n<p>$$D: \\mathbb{R}^k \\to \\mathrm{Exp}_G$$</p>\n<p>such that $D(E(e)) = e$ for all $e \\in \\mathrm{Exp}_G$.</p>\n<p>If we had such a pair of functions, we could, instead of finding the expression $e$ that optimizes $f(e)$, we could try to find the vector $\\mathbf{x} \\in \\mathbb{R}^k$ that optimizes $f(D(\\mathbf{x}))$. Now we're optimizing over a continuous space!!! That's a bit of progress but it's still not perfect: we don't know how to calculate gradients of $f(D(\\mathbf{x}))$ with respect to the vector $\\mathbf{x}$, since there's the intervening combinatorial space of the expressions.</p>\n<h4><a href=\"#neural-networks-to-the-rescue\" aria-hidden=\"true\" class=\"anchor\" id=\"neural-networks-to-the-rescue\"></a>Neural Networks to the rescue.</h4>\n<p>Let's first focus on the Encoder/Decoder pair. What if we could train functions to learn this bijection? Even if it's not an exact bijection, it could be a start, right?</p>\n<p>So, imagine we have a parametric family of functions we could use to find a suitable encoder:</p>\n<p>$$E: \\Theta_E \\times \\mathrm{Exp}_G \\to \\mathbb{R}^k$$</p>\n<p>where $\\Theta_E$ is some space of parameters. We could try to find a function that evaluates how well the vector $E(\\theta_E, e)$ encodes the expression $e \\in \\mathrm{Exp}_G$ and find the parameter $\\theta_E \\in \\Theta_E$ that optimizes that "well-encodability" function. If the parameter space is nice and the "well-encodability" function is continuous this could be done by gradient descent.</p>\n<p>We could do the same with the decoder:</p>\n<p>$$D: \\Theta_D \\times \\mathbb{R}^k \\to \\mathrm{Exp}_G $$</p>\n<p>If we put the encoder and decoder together, one good candidate for a measure of "well-encodability/decodability" is the reconstruction loss<sup class=\"footnote-ref\"><a href=\"#fn-granted\" id=\"fnref-granted\" data-footnote-ref>3</a></sup>:</p>\n<p>$$\n\\ell(\\theta_E, \\theta_D) = \\sum_{k=1}^N d(e_k, D(\\theta_D, E(\\theta_E, e_k)))\n$$</p>\n<p>where $d(e, e')$ is a measure of how distant two expressions are. 
In summary, we would:</p>\n<ul>\n<li>sample a random expression $e$</li>\n<li>use the encoder to generate it's latent encoding vector $\\mathbf{x} = E(\\theta_E, e)$;</li>\n<li>use the decoder to recover the a new expression $e' = D(\\theta_D, \\mathbf{x})$;</li>\n<li>adjust the parameters $\\theta_E$ and $\\theta_D$ to minimize the distance between the expressions $e$ with $e'$.</li>\n</ul>\n<p>Now, after we have learned the encoder and decoder, we could now go back to our original optimization problem. We still want to find the best expression that minimizes $f(e)$ and we want to do that by minimizing $f(D(x, \\theta_D))$. Perhaps we can apply the same trick above? We could try to learn a function:</p>\n<p>$$V: \\Theta_V \\times \\mathbb{R}^k \\to \\mathbb{R} $$</p>\n<p>that given the vector embedding $\\mathbf{x} = E(\\theta_E, e)$ associated to an expression $e$, learns to estimate the value of $f(e)$. One possible way would be to:</p>\n<ul>\n<li>sample a random expression $e$</li>\n<li>use the encoder to recover its associated embedding $\\mathbf{x} = E(\\theta_E, e)$;</li>\n<li>compare the value of $f(e)$ with the value of $V(\\theta_V, \\mathbf{x})$ and adjust the parameter $\\theta_V$ to minimize the error.</li>\n</ul>\n<p>That could be achieved by gradient descending the loss function:</p>\n<p>$$\n\\ell(\\theta_V) = \\sum_{k=1}^{N} (V(\\theta_V, E(\\theta_E, e)) - f(e))^2\n$$</p>\n<p>So, in the end we could simulaneously descend into the three parameters $\\theta_E$, $\\theta_D$ and $\\theta_V$ to minimize the following loss:</p>\n<p>$$\n\\ell(\\theta_E, \\theta_D, \\theta_V) = \\sum_{k=1}^N\\left[d(e_k, D(\\theta_D, E(\\theta_E, e_k))) + (V(\\theta_V, E(\\theta_E, e)) - f(e))^2\\right]\n$$</p>\n<p>Once we learn the best parameters $\\theta_E^\\star$, $\\theta_D^\\star$, $\\theta_V^\\star$, finding the best expression $e$ that maximizes $f(e)$ can be done by:</p>\n<ul>\n<li>find the vector $\\mathbf{x}^\\star$ that maximizes $V(\\theta_V^\\star, \\mathbf{x})$;</li>\n<li>decode it to find the optimal expression associated with it: $e^\\star = D(\\theta_D^\\star, \\mathbf{x}^\\star)$.</li>\n</ul>\n<p>This will probably not be an exact solution, but hopefully one that is good enough.</p>\n<h3><a href=\"#problems-and-conclusions\" aria-hidden=\"true\" class=\"anchor\" id=\"problems-and-conclusions\"></a>Problems and conclusions</h3>\n<p>That's a nice story, but is it actually possible to do? That's a great question. I have no idea. I tried attacking the regex problem with this idea before and I hit several brick walls (that mainly stem from my lack of knowledge in symbolic computation since my focus is 100% in ML and not in Computer Science in general).</p>\n<p>The first difficulty is how to define the parametric family of encoders $E(\\theta, e)$. Those functions must take values in the set of possible expressions $\\mathrm{Exp}_G$ (the set of all regular expressions in our toy example). We could work with a string representation of the expression and use whatever fancy architecture kids are using for processing strings nowadays. But it would be nice to be able to feed the Abstract Syntax Tree (AST) of the expression and process this. I wonder if that would add significant amounts of inductive bias about the problem and help the model to learn. I tried using TreeLSTMs<sup class=\"footnote-ref\"><a href=\"#fn-treelstm\" id=\"fnref-treelstm\" data-footnote-ref>4</a></sup> for this problem the past and the result was interesting. 
So I think this part is really feasible.</p>\n<p>The parametric family of decoders $D(\\theta_D, \\mathbf{x})$ is a much worse problem though. Those functions have to produce syntatically correct expressions in order for the optimization to make sense. Using simple string generation neural nets to produce a string representation of the expression will probably not garantee syntatically valid expressions for every latent vector we input. One possibility is to build it as a generative process that produces valid ASTs recursively. That can work but on my past tries I had a lot of trouble with performance.</p>\n<p>Also randomly generating trees is not easy and naive processes tend to produce trees with a very heavily tailed depth distribution. In my experiments frequently my decode would generate mostly very shallow ASTs but every once in a while it would generate a monstrously deep AST. Manually clipping the maximum depth didn't work very well for me, but I might have done something dumb.</p>\n<p>So, I didn't reach the point where I could test if this idea would work at all, but that looks like a promising idea for someone with actual time to test it.</p>\n<h3><a href=\"#references-and-footnotes\" aria-hidden=\"true\" class=\"anchor\" id=\"references-and-footnotes\"></a>References and Footnotes</h3>\n<section class=\"footnotes\" data-footnotes>\n<ol>\n<li id=\"fn-idris\">\n<p>Alternatively, some other strategies like <a href=\"https://www.idris-lang.org/courses/MGS2018/idris-mgs4.pdf\">type driven development</a> in <a href=\"https://www.idris-lang.org/\">Idris</a> use information about typing and dependently typed proofs to suggest code by case analysis. Given a strong enough type constraint Idris is able to generate the desired code for a function. Perhaps there's a way to marry this with symbolic manipulation of ASTs as well. <a href=\"#fnref-idris\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"1\" aria-label=\"Back to reference 1\">↩</a></p>\n</li>\n<li id=\"fn-naopaper\">\n<p>Luo, R., Tian, F., Qin, T., Chen, E., & Liu, T. Y. (2018). Neural architecture optimization. <a href=\"https://arxiv.org/abs/1808.07233\"><em>arXiv preprint arXiv:1808.07233</em></a>. <a href=\"#fnref-naopaper\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"2\" aria-label=\"Back to reference 2\">↩</a></p>\n</li>\n<li id=\"fn-granted\">\n<p>Granted that we are able to find a distance function between expressions that can provide continuous gradients with respect to $\\theta_D$ and $\\theta_E$. <a href=\"#fnref-granted\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"3\" aria-label=\"Back to reference 3\">↩</a></p>\n</li>\n<li id=\"fn-treelstm\">\n<p>Tai, K. S., Socher, R., & Manning, C. D. (2015). Improved semantic representations from tree-structured long short-term memory networks. <a href=\"https://arxiv.org/abs/1503.00075\"><em>arXiv preprint arXiv:1503.00075</em></a>. <a href=\"#fnref-treelstm\" class=\"footnote-backref\" data-footnote-backref data-footnote-backref-idx=\"4\" aria-label=\"Back to reference 4\">↩</a></p>\n</li>\n</ol>\n</section>\n",
"summary": "",
"date_published": "2021-07-02T00:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael S. Calsaverini",
"url": "",
"avatar": ""
}
],
"tags": [
"Machine Learning",
"Neural Networks",
"Deep Learning",
"Symbolic Optimization"
],
"language": "en"
},
{
"id": "https://rcalsaverini.github.io/blog/transforming-modes.html",
"url": "https://rcalsaverini.github.io/blog/transforming-modes.html",
"title": "Transforming modes",
"content_html": "<h2><a href=\"#intro\" aria-hidden=\"true\" class=\"anchor\" id=\"intro\"></a>Intro</h2>\n<p>I want to continue the discussion in the <a href=\"blog-negative-harmony-inverts-brightness-modes.html\">last post</a> about transformations of modes and sets of tones. I want to explore the following question: what are transformations that make sense and what are they effect on scales, tone sets and modes.</p>\n<h3><a href=\"#the-transformations\" aria-hidden=\"true\" class=\"anchor\" id=\"the-transformations\"></a>The Transformations</h3>\n<p>I want to get back to the circle of fifths and define a few transformations and analise their implications. The transformations that I'm interested in are based in the idea of the negative harmony. As defined in the previous post, the negative harmony in a given key is obtained by reflecting the circle of fifths along an axis midway between the key and the its fifth. For example, here's a diagram of the negative harmony in the key of C:</p>\n<p>{{<tikz>}}\n\\begin{tikzpicture}[auto,node distance=2.5cm, block/.style={color=black, align=center, minimum height=1cm, minimum width=1.5cm}, vec/.style={thick,color=black!50}]</p>\n<pre><code>\\def \\n {12}\n\\def \\radius {4 cm}\n\\def \\margin {7}\n\n\\def \\notes {1,...,12}\n\n\\foreach \\s in \\notes {\n \\def \\start {360 * \\s / \\n + 15}\n \\def \\end {360 * (\\s - 1) / \\n + 15}\n\n \\node[block, circle, color=violet] at ({90-\\start}:\\radius) (a\\s) {\\Large $X_{\\s}$};\n \\draw[vec] ({\\end+\\margin}:\\radius) arc ({\\end+\\margin}:{\\start-\\margin}:\\radius);\n}\n\n\n\\foreach \\x in {1, 2, ..., 6} {\n \\def \\a {\\x * 30 - 30}\n \\draw [thick,dashed,color=violet!50] (\\a:-\\radius-20) -- (\\a:\\radius + 20)\n node[above,sloped,color=purple] at (\\a:\\radius + 20) {$\\mathcal{N}_{\\x}$};\n}\n</code></pre>\n<p>\\end{tikzpicture}</p>\n<p>{{</tikz>}}</p>\n<p>We will identify those transformations by $N_{x}$ where $x \\in {C, D\\flat, \\ldots, B\\flat, B}$</p>\n",
"summary": "",
"date_published": "2021-02-22T00:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael S. Calsaverini",
"url": "",
"avatar": ""
}
],
"tags": [
"Music Theory",
"Negative Harmony",
"Set Theory (Music)",
"Neo-Riemannian Theory",
"Chord transformations"
],
"language": "en"
},
{
"id": "https://rcalsaverini.github.io/blog/negative-harmony-inverts-brightness-modes.html",
"url": "https://rcalsaverini.github.io/blog/negative-harmony-inverts-brightness-modes.html",
"title": "Negative Harmony inverts brightness of modes",
"content_html": "<h2><a href=\"#intro\" aria-hidden=\"true\" class=\"anchor\" id=\"intro\"></a>Intro</h2>\n<p>Recently I've been listening to a <a href=\"https://www.youtube.com/watch?v=SF8CdxcdJgw\">12tone video</a> on YouTube about negative harmony, a concept recently popularized by musician Jacob Collier. On the related links I found a bunch of videos from <a href=\"https://www.youtube.com/channel/UCurOAVtqb7kM1siNlDynzFw\">this channel</a> with "negative harmony" versions of many popular songs.</p>\n<p>The change in sonority of those songs clearly indicated for me a change in the <em>mode</em> of the song, which kind of go against the grain of what I've been told about the action of those transformations. In this post I want to explore negative harmony as a transformation not only on chords but on scales, modes and melody.</p>\n<h3><a href=\"#what-is-negative-harmony-anyway\" aria-hidden=\"true\" class=\"anchor\" id=\"what-is-negative-harmony-anyway\"></a>What is Negative Harmony anyway?</h3>\n<p>There are many ways to understand negative harmony and I'm not going to pretend I'm able to give a full historical background. The video by the 12tone channel that I linked above does a much better job than I ever could. Here in this post I'm mainly interested in this as a transformation that can be applied to a particular set of notes, and that's how I'm going to describe and treat it.</p>\n<p>To understand what's the transformation being applied, consider the circle of fifths. In the key of C, the negative harmony transformation consists in swapping notes along the axis that cut the circle in half between the C and G.</p>\n<center>\n<figure>\n <object data=/media/negative_harm_C.svg width=\"320\">\n </object>\n <figcaption>\n The negative harmony transformation visualized in the Circle of Fifths\n </figcaption>\n</figure>\n</center>\n<p>The arrows above indicate the notes that are to be switched. So, to apply the negative harmony transformation in the key of C, one would change C for G, D for F, etc.</p>\n<h3><a href=\"#parameterizing-negative-harmony\" aria-hidden=\"true\" class=\"anchor\" id=\"parameterizing-negative-harmony\"></a>Parameterizing Negative Harmony</h3>\n<p>One aspect that is not often discussed about this transformation is that it is actually a <strong>family of transformations</strong> parameterized by a key center. Note that the reflection axis chosen above is only one chosen from 12 possibilities. To highlight this, notice that in the diagram above the transformation in the key of C takes F to D. In the diagram below we have transformation in the key of A, showing that in this case it takes F to A♭.</p>\n<center>\n<figure>\n <object data=/media/negative_harm_A.svg width=\"320\">\n </object>\n <figcaption>\n The circle of fifths highlighting the negative harmony transformation in the key of A.\n </figcaption>\n</figure>\n</center>\n<h2><a href=\"#transforming-modes\" aria-hidden=\"true\" class=\"anchor\" id=\"transforming-modes\"></a>Transforming modes</h2>\n<p>Typically negative harmony is discussed <a href=\"https://www.brltheory.com/resources/negative-harmony-chord-chart/\">in the context of chords</a>, with an expectation that transformed chords has similar functions (having "equivalent tonal gravity"). I want to discuss how this transformation behaves when considering melodic elements, scales and modes.</p>\n<p>To start, let's check what happens when we transform the seven modes of the major scale. For example, let's apply the transformation over the major scale. 
As an illustration, the sequence of notes [C, D, E, F, G, A, B] (the Ionian mode of C Major), transformed in the key of C, will result in [G, F, E♭, D, C, B♭, A♭].</p>\n<p>This sequence can be interpreted in a lot of different ways. Harmonically it is typical to consider the following argument. If the original harmony is in the key of C major, then the I chord is the C major triad (C, E, G). This triad transforms to (G, E♭, C), which is an inversion of the C minor triad. Since this would also fit the role of the I chord in the new harmony, this should be interpreted as transforming from a C major harmony to a C minor one.</p>\n<p>That's a good argument, but if we focus on the melody, the note that would be treated as the focus and resting place of the melody in the original key would be C, which would turn into G in the new melody. So, we could interpret G as the root note of the transformed sequence, which would make it a G Phrygian melody.</p>\n<p>Let's take this second stance and see what happens with all modes. Under this interpretation, this is how the modes transform:</p>\n<ul>\n<li>C Ionian transforms into G Phrygian.</li>\n<li>C Dorian transforms into G Dorian.</li>\n<li>C Phrygian transforms into G Ionian.</li>\n<li>C Lydian transforms into G Locrian.</li>\n<li>C Mixolydian transforms into G Aeolian.</li>\n<li>C Aeolian transforms into G Mixolydian.</li>\n<li>C Locrian transforms into G Lydian.</li>\n</ul>\n<h3><a href=\"#negative-harmony-inverts-brightness\" aria-hidden=\"true\" class=\"anchor\" id=\"negative-harmony-inverts-brightness\"></a>Negative Harmony inverts brightness</h3>\n<p>Finally, here's the neat and interesting pattern to notice: if we ignore the roots, the quality of the modes above is transforming up and down the <a href=\"https://www.youtube.com/watch?v=9rEqrPwVITY\">Brightness Scale</a>.</p>\n<center>\n<figure>\n <object data=media/brightness_scale.svg width=\"320\">\n </object>\n <figcaption>\n Brightness scale highlighting the negative harmony transformation.\n </figcaption>\n</figure>\n</center>\n<p>The effect of the transformation is to reflect the qualities of the modes around the center of the brightness scale, inverting the value of the brightness for the mode in question (the brightest mode becomes the darkest, the second brightest becomes the second darkest, etc).</p>\n<h2><a href=\"#so-what\" aria-hidden=\"true\" class=\"anchor\" id=\"so-what\"></a>So what?</h2>\n<p>Yes, this is just a simple neat symmetry I found. I intend to write more later on some other questions:</p>\n<ul>\n<li>What happens when you transform modes of other scales?</li>\n<li>Modes of the major scale are closed under this transformation. But this definitely won't always happen. What does it mean when it happens?</li>\n<li>What happens when you transform modes under the negative harmony centered in other keys?</li>\n<li>Is there a "right" key to use?</li>\n</ul>\n<p>Stay tuned.</p>\n",
"summary": "",
"date_published": "2021-02-20T00:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael S. Calsaverini",
"url": "",
"avatar": ""
}
],
"tags": [
"Music Theory",
"Negative Harmony",
"Set Theory (Music)"
],
"language": "en"
},
{
"id": "https://rcalsaverini.github.io/blog/boehm-beraducci-encoding-for-trees-in-python-a-preview.html",
"url": "https://rcalsaverini.github.io/blog/boehm-beraducci-encoding-for-trees-in-python-a-preview.html",
"title": "Boehm-Beraducci encoding for trees in python - a preview",
"content_html": "<p>A few years ago I was very impressed for learning the <a href=\"http://okmij.org/ftp/tagless-final/course/Boehm-Berarducci.html\">Boehm-Berarducci encoding</a>, which is a way for encoding\n<a href=\"https://en.wikipedia.org/wiki/Algebraic_data_type\">Algebraic Data Types</a> (ADTs) into a kind of <a href=\"https://en.wikipedia.org/wiki/Lambda_calculus\">lambda calculus</a> that is well <a href=\"https://en.wikipedia.org/wiki/Typed_lambda_calculus\">typed</a> called <a href=\"https://en.wikipedia.org/wiki/System_F\">System F</a>.\nThe first thing I asked myself was in which languages I would be able to use this encoding to represent ADTs,\nwith python being the most critical one for me.</p>\n<p>I was specially motivated for going back at trying this in Python after [I became very frustrated] with my n-th attempt at\nactually using <a href=\"http://mypy-lang.org/\">mypy</a> as a static type checker. Using Boehm-Berarducci encodings certainly will avoid some difficulties\nwith recursive types, but I don't think it will solve everything (specially my problems with higher kinded types\nand generic tuples). Aditionaly, I'm not certain about how efficient this implementation would be (both in space and time complexity)\nin a language without the facilities of modern and efficient functional compiler like GHC (tail-rec optimization, etc).</p>\n<p>That said, it's a lot of fun to code this, and I plan to explore this in future posts. As an appetizer, here's a simple tree type\nthat typechecks correctly using <a href=\"http://mypy-lang.org/\">mypy</a> in Python, with smart constructors for leafs and branches:</p>\n<pre><code class=\"language-python\">\n from typing import NamedTuple, TypeVar, Callable\n\n A = TypeVar("A")\n R = TypeVar("R")\n Branch = Callable[[R, R], R]\n\n\n class BinaryTree(NamedTuple):\n constructor: Callable[[R, Branch[R]], R]\n\n def __call__(self, leaf: R, branch: Branch[R]):\n return self.constructor(leaf, branch)\n\n @classmethod\n def leaf(cls):\n def leafer(leaf: R, branch: Branch[R]) -> R:\n return leaf\n\n return cls(leafer)\n\n @classmethod\n def branch(cls, left, right):\n def brancher(leaf: R, branch: Branch[R]) -> R:\n return branch(left, right)\n\n return cls(brancher)\n</code></pre>\n<p>I'll still be checking it this is the actually usable in real code and it certainly falls short from the elegance\nand terseness of a Haskell implementation. But compared with the typical Python code it's actually not that bad.</p>\n<p>I'll be posting here any progress I have with this.</p>\n<p>[I became very frustrated]: {{< relref "blog/2019-01-20-frustrations-with-mypy.md" >}}</p>\n",
"summary": "",
"date_published": "2019-01-21T00:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael Calsaverini",
"url": "https://bertha.social/@rcalsaverini",
"avatar": "https://github.com/rcalsaverini.png"
}
],
"tags": [
"programming",
"Python",
"Functional programming",
"Type encodings"
],
"language": "en"
},
{
"id": "https://rcalsaverini.github.io/blog/a-few-frustrations-with-python-s-type-annotation-system.html",
"url": "https://rcalsaverini.github.io/blog/a-few-frustrations-with-python-s-type-annotation-system.html",
"title": "A few frustrations with Python's type annotation system",
"content_html": "<p>I have on and off again tried to use <a href=\"http://mypy-lang.org/\">mypy</a> to type check my python code, but some shortcomings of Python's type annotation system really get in the way. This came now because I needed to write code involving trees that had to change the types of values stored on the nodes. This highlighted a few serious shortcomings for anyone that is accostumed to use stronger type systems.</p>\n<h3><a href=\"#the-ugly-syntax-for-function-types-is-annoying-but-there-are-worse-problems\" aria-hidden=\"true\" class=\"anchor\" id=\"the-ugly-syntax-for-function-types-is-annoying-but-there-are-worse-problems\"></a>The ugly syntax for function types is annoying but there are worse problems</h3>\n<p>Yes, writing <code>Callable[[Callable[[A], B], F[A]], F[B]]</code> instead of <code>(a -> b) -> f a -> f b</code> as in Haskell or <code>(A => B, F[A]) => F[B]</code> (or maybe the uncurried <code>(A => B) => (F[A] => F[B])</code> version) as in Scala is really annoying.</p>\n<p>But that's neither here nor there. One can get accostumed to it. On the other hand, it is certainly symptomatic of the philosophy chosen for the type system: passing functions around is not an idea on the forefront of this design.</p>\n<h3><a href=\"#people-are-not-using-it\" aria-hidden=\"true\" class=\"anchor\" id=\"people-are-not-using-it\"></a>People are not using it</h3>\n<p>In general, the overwhelming majority of the python libraries I use simply don't have type annotations or stub files and don't plan to add them in the near future. Writing stub files on your own is a pain. This by itself prevents the adoption of type annotations without a lot of effort in providing stub files yourself.</p>\n<h3><a href=\"#the-ad-hoc-polymorphism-mechanism-chosen-is-annoying\" aria-hidden=\"true\" class=\"anchor\" id=\"the-ad-hoc-polymorphism-mechanism-chosen-is-annoying\"></a>The ad hoc polymorphism mechanism chosen is annoying</h3>\n<p>The only way to do ad hoc polymorphism is with structural subtyping (using <code>Protocol</code>). This isn't so bad, since the language embraces duck typing so thoroughly. But it's somewhat annoying for two reasons:</p>\n<ol>\n<li>\n<p>First, admitedly a lesser problem, there's no clear indication in the code that a given class implements a particular <code>Protocol</code>. There's no explicit inheritance, nor explicit instancing of the <code>Protocol</code>. If you don't know the protocol exists, when you see the code of a class, you have no clue that there is a more general pattern that this class implements.</p>\n</li>\n<li>\n<p>Second, there's no "post facto" instancing of <code>Protocol</code> like it's possible to do with Haskell's or Scala's typeclasses, or Go's interfaces. You have one chance to instanciate a class as a particular Protocol: when you write that classes code. If the class belongs to a third party library you can't change, you have to write wrappers (which are terribly annoying, because the language offers no syntax sugar for them).</p>\n</li>\n</ol>\n<h3><a href=\"#no-support-for-lightweight-parametrically-polymorphic-product-types\" aria-hidden=\"true\" class=\"anchor\" id=\"no-support-for-lightweight-parametrically-polymorphic-product-types\"></a>No support for lightweight parametrically polymorphic product types</h3>\n<p>Python's type annotations don't allow you to write generic named tuples. 
This prevents one to write very lightweight types like:</p>\n<pre><code class=\"language-python\">class Foo(NamedTuple, Generic[A]):\n a_value: A\n a_list: List[A]\n</code></pre>\n<p>If you want a parametrically polymorphic type, it must be a fully fledged class by itself.</p>\n<pre><code class=\"language-python\">### will type check\nclass Foo(Generic[A]):\n def __init__(self, a_value: A, a_list: List[A]):\n self.a_value = a_value\n self.a_list = a_list\n</code></pre>\n<p>This discourages me to use it for many applications, since Python's classes are not exactly lightweight things and I'd rather not have a class if I don't really need one. You could use a type synonym for an untagged tuple, but this would be a serious documentation hazard:</p>\n<pre><code class=\"language-python\">Foo = Tuple[A, List[A]]\n</code></pre>\n<h3><a href=\"#no-support-for-lightweight-recursive-product-types\" aria-hidden=\"true\" class=\"anchor\" id=\"no-support-for-lightweight-recursive-product-types\"></a>No support for lightweight recursive product types</h3>\n<p>Similarly, Python's type annotations don't allow recursive types unless you're dealing with a full fledged class. Recursive <code>NamedTuples</code> are forbidden, and so are recursive <code>Union</code>s (which wouldn't be possible given the restriction on higher-kinded types anyway, see below). This further prevents fast and lightweight types like:</p>\n<pre><code class=\"language-python\">class BinaryTree(NamedTuple):\n left: "BinaryTree"\n right: "BinaryTree"\n</code></pre>\n<p>and requires you to use the full (and heavy) Python classes:</p>\n<pre><code class=\"language-python\">class BinaryTree(object):\n def __init__(self, left: "BinaryTree", right: "BinaryTree"):\n self.left = left\n self.right = right\n</code></pre>\n<h3><a href=\"#no-higher-kinded-types\" aria-hidden=\"true\" class=\"anchor\" id=\"no-higher-kinded-types\"></a>No higher kinded types</h3>\n<p>Python's type annotations have no support for <a href=\"https://stackoverflow.com/questions/6246719/what-is-a-higher-kinded-type-in-scala\">higher-kinded</a> types. All type variables in a class that inherit from Generic must be of kind <code>*</code>. This is kind of a catastrophe for any kind of more advanced use of the type system to improve correctness garantees. It also prevents some uses of higher kinded patterns like functors, monads, etc.</p>\n<p>For example, you can't use the finally tagless or <a href=\"http://okmij.org/ftp/tagless-final/index.html\">tagless-final</a> pattern. At least not like this:</p>\n<pre><code class=\"language-python\">class DataAccessMonad(Generic[M]):\n def get_user(self, user_id: UserId) -> M[User]\n pass\n</code></pre>\n<p>Also, this makes it difficult to implement "post-facto" ad hoc polymorphism using something like <a href=\"https://typelevel.org/cats/typeclasses.html\">Scala's typeclass instance</a> mechanisms to escape <code>Protocol</code>s. For this, one would need to write:</p>\n<pre><code class=\"language-python\">class FunctorInstance(Generic[F]):\n\n @staticmethod\n def map(value: F[A], function: Callable[[A], B]) -> F[B]:\n pass\n</code></pre>\n<p>My original plan for a type class library involved creating a way to inject the instance, wrap the <code>F[A]</code> value and monkey patch it to call <code>value.map</code> when you need it. 
The fact that one can't use higher-kinded types prevents the generic <code>FunctorInstance</code> above from type checking.</p>\n<h3><a href=\"#and-so-no-fixed-point-functors-and-other-niceties\" aria-hidden=\"true\" class=\"anchor\" id=\"and-so-no-fixed-point-functors-and-other-niceties\"></a>And so... no fixed point functors and other niceties</h3>\n<p>This also means that you can't use fixed point types like:</p>\n<pre><code class=\"language-python\">class CoFree(NamedTuple, Generic[F, A]):\n value: A\n continuations: F["CoFree[F, A]"]\n</code></pre>\n<p>because this requires <code>F</code> to be of kind <code>* -> *</code>. Fixed point types are awfully useful for dealing with tree-like structures (see for example <a href=\"https://www.youtube.com/watch?v=7xSfLPD6tiQ\">this talk from Rob Norris</a>) and would similarly fail to type check on <a href=\"http://mypy-lang.org/\">mypy</a>.</p>\n<h3><a href=\"#conclusion\" aria-hidden=\"true\" class=\"anchor\" id=\"conclusion\"></a>Conclusion</h3>\n<p>There are more problems, but those are the main ones that prevented me from really using <a href=\"http://mypy-lang.org/\">mypy</a> or type annotations in Python. This hasn't prevented me from writing good and useful Python code, and I still love to write Python. But it certainly adds friction.</p>\n",
"summary": "",
"date_published": "2019-01-20T22:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael Calsaverini",
"url": "https://bertha.social/@rcalsaverini",
"avatar": "https://github.com/rcalsaverini.png"
}
],
"tags": [
"Python",
"Type safety",
"Mypy",
"programming"
],
"language": "en"
},
{
"id": "https://rcalsaverini.github.io/blog/type-safe-records-as-an-excuse-to-learn-type-level-programming-in-haskell.html",
"url": "https://rcalsaverini.github.io/blog/type-safe-records-as-an-excuse-to-learn-type-level-programming-in-haskell.html",
"title": "Type safe records as an excuse to learn type level programming in Haskell",
"content_html": "<p>I've been recently trying to learn more advanced type-level constructs in Haskell and was very happy to find <a href=\"https://www.youtube.com/watch?v=wNa3MMbhwS4\">this amazing talk</a> by <a href=\"http://www.seas.upenn.edu/~sweirich/\">Prof. Stephanie Weirich</a> about Dependent Types in haskell. This talk helped me to understand deeper a few more recent concepts introduced by some of GHC's extensions and how to use them. In this post I want to focus a little bit in a simplified version of one of the data structures Prof. Weirich uses in her talk. She does a lot more than this, in the talk, but I'm going slowly to understand every bit of it.</p>\n<h3><a href=\"#type-safe-records\" aria-hidden=\"true\" class=\"anchor\" id=\"type-safe-records\"></a>Type Safe Records</h3>\n<p>The <em>record problem</em> is an old problem in Haskell. Succintly, Haskell's traditional native records have lots of problems -- you couldn't reuse record names, updating record fields lead to dull boilerplate code, etc. Many of those problems <a href=\"http://www.parsonsmatt.org/overcoming-records/#/\">are attacked</a> by the idea of <a href=\"https://hackage.haskell.org/package/lens\">lenses</a> (see this <a href=\"https://skillsmatter.com/skillscasts/4251-lenses-compositional-data-access-and-manipulation\">talk by Simon Peyton Jones</a> to get the basics of it) and many <a href=\"http://hackage.haskell.org/packages/#cat:Records\">other libraries</a> as well as the <a href=\"https://hackage.haskell.org/package/base-4.10.1.0/docs/GHC-Records.html\">OverloadedRecordFields</a> extension.</p>\n<p>Though there are many solutions attacking parts of the record problem, there's one particular aspect of it which offers a nice opportunity to learn type level programming techniques in Haskel and are worth working out from scratch: how to create records whose type's are aware of the fields contained in the records and their types?</p>\n<p>That means, how to create a record type such that the when we try to access a non-existing field:</p>\n<pre><code class=\"language-haskell\">> getField "nonExistentFieldName" record\n</code></pre>\n<p>we get an actual type error in compile time. This allows us to completely rule out a whole class of bugs from our programs: we don't need to worry about users acessing unexisting fields type of errors because this code wouldn't even compile.</p>\n<h3><a href=\"#first-attempt-a-list-of-named-entries\" aria-hidden=\"true\" class=\"anchor\" id=\"first-attempt-a-list-of-named-entries\"></a>First attempt: a list of named entries</h3>\n<p>Our first attempt will be to model our records as lists of named-entries:</p>\n<pre><code class=\"language-haskell\">data Entry a = Entry String a\ndata Dict a = Nil | Cons (Entry a) (Dict a)\n\ngetField :: String -> Dict a -> Maybe a\ngetField _ Nil = Nothing\ngetField name (Cons (Entry name' x) dict') = case (name == name') of\n True -> Just x\n False -> getField name dict'\n</code></pre>\n<p>This compiles alright, but it's not a solution to our problem. First of all, it has no information about the entry field names in the type. The type of <code>Dict a</code> only carries information about the type of the values. Second, all fields must be of the same type. 
If you try to build something like: ''</p>\n<pre><code class=\"language-haskell\">-- this raises a type error\nmyRecord = Cons (Entry "name" "Rafael") (Cons (Entry "age" (35::Int)) Nil)\n</code></pre>\n<p>You'll get an obvious type error since <code>Cons (Entry "age" 35::Int) Nil</code> is a <code>Dict Int</code> and <code>Entry "name" "Rafael"</code> is an <code>Entry String</code>, and <code>Cons</code> type signature is <code>Entry a -> Dict a -> Dict a</code>.</p>\n<p>So, it seems that this is not a very useful record (:P).</p>\n<p>Let's try to solve the second problem first and make the type of each entry more flexible. For that we need GADTs and existential types.</p>\n<h3><a href=\"#using-gadts-and-existential-types\" aria-hidden=\"true\" class=\"anchor\" id=\"using-gadts-and-existential-types\"></a>Using GADTs and existential types</h3>\n<p>The second problem is caused by the fact the we have a explicit reference to the type of the entry in the <code>Dict</code> type constructor. We could try to make it more flexible like this:</p>\n<pre><code class=\"language-haskell\">data Dict a = Nil | Cons (Entry a) (Dict b)\n</code></pre>\n<p>But of course this doesn't work because the type variable <code>b</code> is not defined in this scope. There is no way for the type checker to fix it:</p>\n<pre><code class=\"language-ghci\">/.../Post.hs:5:42: error:\n Not in scope: type variable ‘b’\n |\n5 | data Dict a = Nil | Cons (Entry a) (Dict b)\n</code></pre>\n<p>For this to work, we need to put <code>b</code> in scope, without adding it as argument to the type constructor or else we'd get an infinite regress of types (I'll get back to this later). For that we need two GHC extensions: <code>GADTs</code> and <code>Rank2Types</code> (or <code>RankNTypes</code>, or other extension providing the <code>forall</code> keyword).</p>\n<p><code>GADTs</code> is an extension that allows us to give more generic types to the data constructors of an algebraic data type. It also allows a nicer syntax for data constructors with a long type signature. With <code>GADTs</code> and <code>RankNTypes</code> enabled we can do this:</p>\n<pre><code class=\"language-haskell\">{-# LANGUAGE GADTs, RankNTypes #-}\n\ndata Entry a = Entry String a\ndata Dict a = Nil | forall b . Cons (Entry a) (Dict b)\n</code></pre>\n<p>This compiles correctly and we can try to use it! Now our previous record is well typed:</p>\n<pre><code class=\"language-haskell\">myRecord :: Dict String\nmyRecord = Cons (Entry "name" "Rafael") (Cons (Entry "age" (35::Int)) Nil)\n</code></pre>\n<p>But look what happened. The information that there's an <code>Int</code> somewhere inside the structure of the record is gone! Yep. We enclosed it in a <code>forall</code> and all the information <code>Cons</code> have now is that its second argument is some kind of <code>Dict b</code>, whatever <code>b</code> is. This doesn't look like a good sign.</p>\n<p>Let's try to write a <code>getField</code> function. We still didn't solve the problem of letting the type know what fields are possible, so we still need to guard ourselves against the possibility that the user will try to fetch the data from a field that doesn't exist. So the signature of <code>getField</code> still is <code>String -> Dict a -> Maybe</code>... wait a minute! What's the return type?</p>\n<p>In the record above, if the field name is <code>"name"</code> it should return a <code>String</code>, but if the field name is <code>"age"</code> it should return an <code>Int</code>. 
But the compiler wouldn't know that because there's no information in the type of the record about the value of the fields in is tail. We only have information about the type of the head entry.</p>\n<p>So, the return type of <code>getField</code> is something like <code>(forall b . Maybe b)</code>? That doesn't look very useful. I can retrieve the value but I loose all the information about its type! This doesn't seem to be working...</p>\n<h3><a href=\"#keeping-track-of-the-field-types\" aria-hidden=\"true\" class=\"anchor\" id=\"keeping-track-of-the-field-types\"></a>Keeping track of the field types</h3>\n<p>I want to get back to "infinite regress of types" I refered above. Why couldn't we put the <code>b</code> type variable above as an argument for the type constructor? Well, let's try and see. We could create a data type <code>Dict a b</code> where <code>a</code> is the value of the head <code>Entry</code> and <code>b</code> is the type of the head of the next <code>Dict</code> down the <code>Cons</code> data constructor. So:</p>\n<pre><code class=\"language-haskell\">data Dict a b = Nil | Cons (Entry a) (Dict b ???)\n</code></pre>\n<p>Oops. Damn, what about the type of the entry after the next entry? Well... Let's put it in the constructor too:</p>\n<pre><code class=\"language-haskell\">data Dict a b c = Nil | Cons (Entry a) (Dict b c ???)\n</code></pre>\n<p>You get it, right? There's always a new type to keep track of. The type of the record must know not only the type of the head entry, but also all the types of all entries in its tail. This looks a hell like a linked list of types, doesn't it? If we had a way to create <strong>a type level list</strong> we could have the following GADT:</p>\n<pre><code class=\"language-haskell\">data Dict (types :: (TypeLevelList Type)) where\n Nil :: Dict TypeLevelEmptyList\n Cons :: (Entry a) -> Dict (tail :: TypeLevelList) -> Dict (a `TypeLevelCons` tail)\n</code></pre>\n<p>Wait, what the hell is this? First of all, what are those type signatures in the wrong place? Those are <em>kind signatures</em>. Kind is the "type of a type constructor". For example, type constructors that have no parameters, like <code>Bool</code> or <code>String</code> have kind <code>Type</code> (or <code>*</code>). Type constructors that take a single parameter, like <code>Maybe</code> have kind <code>Type -> Type</code>. Single parameter Typeclasses like <code>Functor</code> or <code>Monad</code> have kind <code>Type -> Constraint</code>, etc.</p>\n<p>Here I'm supposing that there exists a kind called <code>TypeLevelList</code>, and that there exists two type constructors:</p>\n<ul>\n<li><code>TypeLevelEmptyList</code> with kind <code>TypeLevelList</code>,</li>\n<li><code>TypeLevelCons</code> with kind <code>Type -> TypeLevelList -> TypeLevelList</code>.</li>\n</ul>\n<p>When I write <code>data Dict (types :: TypeLevelList)</code> I'm declaring a type constructor <code>Dict</code>with kind <code>TypeLevelList -> Type</code>. 
This type has two data constructors:</p>\n<ul>\n<li><code>Nil</code> which is just an empty record with type <code>Dict TypeLevelEmptyList</code></li>\n<li><code>Cons</code> which takes an <code>Entry a</code> and a <code>Dict TypeLevelList</code> and returns another <code>Dict TypeLevelList</code>, putting <code>a</code> on the head of that <code>TypeLevelList</code> it received.</li>\n</ul>\n<p>In practice we'd have something like this:</p>\n<pre><code class=\"language-haskell\">emptyRecord :: Dict TypeLevelEmptyList\nemptyRecord = Nil\n\nagedRecord :: Dict (Int `TypeLevelCons` TypeLevelEmptyList)\nagedRecord = Cons (Entry "age" 35) emptyRecord\n\nnamedAndAgedRecord :: Dict (String `TypeLevelCons` Int `TypeLevelCons` TypeLevelEmptyList)\nnamedAndAgedRecord = Cons (Entry "name" "Rafael") agedRecord\n</code></pre>\n<p>This is sweet! We can keep track of the types of all fields! But how do we create those type level lists? :O</p>\n<h3><a href=\"#type-level-lists\" aria-hidden=\"true\" class=\"anchor\" id=\"type-level-lists\"></a>Type Level Lists</h3>\n<p>To create those type level lists we have to use a GHC extension called <code>DataKinds</code>. To understand what <code>DataKinds</code> does, let's consider this simple type declaration:</p>\n<pre><code class=\"language-haskell\">data Nat = Zero | Succ Nat\n</code></pre>\n<p>What this does is to create a type constructor called <code>Nat</code>, whose kind is <code>Type</code>, and two data constructors:</p>\n<ul>\n<li><code>Zero</code>, whose type is <code>Nat</code></li>\n<li><code>Succ</code>, whose type is <code>Nat -> Nat</code></li>\n</ul>\n<p>When you use the <code>DataKinds</code> extension this declaration creates, besides the three objects described above, three more objects:</p>\n<ul>\n<li>a <strong>"kind constructor"</strong> <code>'Nat</code> (the tick is not a typo)</li>\n<li>a <strong>type constructor</strong> <code>'Zero</code> whose <strong>kind</strong> is <code>'Nat</code></li>\n<li>a <strong>type constructor</strong> <code>'Succ</code> whose <strong>kind</strong> is <code>'Nat -> 'Nat</code></li>\n</ul>\n<p>Those types constructed with those type constructors are not inhabited by values, but they are very useful for <strong>type computation</strong>. So, how do we create the "kind constructor" <code>TypeLevelList</code> with type constructors <code>TypeLevelEmptyList</code> and <code>TypeLevelCons</code>? Exactly with the same code that we would use to create a type constructor <code>List</code> with data constructors <code>EmptyList</code> and <code>Cons</code>, but we use the <code>DataKinds</code> extension to lift those objects from the <code>value :: type</code> world to the <code>type :: kind</code> world. We can do:</p>\n<pre><code class=\"language-haskell\">{-# LANGUAGE GADTs, RankNTypes, DataKinds, KindSignatures #-}\n\nmodule Post where\n\nimport Data.Kind (Type)\n\ndata Entry a = Entry String a\n\ndata List a = EmptyList | ListCons a (List a)\n\ndata Dict (a :: (List Type)) where\n Nil :: Dict 'EmptyList\n Cons :: Entry a -> Dict t -> Dict ('ListCons a t)\n</code></pre>\n<p>So, what's happening here? First of all we have the declaration <code>data List a = EmptyList | ListCons a (List a)</code>. This is a simple <em>list type</em>, but since we used the <code>DataKinds</code> extension, we get a new <strong>list kind</strong> for free:</p>\n<ul>\n<li><code>'List</code> is a "kind constructor" which takes a kind and returns another kind (<code>* -> *</code>)</li>\n<li><code>'EmptyList :: forall a . 
List a</code> is a type constructor</li>\n<li><code>'ListCons :: forall a . a -> List a -> List a</code> is another type constructor</li>\n</ul>\n<p>So, when applied to the kind <code>Type</code>, the "kind constructor" <code>'List</code> creates the kind <code>'List Type</code> which is a list of types! We can have the following types which have this kind:</p>\n<pre><code class=\"language-haskell\">'EmptyList\n'ListCons Int 'EmptyList\n'ListCons String ('ListCons Int 'EmptyList)\n</code></pre>\n<p>etc. All those types have kind <code>'List Type</code>. Those types are not inhabited (that is, we can't construct values that have those types), but we can use them to provide compile time information that helps us to avoid bugs, because we can build type constructors that build inhabited types out of them! For example, we can build <code>Dict</code>. Let's check the kind of <code>Dict</code> on GHCi:</p>\n<pre><code class=\"language-haskell\">> :k Dict\nDict :: List Type -> Type\n-- the actual GHCi output is Dict :: List * -> *, but Type is a nice synonym for *\n</code></pre>\n<p>This is what's happening with the declaration <code>data Dict (a :: (List Type))</code>. We used the extension <code>KindSignatures</code> to inform the compiler that the <code>Dict</code> type constructor has a kind which takes an argument of kind <code>List Type</code> and returns a regular <code>Type</code>.</p>\n<p>Now to the data constructors - which are the things that allow us to actually build values of type <code>Dict a</code>. The simplest one is <code>Nil</code> which builds a value of type <code>Dict 'EmptyList</code>. This is an empty record, with no values stored and thus no types stored in the type level list.</p>\n<p>Also we have <code>Cons</code>, which takes a parameter of type <code>Entry a</code> and a parameter of type <code>Dict t</code> (remember, here <code>t</code> is a type of kind <code>'List Type</code>) and builds a value of type <code>Dict ('ListCons a t)</code>. So, <code>Cons</code> does two things:</p>\n<ul>\n<li>it concatenates a new entry with an existing record,</li>\n<li>it also concatenates the <em>type</em> of the value stored in this entry into an <em>existing list of types</em> that describes the types of the entries in the existing record.</li>\n</ul>\n<p>Wow. If that's too much to grasp, let's see this in action:</p>\n<pre><code class=\"language-haskell\">namedRecord :: Dict ('ListCons String 'EmptyList)\nnamedRecord = Cons (Entry "name" "Rafael") Nil\n\nnamedAndAgedRecord :: Dict ('ListCons Int ('ListCons String 'EmptyList))\nnamedAndAgedRecord = Cons (Entry "age" (35::Int)) namedRecord\n</code></pre>\n<p>See how the types of the fields we're creating are concatenated in the type of the record? This allows us to know precisely the types of all the fields in a record!</p>\n<h3><a href=\"#making-it-prettier\" aria-hidden=\"true\" class=\"anchor\" id=\"making-it-prettier\"></a>Making it prettier</h3>\n<p>We didn't have to code our own list type, Haskell already provides one for us and fortunately <code>DataKinds</code> works with the built-in types too. 
So we could have written simply:</p>\n<pre><code class=\"language-haskell\">{-# LANGUAGE GADTs, RankNTypes, DataKinds, KindSignatures, TypeInType, TypeOperators #-}\n\nmodule Post where\n\nimport Data.Kind (Type)\n\ninfixr 6 :>\n\ndata Entry a = Entry String a\n\ndata Dict (a :: [Type]) where\n Nil :: Dict '[]\n (:>) :: Entry a -> Dict t -> Dict (a:t)\n</code></pre>\n<p>We made a few changes to make the types nicer:</p>\n<ul>\n<li>\n<p>We are now using Haskel's built-in lists:</p>\n<pre><code class=\"language-haskell\">> :k Dict\nDict :: [Type] -> Type\n</code></pre>\n<p>This is completely equivalent to the previous signature <code>List Type -> Type</code> the only difference is that we're using the built-in type instead of our custom list type.</p>\n</li>\n<li>\n<p>We're using the <code>TypeInType</code> extension to allow for the syntax <code>[Type]</code></p>\n</li>\n<li>\n<p>We're using the <code>TypeOperators</code> extension to allow for two things:</p>\n<ol>\n<li>using the promoted type constructor <code>(:) :: a -> [a] -> [a]</code> which concatenates a type on the head of a type level list;</li>\n<li>renaming the ugly <code>Cons</code> data constructor to a nicer <code>(:>)</code> infix type operator so that the expressions are nicer looking.</li>\n</ol>\n</li>\n</ul>\n<p>With this modifications, instead of this ugly monster:</p>\n<pre><code class=\"language-haskell\">myRecord :: Dict ('ListCons String ('ListCons Int 'EmptyList))\nmyRecord = Cons (Entry "name" "Rafael") (Cons (Entry "age" 35) Nil)\n</code></pre>\n<p>we can write this:</p>\n<pre><code class=\"language-haskell\">myRecord :: Dict '[String, Int]\nmyRecord = Entry "name" "Rafael" :> Entry "age" 35 :> Nil\n</code></pre>\n<p>Much better, right?</p>\n<h3><a href=\"#this-is-already-too-long-and-you-didnt-get-to-the-point-you-promised\" aria-hidden=\"true\" class=\"anchor\" id=\"this-is-already-too-long-and-you-didnt-get-to-the-point-you-promised\"></a>This is already too long and you didn't get to the point you promised</h3>\n<p>Well, yep. This post is already big and we still don't know:</p>\n<ul>\n<li>how to write <code>getField</code></li>\n<li>how to enhance the type <code>Dict</code> to allow for information about field names to be statically checked by the compiler.</li>\n</ul>\n<p>So it looks like a perfect point to stop and start planning to write Part 2!</p>\n",
"summary": "",
"date_published": "2018-02-12T22:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael Calsaverini",
"url": "https://bertha.social/@rcalsaverini",
"avatar": "https://github.com/rcalsaverini.png"
}
],
"tags": [
"programming",
"Haskell",
"Type safety",
"Records",
"Type-level programming"
],
"language": "en"
},
{
"id": "https://rcalsaverini.github.io/blog/operational-semantics-for-monads.html",
"url": "https://rcalsaverini.github.io/blog/operational-semantics-for-monads.html",
"title": "Operational Semantics for Monads]",
"content_html": "<p><strong>Disclaimer: this is an old blog post from a very old wordpress blog and may contain inacuracies. I reproduced it as is for sentimental reasons. I may revisit this theme later.</strong></p>\n<p>While randomly browsing around on <a href=\"http://planet.haskell.org/\">Planet Haskell</a> I've found <a href=\"http://apfelmus.nfshost.com/articles/operational-monad.html#concatenation-and-thoughts-on-the-interface\">a post</a> on <a href=\"http://apfelmus.nfshost.com/\">Heinrich Apfelmus' blog</a> about something called "operational semantics" for monads. Found it very iluminating. Basically it's a form to implement monads focusing not on defining the bind and return operators, but on what the monad is really supposed to do. It's a view where a monad define a Domain Specific Language, that must be interpreted in order to cause it's effects. It seems to me it's exactly what is implemented in the <a href=\"http://hackage.haskell.org/packages/archive/MonadPrompt/1.0.0.2/doc/html/Control-Monad-Prompt.html\">monadprompt (Control.Monad.Prompt)</a> package, although I'm not sure.</p>\n<h1><a href=\"#the-operational-monad\" aria-hidden=\"true\" class=\"anchor\" id=\"the-operational-monad\"></a>The Operational Monad</h1>\n<pre><code class=\"language-haskell\">{-# LANGUAGE GADTs #-}\nimport Control.Monad\nimport Data.Map (Map, fromList, unionWith)\n</code></pre>\n<p>The definition of a monad on this approach starts with a common interface given by the following data type and a singleton function:</p>\n<pre><code class=\"language-haskell\">data Program m a where\n Then :: m a -> (a -> Program m b) -> Program m b\n Return :: a -> Program m a\n\nsingleton :: m a -> Program m a\nsingleton i = i `Then` Return\n</code></pre>\n<p>Note that the types of the data constructors Then and Return are very similar (but not equal...) to the types of the monadic operations (>>=) and return. This identification of class functions with data constructors is recurring throughout this post. This data type is instanciated as a traditional monad as follows:</p>\n<pre><code class=\"language-haskell\">instance Monad (Program m) where\n return = Return\n (Return a) >>= f = f a\n (i `Then` is) >>= f = i `Then` (\\ x -> is x >>= f)\n</code></pre>\n<p>This is all we need! As an example let's describe the implementation of the State Monad within this approach. This is exactly the first example given by Apfelmus on his post, disguised as a stack machine.</p>\n<h1><a href=\"#example-implementing-the-state-monad\" aria-hidden=\"true\" class=\"anchor\" id=\"example-implementing-the-state-monad\"></a>Example: implementing the State Monad</h1>\n<p>The operational approach to monads begins with recognizing what operations you want your monad to perform. A State Monad have a state, a return value and two function: one that allows us to retrieve the state as the return value, and one that allows us to insert a new state. Let's represent this in the following GADT:</p>\n<pre><code class=\"language-haskell\">data StateOp st retVal where\n Get :: StateOp st st -- retrieve current state as a returned value\n Put :: st -> StateOp st () -- insert a new state\n</code></pre>\n<p>This are the operations needed on the <code>State</code> Monad, but the monad itself is a sequence of compositions of such operations:</p>\n<pre><code class=\"language-haskell\">type State st retVal = Program (StateOp st) retVal\n</code></pre>\n<p>Note that the type synonym State st is a monad already and satisfy all the monad laws by construction. 
We don't need to worry about implementing return and <code>(>>=)</code> correctly: they are already defined.</p>\n<p>So far, so good but... how do we use this monad in practice? This types define a kind of Domain Specific Language: we have operations represented by Get and Put and we can compose them in little programs by using Then and Return. Now we need to write an interpreter for this language. I find this is greatly simplified if you notice that the construct</p>\n<pre><code class=\"language-haskell\">do x <- singleton foo\n bar x\n</code></pre>\n<p>can be translated as <em>foo <code>Then</code> bar</em> in this context. Thus, to define how you'll interpret the later, just think what's the effect you want to have when you write the former.</p>\n<p>Our interpreter will take a <code>State st retVal</code> and a state st as input and return a pair: the next state and the returned value <code>(st, retVal)</code>:</p>\n<pre><code class=\"language-haskell\">interpret :: State st retVal -> st -> (st, retVal)\n</code></pre>\n<p>First of all, how should we interpret the program <code>Return val</code> ? This program just takes any state input and return it unaltered, with val as it's returned value:</p>\n<pre><code class=\"language-haskell\">interpret (Return val) st = (st, val)\n</code></pre>\n<p>The next step is to interpret the program <em>foo <code>Then</code> bar</em>. Looking at the type of things always helps: Then, in this context, have type <code>StateOp st a -> (a -> State st b) -> State st b</code>. So, in the expression <em>foo <code>Then</code> bar</em>, foo is of type <code>StateOp st a</code>, that is, it's a stateful computation with state of type <code>st</code> and returned value of type <code>a</code>. The rest of the expression, <code>bar</code>, is of type <code>a -> State st b</code>, that is, it expects to receive something of the type of the returned value of foo and return the next computation to be executed. We have two options for <code>foo</code>: <code>Get</code> and <code>Put x</code>.</p>\n<p>When executing <em>Get <code>Then</code> bar</em>, we want this program to return the current state as the returned value. But we also want it to call the execution of <code>bar val</code>, the rest of the code. And if <code>val</code> is the value returned by the last computation, <code>Get</code>, it must be the current state:</p>\n<pre><code class=\"language-haskell\">interpret (Get `Then` bar) st = interpret (bar st) st\n</code></pre>\n<p>The program <em>Put x <code>Then</code> bar</em> is suposed to just insert <code>x</code> as the new state and call <code>bar val</code>. But if you look at the type of <code>Put x</code>, it's returned value is empty: <code>()</code>. So we must call <code>bar ()</code>. The current state is then discarded and substituted by <code>x</code>.</p>\n<pre><code class=\"language-haskell\">interpret (Put x `Then` bar) _ = interpret (bar ()) x\n</code></pre>\n<p>We have our interpreter (which, you guessed right, is just the function <code>runState</code> from `Control.Monad.State) and now it's time to write programs in this language. Let's then define some helper functions:</p>\n<pre><code class=\"language-haskell\">get :: State st st\nget = singleton Get\n\nput :: st -> State st ()\nput = singleton . 
Put\n</code></pre>\n<p>and write some code to be interpreted:</p>\n<pre><code class=\"language-haskell\">example :: Num a => State a a\nexample = do x <- get\n put (x + 1)\n return x\n\ntest1 = interpret example 0\ntest2 = interpret (replicateM 10 example) 0\n</code></pre>\n<p>This can be run in ghci to give exactly what you would expect from the state monad:</p>\n<pre><code class=\"language-haskell\">*Main> test1\n(1,0)\n\n*Main> test2\n(10,[0,1,2,3,4,5,6,7,8,9])\n</code></pre>\n<h1><a href=\"#vector-spaces\" aria-hidden=\"true\" class=\"anchor\" id=\"vector-spaces\"></a>Vector Spaces</h1>\n<p>The approach seems very convenient from the point of view of developing applications, as it's focused on what are actions the code must implement and how the code should be executed. But it seems to me that the focus on the operations the monad will implement is also very convenient to think about mathematical structures. To give an example, I'd like to implement a monad for Vector Spaces, in the spirit of Dan Piponi (Sigfpe)'s ideas <a href=\"http://blog.sigfpe.com/2007/02/monads-for-vector-spaces-probability.html\">here</a>, <a href=\"http://blog.sigfpe.com/2007/03/monads-vector-spaces-and-quantum.html\">here</a> and <a href=\"http://blog.sigfpe.com/2009/05/trace-diagrams-with-monads.html\">here</a>.</p>\n<p>A vector space $\\mathbb{V_F}$ is a set of elements $\\mathbf{x}\\in\\mathbb{V_F}$ that can be summed ($\\mathbf{x} + \\mathbf{y} \\in\\mathbb{V_F}$ if $\\mathbf{x},\\mathbf{y} \\in \\mathbb{V_F}$) and multiplied elements of a field ($\\alpha\\mathbf{x}$ if $\\alpha\\in \\mathcal{F}$ and $\\mathbf{x}\\in\\mathbb{V_F}$). If we want this to be implemented as a monad then, we should, in analogy with what we did for the State Monad, write a GADT with data constructors that implement the sum and product by a scalar:</p>\n<pre><code class=\"language-haskell\">data VectorOp field label where\n\n Sum :: Vector field label\n -> Vector field label\n -> VectorOp field label\n\n Mul :: field\n -> Vector field label\n -> VectorOp field label\n\ntype Vector field label = Program (VectorOp field) label\n</code></pre>\n<p>and then we must implement a interpreter:</p>\n<pre><code class=\"language-haskell\">runVector :: (Num field, Ord label) => Vector field label -> Map label field\nrunVector (Return a) = fromList [(a, 1)]\nrunVector (Sum u v `Then` foo) = let uVec = (runVector (u >>= foo))\n vVec = (runVector (v >>= foo))\n in unionWith (+) uVec vVec\nrunVector (Mul x u `Then` foo) = fmap (x*) (runVector (u >>= foo))\n</code></pre>\n<p>The interpreter <code>runVector</code> takes a vector and returns it's representation as a <code>Map</code>. 
As an example, we could do the following:</p>\n<pre><code class=\"language-haskell\">infixr 3 <*>\ninfixr 2 <+>\n\nu <+> v = singleton $ Sum u v\nx <*> u = singleton $ Mul x u\n\ndata Base = X | Y | Z deriving(Ord, Eq, Show)\n\nx, y, z :: Vector Double Base\nx = return X\ny = return Y\nz = return Z\n\nreflectXY :: Vector Double Base -> Vector Double Base\nreflectXY vecU = do cp <- vecU\n return (transf cp)\n where transf X = Y\n transf Y = X\n transf Z = Z\n</code></pre>\n<p>and test this on ghci:</p>\n<pre><code class=\"language-ghci\">*Main> runVector $ x <+> y\nfromList [(X,1.0),(Y,1.0)]\n\n*Main> runVector $ reflectXY $ x <+> z\nfromList [(Y,1.0),(Z,1.0)]\n</code></pre>\n<p>As Dan Piponi points out in his talk, any function acting on the base <code>f :: Base -> Base</code> is lifted to a linear map on the vector space Space field Base by doing (because this is the Free Vector Space over <code>Base</code>):</p>\n<pre><code class=\"language-haskell\">linearTrans f u = do vec <- u\n return (f vec)\n</code></pre>\n<p>More on this later. :)</p>\n",
"summary": "",
"date_published": "2010-08-25T00:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael S. Calsaverini",
"url": "",
"avatar": ""
}
],
"tags": [
"programming",
"Haskell",
"Monads",
"Operational Monads",
"Free Monads",
"Free Vector Space",
"Category Theory"
],
"language": "en"
},
{
"id": "https://rcalsaverini.github.io/blog/stochastic-processes-as-monad-transformers.html",
"url": "https://rcalsaverini.github.io/blog/stochastic-processes-as-monad-transformers.html",
"title": "Stochastic Processes as Monad Transformers",
"content_html": "<p><strong>Disclaimer: this is an old blog post from a very old wordpress blog and may contain inacuracies. I reproduced it as is for sentimental reasons. I may revisit this theme later.</strong></p>\n<p>I have a difficulty to understand functional programming concepts that I can’t put to some very simple and natural use (natural for me, of course). I need to find the perfect simple example to implement to finally understand something. And I’m not a computer scientist, so things like parsers and compilers have very little appeal to me (probably because I don’t understand them…). I’m a physicist, so this drives me to look for physical problems that can be implemented in Haskell so I can understand some concepts.</p>\n<p>Monad transformers still eludes me. But I think I finally got the perfect subject were I can understand them: stochastic processes! First some book keeping:</p>\n<pre><code class=\"language-haskell\">import Control.Monad.State\nimport Control.Monad\nimport Control.Monad.Rand\n</code></pre>\n<p>Now, stochastic processes have characteristics related to two different monads. In one hand, they are dynamical processes, and the way to implement dynamics in Haskell is with state monads. For example, if I want to iterate the logistic map:</p>\n<p>$$x_{t+1} = \\alpha x_t\\left(1-x_t\\right)$$</p>\n<p>$$ teste = teste $$</p>\n<p>I could do the following:</p>\n<pre><code class=\"language-haskell\"> f :: Double -> Double\n f x = 4*x*(1-x)\n\n logistic :: State Double Double\n logistic = do x0 <- get\n let x1 = f x\n put x1\n return x1\n runLogistic :: State Double [Double]\n runLogistic n x0= evalState (replicateM n logistic) x0\n</code></pre>\n<p>Running this on ghci would give you, for example:</p>\n<pre><code class=\"language-haskell\"> *Main> runLogistic 5 0.2\n [0.6400000000000001,0.9215999999999999,0.28901376000000045, 0.8219392261226504,0.5854205387341]\n</code></pre>\n<p>So we can make the loose correspondence: dynamical system ↔ state monad.</p>\n<p>On the other hand, stochastic processes are compositions of random variables, and this is done with the Rand monad (found in <code>Control.Monad.Random</code>). As an example, the Box-Muller formula tells us that, if I have two inpendent random variables $x$ and $y$, distributed uniformly between in the \\([0, 1]\\) interval, then, the expression:</p>\n<p>$$\\sqrt{-2\\log(x)}\\cos(2\\pi y)$$</p>\n<p>will be normally distributed. We can write then:</p>\n<pre><code class=\"language-haskell\">boxmuller :: Double -> Double -> Double\nboxmuller x y = sqrt(-2*log x)*cos(2*pi*y)\n\nnormal :: Rand StdGen Double -- normally distributed\nnormal = do x <- getRandom\n y <- getRandom\n return $ boxmuller x y\n\nnormals n = replicateM n normal -- n independent samples from normal\n</code></pre>\n<p>Running this function we get what we need:</p>\n<pre><code class=\"language-haskell\">*Main> (evalRand $ normals 5) (mkStdGen 0) =\n[0.1600255836730147,0.1575360140445035,-1.595627933129274,\n-0.18196791439834512,-1.082222285056746]\n</code></pre>\n<p>So what is a stochastic process? In very rough terms: is a dynamical system with random variables. So we need a way to make the <code>Rand</code> monad to talk nicely with the <code>State</code> monad. The way to do this is to use a monad transformer, in this case, the <code>StateT</code> transformer. Monad transformers allows you to combine the functionalities of two different monads. 
In the case of the <code>StateT</code> transformer, it allows you to add a state to any other monad you want. In our case, we want to wrap the <code>Rand</code> monad inside a <code>StateT</code> transformer and work with things of type:</p>\n<pre><code class=\"language-haskell\">foo :: StateT s (Rand StdGen) r\n</code></pre>\n<p>This type represents a monad that can store a state of type <code>s</code>, like the State monad, and can generate random values of type <code>r</code>, like the Rand monad. In general, we would have a type</p>\n<pre><code class=\"language-haskell\">foo2 :: (MonadTrans t, Monad m) => t m a\n</code></pre>\n<p>In this case, <code>t = StateT s</code> and <code>m = Rand StdGen</code>. The class <code>MonadTrans</code> is defined in <code>Control.Monad.Trans</code>, and provides the function:</p>\n<pre><code class=\"language-haskell\">lift :: (MonadTrans t, Monad m) => m a -> t m a\n</code></pre>\n<p>Here, <code>t m</code> is itself a monad, and can be treated like one throughout the code. It works like this: inside a do expression you can use the <code>lift</code> function to access the inner monad. Things called with <code>lift</code> will operate in the inner monad. Things called without <code>lift</code> will operate in the outer monad.</p>\n<p>So, suppose we want to simulate this very simple process:</p>\n<p>$$x_{t+1} = x_t + \\eta_t$$</p>\n<p>where \\(\\eta_t\\) is drawn from a normal distribution. We would do:</p>\n<pre><code class=\"language-haskell\">randomWalk :: StateT Double (Rand StdGen) Double\nrandomWalk = do eta <- lift normal\n x <- get\n let x' = x + eta\n put x'\n return x'\n\nrunWalk :: Int -> Double -> StdGen -> [Double]\nrunWalk n x0 gen = evalRand (replicateM n $ evalStateT randomWalk x0) gen\n</code></pre>\n<p>The <code>evalStateT</code> function is just <code>evalState</code> adapted to run a <code>StateT</code> monad. Running this on ghci we get:</p>\n<pre><code class=\"language-haskell\"> *Main> runWalk 5 0.0 gen\n[0.1600255836730147,0.1575360140445035,-1.595627933129274,\n-0.18196791439834512,-1.082222285056746]\n</code></pre>\n<p>This is what we can accomplish: we can easily operate simultaneously with functions that expect a state monad, like put and get, we can unwrap things with <code><-</code> from the inner <code>Rand</code> monad by using <code>lift</code>, and we can return things to the state monad. We could have any monad inside the <code>StateT</code> transformer. For example, we could have another <code>State</code> monad. Here is a fancy implementation of the Fibonacci sequence using a <code>State</code> monad (that stores the last but one value in the sequence as its internal state) inside a <code>StateT</code> transformer (that stores the last value of the sequence):</p>\n<pre><code class=\"language-haskell\">fancyFib :: StateT Int (State Int) Int\nfancyFib = do old <- lift get\n new <- get\n let new' = new + old\n old' = new\n lift $ put old'\n put new'\n return new\n\nfancyFibs :: Int -> StateT Int (State Int) [Int]\nfancyFibs n = replicateM n fancyFib\n</code></pre>\n<p>And we can run this to get:</p>\n<pre><code class=\"language-ghci\">*Main> evalState (evalStateT (fancyFibs 10) 1) 0\n[1,1,2,3,5,8,13,21,34,55]\n</code></pre>\n",
"summary": "",
"date_published": "2010-08-03T00:00:00-00:00",
"image": "",
"authors": [
{
"name": "Rafael S. Calsaverini",
"url": "",
"avatar": ""
}
],
"tags": [
"programming",
"Haskell",
"Monad Transformers",
"Monads",
"Stochastic Processes",
"Probability Monad"
],
"language": "en"
}
]
}