A complete system for remembering knowledge: first thoughts

There is no knowledge without remembering.
(“Knowledge…”, by Sandy Roberts / CC BY 2.0)

My thesis is that what we remember is us. This means we can mould ourselves by choosing what we remember and making a conscious effort to do so. The Art of Memory provides the foundations for a method; the Great Books, the raw material to work upon. But the Art lacks systematisation, and remembering the Great Books is not just memorising them. The Art of Memory needs to be formalised, and the Art of Reading needs to be operationalised. This post talks a bit about both topics in a very intertwined manner. Further posts will be dedicated to each of the subjects touched upon. For now, everything is still night science, but I can see light gradually springing up on the horizon.

Knowledge comes from many sources: books, scientific articles, life experiences, oral teachings, introspection, dreams, you name it. But all these sources have, through the ages, been coalesced into the form of books, as a standardised way of storing and transmitting knowledge. No doubt some losses occur. It is hard, for instance, to imagine white men being able to reduce to words every single piece of knowledge aboriginal peoples have about their environment, transmitted to them by oral tradition and rituals throughout generations. Nevertheless, much has been eternalised by ink, ever since men decorated their caves, and especially now, when language is manifested through bytes — or “digital ink”. And the best way to increase the signal-to-noise ratio of the surreal amount of written words today is, still, to find good books.

My vision is to create some sort of app to aid reading and memorisation. Whether this will ever come true doesn’t matter right now; what I want is to establish the modus operandi that would underlie such an app, even if I end up applying it forever as a tedious and manual procedure. Thinking in terms of a complete app stimulates the formalisation and mechanisation of the method, since any machine demands clear and objective rules for operating.

As I’ve said elsewhere, the problem of remembering books can be thought of as a knowledge representation problem. If books provided the best representational scheme for knowledge, nobody would have any trouble studying for exams. I see the problem as two-fold. First, knowledge in books is slowly inculcated in our minds by a number of artifices that make reading (and learning) gentler; in the process, most of what is written ends up comprised of much linguistic elaboration surrounding a few nuggets of knowledge. We must disentangle them, keeping only what matters. Second, whatever remains after processing the information is a collection of sentences with varying degrees of cohesion and coherence; they are sentences structured in a way not conducive to proper memorisation. We must re-structure them, keeping their inherent meaning. Poets solved this problem millennia ago; unfortunately, I know no poets and I am definitely not one.

I envisage the solution for these two problems as a process with four stages. First, we must know how to read well, so as to extract the gist of the text, while augmenting it with our own thoughts, reasonings and conclusions. Second, we must simplify the language used in the extracted quotations, both in terms of vocabulary and syntactical structure, to make it easier to mentally store the information. Third, we must encode the information into better signs for our minds — images — and we must compose them in a systematic (while still imaginative) way, so as to preserve sentence structure and, thus, correct meaning. Fourth and last, we need to create a super-structure to hold all different pieces of information together, in a way that our natural association capacity may assist us in remembering.

Computer-assisted reading

Note-taking and marking the text are necessary practices for reading well, and a number of books and blog posts have been devoted to the subject. The sad truth, though, is that few people do it properly and even fewer use it for actually remembering the material read. The best readers do it almost unconsciously, through automatic procedures honed through years of diligent practice. The majority of us, however, struggle to find the best way to do it and eventually give up, resigned to simply reading to relax, to indulge in prose, to forget life’s troubles. The result is that we read badly. But far from reducing this problem to a simple lack of perseverance (although that is part of it), I believe software can do much more than allow us to highlight passages and take notes. Computer-Assisted Qualitative Data Analysis Software (CAQDAS), while not commonly used for ordinary reading, is a great start. If we could imbue it with the precepts of the Art of Reading, all the better.

Qualitative Analysis’ most basic principle is called coding. While entire books have been written about this technique, it means, in essence, to select certain snippets of text and to label them with meaningful keywords or even short phrases. The idea is that, when coding is done well, not only does the inherent structure of the data emerge, but new explanatory theories and interpretations may also be attained. Of course, this is not accomplished by simply labelling important passages, but by reasoning about them, analysing their inter-relationships, and iteratively refining the codes to structure them into more meaningful constructs. CAQDAS, such as Atlas.ti, provide a number of functionalities to boost coding and, thus, understanding of texts. Why not use them for reading?
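To make the idea concrete, here is a minimal Python sketch of what coded quotations could look like as data; the class and field names are my own illustration, not the data model of any actual CAQDAS.

```python
from dataclasses import dataclass, field

@dataclass
class CodedQuotation:
    text: str                                       # the selected snippet
    codes: list[str] = field(default_factory=list)  # keywords or short phrases
    note: str = ""                                  # the reader's own comment

quotes = [
    CodedQuotation(
        text="There is no knowledge without remembering.",
        codes=["memory", "knowledge"],
        note="Core thesis of the post.",
    ),
]

# Grouping quotations by code lets the structure of the text emerge,
# which is the starting point for iteratively refining the codes.
by_code: dict[str, list[CodedQuotation]] = {}
for q in quotes:
    for c in q.codes:
        by_code.setdefault(c, []).append(q)
```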

Writing is an art and, thus, encompasses a lot of creativity and imagination. But, like any art, in order to be successful, it must be grounded in fixed principles and rules, guidelines of form, structure and style. To better understand writing, we must have at least a tacit grasp of these inherent rules. Now, if there are rules, they can, to some extent, be coded into working software. A number of computer techniques today succeed (in varying degrees) in parsing texts based on their inherent structure, using state-of-the-art Natural Language Processing combined with Machine Learning techniques and grounded in theoretical bases such as Rhetorical Structure Theory and Speech Act Theory. By combining subjective coding while reading with automated, objective text understanding through computational algorithms, the reading experience may be greatly enhanced. It could become easier to select the important terms, quotations and arguments from the text, and the tool could also enrich them automatically with Web resources, incorporate the reader’s comments, and so forth.
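As a small illustration of the automated side, the following sketch uses spaCy, an NLP library I am assuming here for the example; it exposes each sentence’s grammatical skeleton, from which a reading tool could pre-select candidate terms:

```python
import spacy

# Assumes the small English model has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("The prosecutor has said that the defendant killed a man by poison.")

for sent in doc.sents:
    # Dependency labels expose the sentence's grammatical skeleton.
    for token in sent:
        print(token.text, token.pos_, token.dep_, token.head.text)
    # Noun chunks are rough candidates for "important terms" to code.
    print([chunk.text for chunk in sent.noun_chunks])
```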

Moreover, the final collection of selected quotations, codes and user notes could be post-processed, either automatically or semi-automatically, in order to be structured into a hierarchical network — a graph of quotations. Such a graph would have at its base level all the resultant material organised into a meaningful sequence, and at each successive level upwards, chunks of the material would be combined and generalised into new quotations representing their unity. I argue that such a hierarchical organisation would be much more conducive to proper memorisation, but I know it sounds much simpler than it will be in reality. Further below, I elaborate on how that could be done.
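A rough sketch of such a graph of quotations, with purely illustrative content, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class QuoteNode:
    text: str                     # a quotation, or a generalisation of several
    children: list["QuoteNode"] = field(default_factory=list)

# Base level: the selected quotations in their meaningful sequence.
leaves = [
    QuoteNode("Coding labels snippets of text with keywords."),
    QuoteNode("Refined codes reveal the inherent structure of the data."),
]

# Each upper level combines and generalises a chunk of the level below.
root = QuoteNode("Qualitative coding turns reading into structured knowledge.",
                 children=leaves)

def show(node: QuoteNode, depth: int = 0) -> None:
    print("  " * depth + node.text)
    for child in node.children:
        show(child, depth + 1)

show(root)
```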

Sentence simplification

Mnemonic memorisation is classically divided into two basic approaches. Memory for things (memoria rerum) aims at remembering the gist of the subject-matter. Usually, a portion of text (or other kind of information) is represented by one single image, complex or not, that provides cues for recalling the original text. Memory for words (memoria verborum, or verbatim memorisation) aims at storing every single word in memory, therefore demanding at least one image for each word and a lot of care to remember inflections, proper names, verb tenses, etc. In spite of this dichotomisation, however, I see the matter as a continuum of approaches where memory for words lies close to one extreme and memory for things close to the other. Memory for words is not an extreme itself, because “memory for syllables” is possible (especially for difficult names) and even “memory for letters” might be needed (in the case of acronyms). But the common, more useful extreme is indeed memory for words. Now, it is even harder to set an extreme for memory for things since, assuming an ontological view, every concept is a child of a super-concept, up to the point that all concepts are “entities” of some sort. Therefore, simply remembering “entity” (as useless as it may sound) could be considered the extreme of the memory for things approach. However, I am sure some will argue that we should remember “God” instead, and still others would vote for “Justin Bieber”. For these reasons, although in theory any proposal for a systematic mnemonic approach should allow any level of memorisation, I shall focus on an intermediary approach based on simplified sentences.

There is a huge discussion about whether we use language to represent knowledge in our minds, whether we think in words or images, how memories are represented. Leaving this fascinating discussion for another time, though, I shall take the standpoint (even if extremely naive and uninformed) that “truth” always lies somewhere in the middle ground. There is no doubt that images aid memory, even if they are not its intrinsic building blocks, but we use language all the time, so I believe we must think in terms of it somehow — memories, at some point, must borrow from language structure and, thus, sentence structure. I propose that sentences should be the units of knowledge that we must strive to extract and memorise from texts.

But we sometimes conjure strange sentences (I do most of the time), which are far from ideal for memorising. Sentences should have few clauses (ideally only one), simple words, and should adhere to the sentence structure we are most used to (SVO, or Subject-Verb-Object, in the case of English). Moreover, even if a quotation from a text is written in perfect English, an imperfect English based on our own words is much more memorable. Therefore, once the material to be memorised is extracted, it should be further processed and converted to simpler forms.

There are many simplified versions of English, but I believe that Charles K. Ogden’s “Basic English”, composed of just 850 words, is the best known. There is even a “Simple Wikipedia” where all articles, although not strictly adhering to just 850 words (or even the extended 1,500-word version), strive to use only simple sentences. By simple sentences, they mean sentence structures that don’t veer too far from the SVO model. We should always pursue simple and clear ways to express ourselves while taking notes and summarising the texts we read, but there are also many computational tools to assist us today. Great advances have been made in Natural Language Processing techniques such as sentence parsing, text simplification and automated summarisation, all of which could be embedded in the “reading tool” as a post-processing unit. The motivation is to prepare the material for the next phase, which is the hardest and most “mnemonic” of all.
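As a toy example of such a post-processing unit, this sketch merely flags words that fall outside a restricted vocabulary; the word-list file name is hypothetical:

```python
def load_lexicon(path: str) -> set[str]:
    """Load a restricted word list, one word per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_hard_words(sentence: str, lexicon: set[str]) -> list[str]:
    """Return the words that fall outside the restricted vocabulary."""
    words = [w.strip(".,;:!?").lower() for w in sentence.split()]
    return [w for w in words if w and w not in lexicon]

# "basic_english.txt" is a hypothetical file holding Ogden's 850 words.
lexicon = load_lexicon("basic_english.txt")
print(flag_hard_words("The defendant was accused of poisoning a man.", lexicon))
```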

Encoding

The simplified sentences must be mnemonically encoded — “mnemocoded” — into images in order to be remembered. For maximum efficiency, words should have pre-established images associated with them, an “image lexicon” to be used repeatedly. There are many who are against this idea; even Frances Yates, in “The Art of Memory”, agrees with the author of the Ad Herennium that it is stupid. But I disagree. Language works because it is conventionalised — imagine what would happen if we tried to concoct the best word every time we needed to convey a given meaning! Images should be pre-established; what we will do with them to make them memorable is another matter.

Now, how should we come up with such images in the first place? The central factor is the word’s concreteness. Concreteness is a psycholinguistic attribute of words that has been studied for quite a long time (see e.g., Paivio, 1968 and Brysbaert et al., 2014). If you can easily imagine yourself grabbing a thing and actually feeling it with as many of your senses as possible, that’s a concrete thing. Any word (especially an abstract one) should be translated to the most concrete word possible. This is far from trivial, but three main methods are commonly applied for mnemocoding words, and I’ll use this post to clearly distinguish between them. In the process, I hope also to establish some (needed) nomenclature.

The first method is sound resemblance, where similar-sounding words are looked for, irrespective of their meaning. So, the number “3” can become a “tree”, and the verb “to bear” can become the animal “bear”. The second method is semantic relatedness, where we seek to retain as much as possible of the word’s inherent meaning. We might use a “heart” to mean love or feeling, and we might use an “hourglass” to mean “time”. The third one, free association, can be confused at times with semantic relatedness, but there is a marked difference: free association is anything that comes to our mind when reading or hearing a given word. We might read “time” and think of an “hourglass”, in which case the result would be semantically related, but we could also think of a specific “It’s showtime!” movie scene or some personal event from long ago. In a way, free association encompasses both other methods, yet it is more general than the two of them together. A possible “fourth method” can be seen as any combination of the aforementioned ones. The user tarnation of the Art of Memory forum is a master of this technique. His ingenious examples include:

“reason” – a personified raisin positioned like Rodin’s The Thinker -> (sound resemblance + semantics)
“economical” – an egoistic gnome shopping for bargains -> (sound resemblance + semantics)
“somewhat” – an electric meter (sum of watts) -> (sound resemblance + free association)
“unwise” – a bunch of stupid thugs, that is, (un)wise guys -> (sound resemblance + free association)
“recursive” – jogging on a Moebius strip -> (semantics + free association)
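A sketch of how an image lexicon could store such encodings, reusing the examples above (the data structure itself is my own illustration, not tarnation’s):

```python
# Each entry maps a word to a pre-established image and the method(s)
# that produced it, reusing tarnation's examples from the list above.
IMAGE_LEXICON = {
    "reason":     ("a personified raisin posed like Rodin's The Thinker",
                   "sound resemblance + semantics"),
    "economical": ("an egoistic gnome shopping for bargains",
                   "sound resemblance + semantics"),
    "somewhat":   ("an electric meter summing watts",
                   "sound resemblance + free association"),
    "recursive":  ("jogging on a Moebius strip",
                   "semantics + free association"),
}

def mnemocode(word: str) -> str:
    entry = IMAGE_LEXICON.get(word)
    if entry is None:
        # No conventionalised image yet: encode it manually (by sound,
        # semantics or free association) and add the result to the lexicon.
        return f"<no image yet for '{word}'>"
    image, methods = entry
    return f"{word} -> {image} ({methods})"

print(mnemocode("reason"))
print(mnemocode("unwise"))
```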

We are talking here about each word contained in a sentence and how to encode it, but how do we encode the entire sentence? In other words, how do we preserve sentence structure? A straightforward answer (at least once you are acquainted with mnemonics) is to use the link method. The first word-image would be linked to the second one; the second word-image would also be linked to the third one, and so forth. This would form a chain of linked images (a linked list, in computer science jargon) representing the entire sentence. I don’t like this approach because it lacks efficiency — a sentence with seven words would demand seven disjoint images with six pairwise relations (links). I have another idea, albeit one mostly untested so far, which is based on what I like to call semantic scenes.
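For concreteness, here is the link method rendered literally as a linked list, with illustrative images:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageLink:
    image: str                        # the mnemonic image for one word
    next: Optional["ImageLink"] = None

def build_chain(images: list[str]) -> Optional[ImageLink]:
    """Link the images pairwise, last to first, into a chain."""
    head: Optional[ImageLink] = None
    for image in reversed(images):
        head = ImageLink(image, head)
    return head

# Seven words would need seven images and six links; here, five and four.
chain = build_chain(["defendant", "bedside", "cup", "tablets", "ram"])
node = chain
while node is not None:
    print(node.image, "->" if node.next else "")
    node = node.next
```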

The idea is to form a scene where each component-image corresponds to a word mnemocoded by semantic relatedness and has a defined role in the overall scene. The aim is to achieve better compression of the data to be memorised while making it more memorable. This is far from an original idea, just one that is not much used in current mnemonic practice. The most classical source of the Art of Memory, the Ad Herennium, provides the best example of it, when the author “mnemocodes” a lawsuit. He states the case to be memorised as:

“The prosecutor has said that the defendant killed a man by poison, has charged that the motive of the crime was to gain an inheritance, and declared that there are many witnesses and accessories to this act.”

Then, he goes on to describe the mnemonic:

“And we shall place the defendant at the bedside, holding in his right hand a cup, in his left, tablets, and on the fourth finger, a ram’s testicles. In this way we can have in memory the man who was poisoned, the witnesses, and the inheritance.”

I want to focus on the visual aspect of it all, instead of just creating crazy associations. I want to strengthen the visual impression of the scene, striving to make it more magnificent or more dramatic or scarier. I was most impressed when I first tried mnemonic techniques and was able to memorise a deck of cards — I could “see” the images, and that amazed me! I want to conjure impressive images — scenes — as artistically as possible, almost as if they were the climactic scene of the evening’s presentation in an opera house. We see all kinds of nonsensical associations in, say, Animaniacs cartoons, but that doesn’t mean they are memorable. We use such associations for short-term memorisation, like memory champions do for remembering hundreds of cards in an hour, but I don’t think that’s the best approach for long-term knowledge. Moreover, if an opera or a Broadway show or a great movie can be so unforgettable, why not try to emulate their awesomeness through another art, the Art of Memory? We can be producers and directors and even actors of the most fascinating stories, so why restrict ourselves to silly ones? I don’t think it is possible, or even desirable, to eliminate the freedom of conjuring whatever image we deem best, but I believe the Art of Memory can be greatly enhanced by a working knowledge of the Visual Arts.

If we could create distinct visual patterns for mental images, then we could retain the original sentence structure just by the visual composition of our images. What would indicate the part-of-speech of the component-images of a scene would be their arrangement. I believe the principles of art composition, as in painting or drawing, can be used to great effect here; even the “Visual Language of Comics” might provide rules for “mental image composition” — a mental grammar — that would retain sentence structure, while still allowing our imagination enough freedom to consolidate memories by being as nonsensical and exaggerated as possible. This is all very far-fetched, I know, but a stimulating objective to pursue nevertheless.

The “mnemograph”

So, we have read a book well by choosing its fundamental passages and adding any personal notes we wish; we have simplified and organised them into a graph of quotations; and we have converted each node (sentence) into a semantic scene made of mnemonic images. Now it is just memorisation, right? Not quite. The problem is that every level and every node of the graph is completely disconnected from the others. Remembering one node doesn’t assist us in remembering the adjacent or the subjacent ones. The “graph” of quotations is not really a graph: it is a hierarchy of quotations, where the edges connecting the nodes are not explicitly defined — they are just generic parent-child relations. We must connect the dots if we want to make this entire super-structure memorable, but therein lies the problem.

Here is where I would really like to bring memory palaces into the equation. I would like to turn each semantic scene into a memory palace and place the children-scenes in its loci. To do so, I would need to create imaginary memory palaces on demand and set the number of loci to match the number of children-nodes. While this is probably possible, I haven’t found an operational method for doing so. Maybe it’s just my lack of imagination, but I can’t. And using real memory palaces is out of the question: it is simply not scalable enough. Of course, we could still generate hundreds of thousands of real loci beforehand, but even so, organisational issues would accumulate, to say nothing of the huge time demand. The truth is: while current mnemonic feats attest to the unquestionable efficacy of memory palaces, I don’t think they are efficient enough for long-term knowledge storage. Thankfully, tarnation (the forum user) has come again to my rescue.

He has mentioned the idea of using a word’s letters or syllables to generate pegs to which subsidiary information could be attached. He uses specific types of pegs that can serve as loci in memory palaces built on the fly. This is the kind of on-demand memory palace that I have been dreaming about, but the visualisation skills it demands are still beyond my level. Furthermore, the restriction of pegs to certain types might hamper the method’s scalability. I have decided to use his idea of syllabic pegs as a starting point, while leaving the “memory palace” part of it out of the equation; hopefully, I’ll get back to it someday, but not now.

Basically, the problem is how to link a parent semantic scene to its children semantic scenes. A semantic scene refers to a sentence, which, in turn, is composed of words and syllables. Each syllable can, thus, be used as a peg linking to one child semantic scene. Ideally, we should maintain a “peg dictionary” and select from there an image for each peg. This peg-image will then be mnemonically linked to the scene. However, if the peg modifies the entire scene, we might lose the scene’s inherent semantics, thus hampering its decoding into the meaningful information we had memorised. To avoid that, we should select one specific image from the scene and link just that to the peg. By doing so, we clearly separate the edge from the nodes in the graph — the edge is the peg-image link, whereas the nodes are the intact semantic scenes.

To choose such an image on a case-by-case basis, the concept of the main event (UzZaman et al., 2011) could be used to add some automation. The main event can be established using machine learning algorithms based on (among other things) the parse tree of the sentence, but it can be seen simply as the main keyword of a sentence. So, to go from one parent node to its first child, the sequence of recalls would be something like this: think of a given subject and recall its main-event image (the easiest part to remember naturally); the main event is the cue for recalling the entire semantic scene; the semantic scene reminds you of its description in words; the first syllable of the first word has a peg (retrieved from a peg dictionary), which is mnemonically linked to another image, the main event of the child-node; this main event will then be the cue for recalling the entire semantic scene and, thus, the description of the child-node in words. This linking procedure continues until the entire subject-matter has been reduced to a fully connected graph. The figure below is a conceptual graph-based representation of what has just been said, which will hopefully facilitate understanding; further posts will go deeper into this subject.

A "mnemograph" for remembering. A semantic scene is linked to its children-scenes by means of linking language-based images (black nodes) to semantic-based images (dark grey nodes). Each syllable of the underlying quotation corresponds to a peg that is linked to the main event of the subjacent semantic scene.

A “mnemograph” for remembering. A semantic scene is linked to its children-scenes by means of linking language-based images (black nodes) to semantic-based images (dark grey nodes). Each syllable of the underlying quotation corresponds to a peg that is linked to the main event of the subjacent semantic scene.
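In code, the recall walk through such a mnemograph might look like the following sketch; the scenes, the peg dictionary and the naive syllabification are all illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticScene:
    sentence: str        # the simplified sentence behind the scene
    main_event: str      # its most memorable keyword-image
    children: list["SemanticScene"] = field(default_factory=list)

# Hypothetical peg dictionary: one image per syllable.
PEG_DICTIONARY = {"pro": "a golf pro", "cha": "a chalice"}

def syllables(sentence: str) -> list[str]:
    # Naive placeholder: real syllabification needs a hyphenation library.
    return [word[:3].lower() for word in sentence.split()]

def recall(node: SemanticScene, depth: int = 0) -> None:
    pad = "  " * depth
    # The main event cues the whole scene, which cues the sentence.
    print(f"{pad}main event '{node.main_event}' cues: {node.sentence}")
    # Each successive syllable-peg links to the next child's main event.
    for syllable, child in zip(syllables(node.sentence), node.children):
        peg = PEG_DICTIONARY.get(syllable, "<make a peg>")
        print(f"{pad}syllable '{syllable}' -> peg {peg} -> '{child.main_event}'")
        recall(child, depth + 1)

root = SemanticScene("prosecutor charges poisoning", "poison cup",
                     children=[SemanticScene("motive was inheritance",
                                             "tablets")])
recall(root)
```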

*

As I read and think about all these subjects that I deem related to my memorisation purposes, I slowly come to understand my own idiosyncrasies better: why am I so crazy about this “systematisation” of mnemonics? I am beginning to understand (or to accept) that I always crave logical ways of doing things — I need patterns (tarnation would say that’s because I am an INTJ). And what is done in the realm of artificial intelligence/knowledge representation is to reduce the workings of the world into their constituent parts, to find the patterns that arise from them and to formalise them into a precise language. In doing so, these fields allow “processed information” to be reused in countless ways. Their main aim is for computers to use such information, but I argue that there are many other “humanly” uses too. A formal mnemonic representation of knowledge may one day pave the path for a system of remembering it all.
