MOSAIC (Model of Syntax Acquisition in Children) is a variant of the CHREST architecture, with some of its learning mechanisms tailored to language learning.
MOSAIC learns words and sequences of words. As in CHREST, these sequences are stored in a discrimination network, with individual words at the top and longer sequences at deeper nodes. The network is trained by exposure to actual speech (for example, speech that a parent directs at a child). Output is generated by tracing down through the network: the sequence of words encountered along a path forms a simulated utterance.
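The structure described above can be sketched as a trie over word sequences. This is a minimal illustration, not the MOSAIC implementation: the class and method names are invented for this sketch, and it omits MOSAIC's end-of-utterance bias and probabilistic node creation.

```python
class Node:
    """One node in the discrimination network."""
    def __init__(self, word=None):
        self.word = word       # word labelling the link into this node
        self.children = {}     # word -> child Node

class DiscriminationNet:
    def __init__(self):
        self.root = Node()

    def learn(self, utterance):
        """Trace the utterance through the network; at the first
        unfamiliar word, create a single new node (one discrimination
        per exposure) and stop."""
        node = self.root
        for word in utterance:
            if word in node.children:
                node = node.children[word]
            else:
                node.children[word] = Node(word)
                return

    def generate(self):
        """Simulate output by tracing root-to-leaf paths; each path's
        word sequence is one simulated utterance."""
        utterances = []
        def walk(node, prefix):
            if not node.children:
                utterances.append(prefix)
            for word, child in node.children.items():
                walk(child, prefix + [word])
        for word, child in self.root.children.items():
            walk(child, [word])
        return utterances
```

Because each exposure adds at most one node, the same utterance must be heard repeatedly before the network can reproduce it in full, which gives the model its gradual, input-driven developmental profile.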
A few specific features of MOSAIC are that:
- nodes store the information required to reach them
- learning is biased towards the end of utterances, so a word is only learnt if the words that follow it in the sentence are already known (a bias supported by psychological evidence)
- new nodes are created with a probability given by a Node Creation Probability formula.
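The last two features above can be sketched together. This is an illustrative sketch only: the class names are invented, the sigmoid used for the node creation probability is a placeholder (MOSAIC's published formula differs), and sequences are held in a flat set rather than a full discrimination network.

```python
import math
import random

def node_creation_probability(exposures, midpoint=5.0, slope=1.0):
    """Placeholder sigmoid: creation becomes likely only after a
    candidate node has been seen repeatedly (NOT MOSAIC's formula)."""
    return 1.0 / (1.0 + math.exp(-slope * (exposures - midpoint)))

class EndBiasedNet:
    def __init__(self):
        self.known = set()     # learnt word sequences (utterance suffixes)
        self.exposures = {}    # candidate suffix -> times encountered

    def learn(self, utterance, rng=random.random):
        # Scan suffixes from the final word backwards: the end-of-utterance
        # bias means a word is only learnable once the words following it
        # are already known.
        for i in range(len(utterance) - 1, -1, -1):
            suffix = tuple(utterance[i:])
            if suffix in self.known:
                continue
            rest = tuple(utterance[i + 1:])
            if rest and rest not in self.known:
                break  # following words not yet known; cannot learn here
            count = self.exposures[suffix] = self.exposures.get(suffix, 0) + 1
            if rng() < node_creation_probability(count):
                self.known.add(suffix)
            else:
                break  # earlier words depend on this suffix being known
```

With a real random source, frequent utterance endings are learnt first and earlier words are filled in over later exposures, mirroring the progressive lengthening of children's utterances that MOSAIC simulates.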
A full list of MOSAIC-related publications is available at http://www.chrest.info/fg/bibliography-by-topic.html#MOSAIC