Watch with Mother…and with Dad, Brother, Sister, Nan, Grandad…

When considering the use of technology with young children today, it is worth considering how it all began. The BBC launched its first TV programme for under-fives in 1952: it was called Watch with Mother. In fact, one of my oldest memories is of watching The Flower Pot Men with my Mum. Yet research from the UK, the USA and Australia now shows that many infants watch TV almost exclusively on their own. The most significant challenge for early childhood technology designers has often been considered to be that of providing the child with independent access, so that they can play (and learn) on their own. Yet the consequences of encouraging such passive viewing or gaming, as increasingly identified by the research, are stark.

One major study found that 10% of children who were watching an average of 2.2 hours of television per day at age one, and 3.6 hours at age three, had attention problems at age seven. Many more studies show correlations between TV viewing and obesity. Yet it doesn’t have to be like that. Research shows that when parents watch programmes with their children, the children tend to watch less television, and they also gain more from the experience. The same principles apply to other ICT applications in the home, and our experience with Made in Me showed us that computer software can be developed specifically for the purpose of adults and children sharing the playful learning experience.

The key lesson to be learned from the case of television is in fact transferable to all screen-based media: where there have been problems, they have not been the result of the media or the technology itself, but of the way in which they have sometimes been used or designed for the wrong purposes.

What is AI?

To appreciate the challenges and opportunities afforded by artificial intelligence (AI), it is first important to recognize that the advances we are currently seeing are not the result of new technological or computer science innovations such as increases in digital processing power or memory capacity. In the development of AI, the challenge has always been to create machine learning systems that model the capability of the human mind, and the recent AI breakthroughs are the result of our better understanding of human cognition. In fact, AI may be considered to have increasingly provided a practical laboratory test bed for the development of our cognitive science.

The AI revolution that is now taking place offers computer applications that provide human-like performance in a wide range of contexts. A major benefit is that computers can recall and process vast quantities of data, far more than any single human. Computers also don’t get bored, tired, or distracted, and they can work around the clock.

AI researchers have been teaching AI to learn as children learn (Hutson, 2018), and as early years educators, one way that we can begin to appreciate how the latest versions of generative AI actually work is to consider what we know about how children learn. Children are immersed in an ocean of language from birth, and as they come to interpret (recognise the significance of) words being spoken in particular sequences, they also progressively learn to pay closer attention to the ways in which particular words are commonly used (the synonyms, the common grammatical rules applied, the use of tense). This general knowledge about how language is commonly structured supports their emerging comprehension and understanding of meaning (Buckley, 2003, p. 12). This is a process that accelerates significantly as they develop a reflexive self-attention to words that often occur in combination and in particular contexts.

As early years educators, we also know a lot about how children learn to read, and how we all learn through reading. We all know there is a lot more to reading than learning the phonemes of letters, and that our comprehension of a text, the meanings that we derive from it, accumulates as information is sequentially introduced and combined. Paulson and Freeman (2003) tracked the eye movements that are used in the process of reading. Their records show that we don’t ‘read’ the letter sounds, and we don’t even look at each word: we work out what the text says as we go along. We look for meaning in the text.

Apparently we typically skip over 15% of all content words (nouns, verbs, adjectives and adverbs) and 65% of all function words (prepositions, conjunctions, articles, and pronouns) (Paulson and Freeman, 2003).

As we continue to read, our short-term memory must be continually employed to retain key words if we are to understand, for example, the significance of a bike being ‘new’ and ‘red’ to Mark’s subsequent report of it being stolen. If later in the text we learn that Mark sees his bicycle in a neighbour’s garden, our understanding will have required our short-term memory to have been sequentially revised and elaborated to include the fact that the new ‘red bike was stolen’. These feedback loops, which are required for comprehension, are modelled in the literature by ‘recurrent neural networks’, and they are sequential in nature. But some of the information is more important than the rest, and we might be distracted or confused by information related to the park, the tree and the slide. The amount of information we can temporarily hold in consciousness at any given time is also severely limited. We therefore need to reduce the ‘cognitive load’ (Sweller, 1988) by paying special ‘attention’ to some of the information, and by being ‘self-attentive’ in identifying (or predicting) the relevance of new information to our emerging understandings. This is a capability that progressively develops in early childhood.
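The sequential, feedback-loop character of this process can be sketched very loosely in code. The toy Python sketch below is an illustrative assumption, not a real neural network: the function name, the example words, and the four-item memory cap are all invented for the example. It shows a running ‘short-term memory’ being revised word by word as a sentence is read, with earlier items dropping out as the limited capacity is exceeded:

```python
def recurrent_read(words, capacity=4):
    """Toy 'recurrent' reader: a severely limited short-term memory
    (state) is revised word by word, so recent words can influence
    how later words are understood, while older ones fall away."""
    state = []   # the current, limited working memory
    trace = []   # a record of the memory after each word is read
    for word in words:
        # Add the new word, keeping only the most recent items
        # (an analogue of limited 'cognitive load').
        state = (state + [word])[-capacity:]
        trace.append(list(state))
    return trace

history = recurrent_read(["Mark's", "new", "red", "bike", "was", "stolen"])
```

In a real recurrent neural network the ‘memory’ is a vector of numbers updated by learned weights rather than a list of words, but the feedback structure, in which each step’s output depends on the step before, is the same.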

As Piaget showed us, while a child may initially assume (learn) that there is a meaningful (transductive) relationship between coinciding yet entirely independent events, it will only be through future conflicting experiences that their learning can develop. A small child may, for example, believe that ‘dogs live in the park’, and it will only be as the result of a conflicting experience, e.g. of finding a dog living in a neighbour’s home, that this invalid assumption is corrected. If the child is especially surprised, it will be due to the frequency of the prior confirming experiences. In advanced machine learning, confirming and conflicting data are provided by the very large data sets that are input in ‘training’. The GPT-3 AI behind ChatGPT, which can write jokes, poetry and computer code, and can engage in conversations with you, was trained on almost 45 terabytes of text data that included almost all of the public world wide web. It identifies and applies the natural patterns in language.

The revolution that has recently occurred in the functioning of AI systems is the direct result of our developing understanding of these cognitive processes. Put simply, the computer neural network has been designed to provide multiple computational units that are programmed to recognise patterns in the data provided. These units are organised in layers, with each successive layer providing patterns that summarise the most significant parts of the input data, passing this information on to the next layer. This is referred to as a ‘transformer’, and it provides the core technology in AI systems such as ChatGPT and Bard. It is in this sense that we can see that these developments in information processing have been inspired by the advances in cognitive neuroscience, and that they are now providing a laboratory context for the further development of our biological understanding of how our brains actually work.
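A minimal sketch of the ‘self-attention’ step at the heart of such a transformer layer may help make this concrete. The Python example below is an illustrative assumption, not the actual ChatGPT or Bard code: the toy two-number ‘word vectors’ and function names are invented for the example. Each word vector is replaced by a blend of all the vectors in the sentence, weighted by how strongly each one relates (by dot product) to it:

```python
import math

def softmax(xs):
    # Exponentiate and normalise so the weights sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """Toy self-attention: each vector is replaced by a weighted
    blend of all the vectors, the weights reflecting how similar
    (by scaled dot product) each other vector is to it."""
    out = []
    dim = len(vectors[0])
    for q in vectors:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in vectors]
        weights = softmax(scores)
        blended = [sum(w * v[i] for w, v in zip(weights, vectors))
                   for i in range(dim)]
        out.append(blended)
    return out

# Toy 'word vectors' for a three-word sentence.
words = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(words)
```

Stacking many such layers, each summarising the most significant patterns in the output of the layer below and passing them on, is essentially what the transformer architecture does at scale.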

Researchers have recently identified the role of astrocytes, non-neuronal cells that, along with neurons, may function biologically in much the same way as these computational transformers. As Dmitry Krotov, a research staff member at the MIT-IBM Watson AI Lab, has put it: “This is neuroscience for AI and AI for neuroscience.” All of this has huge potential for the further development of AI, and also conceivably, through neural interfacing, for unleashing the full computational potential of the human brain.

We have evolved as innate pattern finders: we seek patterns, identifying meaning in their correspondence with previously experienced patterns, and we continually seek and find patterns in the patterns that we accumulate. The patterns may be images, sounds, or any other sort of sensory data. We are voracious in our constant search for meaning, and we are even occasionally conscious of the process: some random sounds are reminiscent of a tune; we cannot help reading the alluring advertising posters and car registration plates; it is the reason we see the young woman in the gestalt face of the old lady. Our ‘training data’ of accumulated experience is always our individual limitation, and one can currently only speculate about the potential of extending it with the posited direct neuro-interfaced access to the world wide web. While it may sound like science fiction, the first brain-computer interfaces (BCIs) are already being used by many people with disabilities to support basic communication and control in their daily lives. Elon Musk’s Neuralink implant company has recently gained approval from the US Food and Drug Administration (FDA) to carry out human testing. The current aim is limited to applying the technology to restore vision and mobility, but Musk has also argued that BCI could ultimately help ease concerns about humans being displaced by AI, that BCI offers the possibility of developing ‘superhuman intelligence’, and even the ultimate potential of ‘symbiosis with artificial intelligence’.

Meanwhile, the pace of technological progress in generative AI is astonishing. According to Kosinski (2023), for example, a theory of mind (ToM), the ability to infer the individual thinking of another human, may have already spontaneously emerged in the most recent large language model applied in ChatGPT, which it is said can already function in these terms at the level of a seven-year-old.

The Land of Me: Unfinished Business

I served as founding Research Director for the Land of Me plc from February 2009 until its dissolution in February 2018. The innovative pedagogic design that I created at that time for The Land of Me remains state of the art, having drawn upon the very latest established understandings of young children’s cognitive and affective development, and of the most effective roles that may be played by adults in supporting them. Since then, my work on schematic play, carried out within and beyond my development of the SchemaPlay Community Interest Company, has continued, and I am currently seeking a new opportunity to apply the insights that I have gained to the development of a new early childhood technology incorporating artificial intelligence (AI) systems in support of holistic and schematic play.
