All Our Posts

Valentines in the Classroom: Inclusive or Intrusive?

Prior to Valentine's Day, one of our kid's teachers sent home this:

"I have spoken to the students and have stated that if you bring a Valentine for one student you must bring one for everyone. This ensures that this day of celebrating friendship and caring makes everyone feel included. I will send students home with a list of classmate names today for spelling and to ensure no one is forgotten."

We went ahead and polled our community on Facebook for their opinions on this inclusivity request. What was striking was that the non-North Americans (mostly British in this sample) thought the request was crazily intrusive and that it had little to do with how they perceived the meaning of Valentine's Day. In the UK, valentines are sent only to people you have a crush on, and even then anonymously.
The North American men who replied generally expressed some level of exasperation with the classroom appropriation of the event. However, the North American women were almost unanimously supportive of the teacher's inclusivity angle - that the day was about general friendship, not romantic love - and also took issue with the fact that the teacher's intentions had been questioned in the first place.

We'll let you conclude from that what you will. Here is at least some of what came back - do forgive some of the language!

Female "Romance isn't about everyone feeling included, it's about specialness. Children must be allowed to express preferences. I'd sooner not allow it altogether than attempt this kind of PC-gone-mad mess."

Male "Eejit. I'm not even sure I'd be as kind as to suggest 'good intentions'."

Male "Absolute bollocks! Though you could take him at his word and send one card for someone special and a second, "...one for everyone.", with all the other kids' names on. That might teach him to give instructions with more clarity, too."

Male "LOL. how about the teacher buys a gift for everyone since she's the one deciding to take PC to a whole new level. Also, next time she rewards a student for good work she'd better make sure to reward everyone so that no one gets left out."

 Female  "True story: I was looking through a box of childhood school things, and found a cute handmade card by a 3rd grade classmate that read 'Bonne Fete - Je t'aime beaucoup'. I was both touched and puzzled, as I didn't recall who that girl was. Then I found a second, and a third card, identically phrased, and realized the students were just following orders. That's all very nice and fair of the teacher, but it doesn't compare with the feeling of special treatment!"

 Male "It used to upset me a little when some years as a kid I didn't get a valentine, but this was far better than the years I received a consolation Valentine."

 Female "I always gave valentines to everyone in my class because I wanted everyone to feel special. I didn't 'hate' anyone in my class, and it didn't hurt me to write a few extra cards... But it wasn't because the teacher had told me to!"

 Female "This is dreadfully controlling of children's personal choices. To take a simple tradition and tamper with it in the name of friendship is ridiculous. I hope she is buying a card for each of the staff and the whole of her street. Yes some people are special to us that's what makes them our friend. Some people are mean or we have nothing in common with them. Therefore why should we buy or make them a card?"

 Female "You're all a bit harsh on the teacher who is having to deal with this ridiculous day! I mean, they're only 9...it's not really about the romance, just the coerced craftmaking. It's like having everyone-over-5'5"-day- 'only the taller shall celebrate, the rest of you just sit around and smile through your tears....better luck next year'. Down with Valentine's Day! At least she said "if" you bring Valentines....."

 Female "Wow! So it's a pain to have everyone feel included. Sheesh. This isn't the same as participation badges. This is about sharing a nice thing with classmates. And yes...some people need to be reminded how to play nice. You can all go back to excluding the weird kid on the playground afterwards."

 Female "I think the teacher made a good choice. We are all one. If there is a kid that isn't popular, can we know truly what the cause of that is? Community is something that should be instilled early in life. These children who are not popular, or don't "fit in" need MORE love and inclusion, not less. This isn't really just about valentines, it's more about teaching - inclusion, love, acceptance and community. Start early, so your children know that everyone is important, everyone is worthy, no matter what." 

 Female "Oh come on, give her a break!!!! At 9 the whole idea of Valentines and love is so far from their minds, maybe even repulsive!! Including all kids and making it about kindness, friendship and fun. Yes there may be cards written that are not genuine, however, learning the lessons of not giving a f*** what others think in grade 4 when you are feeling left out just seems cruel. I will tell my child if she doesn't want to go through the effort to write them for everyone, not to bother. If she has friends that are extra special, she can deliver outside of school. I imagine being the kid that gets no cards when everyone else gets many, would be crushing. Let's teach age appropriate lessons here, maybe not love, but maybe compassion and tolerance."

 Male "Yeah. When I was in my 20's I was always disappointed when all the women in the bar didn't pay attention to me. Good thing I had developed the internal fortitude as a kid because I only got one or two valentines cards. ha!"

 Female "To turn that around... I think it's a serious issue, but this solution doesn't help. I was the one kid who, despite us having that rule also in the 80's had an empty valentine envelope at school. Kids quickly turn that rule around to "everyone *except*..." Or you get one card from the kid who announces, I only gave one to her because I was forced to...
My personal opinion is that some exercise in compassion and community is far better than the free-for-all cruelty of valentines cards at school."

 Female "I feel like writing to her and telling her that unfortunately things are not always fair in life!"

Male "I think we should all like everybody's comment regardless of what it is they said. And we should all send a valentines card to each other to make it fair"

 Male "He or she obviously lacks the English viewpoint-i.e. All is fair in love, war and cricket!"

 Female "Classrooms can be very cruel places for some children so, yes I ask kids to bring for everyone. We change the focus from the love thing.... to community. This year we are not doing them at all. We are making cards for each other and doing a buddy activity"

Male "The key difference I think between UK and N.Am approach ...and hence the collective gasp at the teacher's instructions from the English/Australian posters here ...is that in UK (and Oz?) Valentine cards are STRICTLY anonymous. The thrill/fun of it if you actually get cards is that you don't know who sent them and you have to guess."

Male "In EU it's not like anyone is personally insulted by not getting a card from anyone else in particular. There is no requirement to send / give a card. Sure some kids might get none and some several, but as it is not remotely something that is turned into a class activity no one is keeping a strict tally - there is no Valentines league table. At worst you may get a bit of a hint to up your attractiveness ante/ be nicer/ take more baths / be less annoying / start wearing deodorant or shaving the nascent moustache etc."

Female "I think it's a good thing. It's a nice exercise to have to say something nice to everyone in your class. My mom always made me do a card for everyone, but, then again, she was a kick ass teacher like all of the incredibly dedicated teachers that grace the classrooms at our fine school trying their darndest to raise future nice humans."

 Female "Um yes. She grew up to be an adult who understood the concept of inclusiveness, and as a result, she is raising her son with that value as well, which is then passed on to my children who attend the same school and live in the same community. Obviously that skill wasn't solely cultivated by the inclusion of her classmates in Valentine's Day. But it did provide the opportunity for a discussion around the concepts of good will, and inclusiveness. It doesn't sound like it did her any harm. I'm not sure why you've got such a bee in your bonnet about this now. This ask has been requested by all of your boy's past teachers all the way back to preschool. Have you been charged with the Valentine card task? It is child craft labor as keona wisely pointed out. Maybe he doesn't hand them out this year at school, or he mails them."

Female "And how about a little sympathy for the mums that are utterly shite at crafts?! Thank you for providing me this forum to speak out about my feelings of inadequacy at having failed my first year in this terrible ordeal! In my daughter's K-class alone, about 250-300 Valentines pieces circulated, some of which were clearly made by a mother with little effort or thought from the child. Where's the authenticity in that? Surely, there are better ways to explore topics of love and friendship in a way that is sensitive to everyone."

It Ain't The Length, It's The Lecture

Some research out of the University of Rochester recently suggests that the optimal video length for ensuring engagement is about six minutes. This is not a cognitive study, simply an observation of how long students will sit through a video on edX before turning it off or dropping out.

Tell that news to any good classroom teacher and they will smile at you patiently. Obviously they break a lesson into small chunks through which students can shift focus between different tasks, challenges and break states. Ultimately the teacher is doing all they can to engage, and to keep students’ brains from trancing out (unless they are following Lozanov, in which case that is the desired state). A classroom teacher is focused on a plan (whether rigorously planned or intuitively sketched) that assembles tasks around targeted and specific learning outcomes. The relevant expertise here is not subject-matter expertise; it is the expertise of facilitation. S/he is playing with the students’ brains to enable learning to take place.

So a lesson is a learning-oriented construct. That is, it is not a lecture. A lecture is a format that evolved as a means of efficiently disseminating knowledge to the adult consumer of said knowledge, given the range of options in the pre-printing-press world. The burden of responsibility for learning was placed firmly on the shoulders of the consumer.

But wait! The lecture format is the mode for the video portion of MOOCs (or video learning in general) and therein is the issue here – not video itself.

Video lectures are lectures on speed. The tendency is to give a lot of information fast, without the more relaxed bonhomie of the bricks-and-mortar equivalent. And this is where I think the research here may be misleading. It ain’t the time span itself. It’s what happens in that time, and the demands that places on the working memory of the consumer.

Think of George Miller’s classic rough guide of a limit of seven (plus or minus two) points or informational ‘chunks’ that people can retain in working memory from immediately recent or concurrent input. This would suggest it ain’t the length, it’s the format.

The study’s recommendation to create ‘small, bite-sized pieces’ is well intentioned and true, but it does not address what a bite-sized piece means. It is not something to be measured in minutes and seconds, but in terms of gently sequencing, layering or scaffolding the information being disseminated in a way that engages the consumer’s faculties in a range of different ways. Let alone the steps then required to provoke the brain into investing its energy in creating the proteins necessary to actually retain a long-term memory.

Now that is actually a big deal. It is a philosophical shift that takes some of the burden of responsibility for the learning from the consumer back to the supplier. It puts the teacher-as-facilitation-expert, as opposed to only the teacher-as-knowledge-expert, back into the room. It is not difficult conceptually. It is what good teachers do every day. But it is tricky with current MOOC platform video capabilities.

It’s not just about cutting the video itself into bite-sized pieces, just as the secret to a good lesson is not to limit its duration to six minutes. The secret to a good lesson is to structure it into unfolding, scaffolded chunks that engage the learner without overburdening them. There is no reason why that cannot happen with extended video, given the right platform – one that can create active rather than passive learning. That has been exactly our mission at Learnbase.

Towards A Litmus Test For When To Issue An Open Badge.

Please note: This is an abridged version of a previous blog, focusing here purely on the issue of finding a litmus test for open badges.


In discussions around the development of an approach to Open Badges, the same worries and examples of poor practice that trivialize the value of a badge come up again and again.

This is why it would be useful to have a litmus test to judge the circumstances in which a badge should be awarded.
My approach to a solution would be to borrow, in part, Howard Gardner’s wonderful definition of (an) intelligence:

‘An intelligence is the ability to solve problems, or to create products, that are valued within one or more cultural settings.’  (Gardner, 1983/2003)

Rephrase this and we may have the basis (basis, mind!) of a litmus test for when a badge can be issued. Let’s call it a badge test:

A badge can be issued when the learner has done either of the following:

  • Solved a problem
  • Created or provided a useful product or service

To borrow an illustration from Pete Rawsthorne’s blog on badge system design, in which he represents (metaphorically and for the sake of simplicity) a tea-making skill hierarchy as such:

[Illustration: Pete Rawsthorne’s tea-making skill hierarchy – micro skills, such as selecting the tea bag, building towards the macro skill of making the tea]

Given the badge test, the three micro skills should not be ‘badgeable’, because it is doubtful whether they solve a problem or create a useful product/service. The selection of the tea bag is no use in itself until the macro event, the tea, is made (and then hopefully only if suitably satisfying). That is where the badge, in this metaphor, could be awarded. Issuing a badge for a micro skill that does not pass the badge test devalues the whole concept of open badges.
Something missing?
But wait! It would seem that completing, say, an arduous track event, winning a soccer tournament, the ability and discipline of meditating, or hiking a mountain trail would potentially warrant a badge. However, these do not directly solve a problem or create a useful product as per the badge test so far.
I would therefore add a third aspect to the test, considering whether the learner achieved a goal contributing to their physical, emotional or spiritual state of being. Add this to the mix and the badge test becomes:

A badge can be issued when the learner has done one or more of the following:

  • Solved a problem
  • Created or provided a useful product or service
  • Achieved a goal contributing to their physical, emotional or spiritual state of being
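
To make the test concrete: it reduces to a simple boolean check. Here is a minimal sketch (the function and argument names are mine, purely for illustration, and not part of any badge specification):

```python
def can_issue_badge(solved_problem: bool,
                    created_product_or_service: bool,
                    achieved_wellbeing_goal: bool) -> bool:
    """The badge test: a badge can be issued only if the learner
    has done one or more of the three things listed above."""
    return (solved_problem
            or created_product_or_service
            or achieved_wellbeing_goal)

# A lone micro skill (e.g. selecting the tea bag) meets no criterion:
print(can_issue_badge(False, False, False))  # False - no badge
# Making the satisfying pot of tea creates a useful product:
print(can_issue_badge(False, True, False))   # True - badge
```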

What is interesting about this formula is that it challenges us to rethink why and how we reward learning. It challenges us not just to give a badge where there was once a certificate, but either to redefine the assessment criteria or to provide an alternative or additional layer to the recognition of attainment. I like the latter alternative more because it does not set one side against the other, but rather offers simultaneously an alternative and an addition.
To see a real example of this in action at the (K end of the) K-12 level, please refer to my previous blog, below.

Using Open Badges To Give Focus And Meaning To Descriptor-Based Assessment

One thing that excites me about the Open Badges movement is how it can help to remedy, rather than merely challenge, the poverty of the descriptor-based assessment that has been the norm these last twenty years. To ensure this, we need to insist that badges, especially in this early phase, escape becoming merely descriptor-based themselves. To achieve this, we need to find a simple but universal litmus test to help us gauge when an Open Badge can be issued.

What led me to thinking about this recently was my son Jack’s terrible school report. Wait! Before you offer me a sympathetic chin clench, let me clarify. My boy did just fine, I think, as far as I can tell. It was the actual report format that was terrible: a list of forty-five descriptors so decontextualized as to be essentially meaningless. A list from which any sense of my child’s learning, or any meaningful communication of his merits and needs, is strangely absent. A list that the teachers find as uninspiring to compile as I do to read.

By way of example, the Grade One (term 1) Social Studies section tells me that my child:

  • Is able to identify various Canadian symbols
  • Can locate Canada on a globe or map of the world
  • Is able to present information orally, visually, or in written form as assigned
  • Is able to identify ways to address problems at school (e.g., litter, taking turns with equipment)

The first two are fairly specific but, on reflection, not that thrilling in terms of possible application; the last two are so vague they render themselves redundant. They are meaningful only in terms of how they are being applied, what context they have been observed in, and whether my child perceives any value in these skills – but no such clues are given.

Essentially they are micro-skills bereft of any ‘macro’. They feel like asking for an app and just getting the code, with no place to paste it. Just as seeing computer code is useful to a coder but not to a general user, the descriptors may be useful to the teacher as micro-skills that pertain to some larger goal. They may help plan the route and the ground to be covered, but in themselves they predicate neither assessment nor reward in any meaningful way.

So when should a badge be issued?

In the threads of discussions around the development of an approach to Open Badges, the same worries and examples of poor practice come up again and again, especially in the context of K-12. That is: what are the circumstances in which a badge should be awarded? The worry is that a badge is given for too trivial an expectation – badges for not running down a corridor, for handing in homework, and the like. I would suggest that the above descriptors are no better candidates.

This is why we need a litmus test.

My approach to a solution would be to borrow, in part, Howard Gardner’s wonderful definition of (an) intelligence:

‘An intelligence is the ability to solve problems, or to create products, that are valued within one or more cultural settings.’  (Gardner, 1983/2003)

Rephrase this and we may have the basis (basis, mind!) of a litmus test for when a badge can be issued. Let’s call it a badge test:

A badge can be issued when the learner has done either of the following:

  • Solved a problem
  • Created or provided a useful product or service

The above-mentioned Social Studies descriptors fail to stand up to either of these requirements. To do so, they would need to be part of a larger task whose result is to solve a problem or create a relevant product/service. I am no primary specialist, so purely by way of illustration, imagine:

Task: The Grade One class is going to help the Kindergarten class learn about North America. There is a display on North America showing a map of Canada, the US and Mexico and pictures of Canadian, US and Mexican symbols on cards that can be stuck on the relevant country. But oh no! The symbols have all been mixed up! Using picture books on the three countries and information or course work from previous lessons the students, in groups, need to work out which symbols belong to which country and put them back in the right places. Each group then decides how to take turns explaining to the rest of the class their decisions and can change the placements if they choose to.

The cards and maps are decorated and displayed. The students then write instructions, based on set sentence patterns, that can be read out to the (imagined or actual) younger students to help them remember which country is which on the map (i.e. ‘Canada is the country that has the red and white flag’, ‘Mexico is the country with the pyramids’, etc).

This way, all four micro descriptors are covered, but within a context that solves a problem (the mixed-up cards) and creates a useful product or service (helping the younger learners learn). If the task is achieved, the badge can be given. If the task is only partly achieved and particular descriptors are not met, there is a clear context for the learner as to where each should be applied. Each descriptor is implicit in the task rather than being an abstract ability or proclivity. The learner can be guided, and the teacher’s work targeted, by the descriptors, which have become routes rather than destinations.

To borrow an illustration from Pete Rawsthorne’s blog on badge system design, in which he represents (metaphorically and for the sake of simplicity) a tea-making skill hierarchy as such:

[Illustration: Pete Rawsthorne’s tea-making skill hierarchy – micro skills, such as selecting the tea bag, building towards the macro skill of making the tea]

Given the badge test, the three micro skills should not be ‘badgeable’, because it is doubtful whether they solve a problem or create a useful product/service. The selection of the tea bag is no use in itself until the macro event, the tea, is made (and then hopefully only if suitably satisfying). That is where the badge, in this metaphor, could be awarded. Issuing a badge for a micro skill that does not pass the badge test devalues the whole concept of open badges.

Something missing?

But wait! It would seem that completing, say, an arduous track event, winning a soccer tournament, the ability and discipline of meditating, or hiking a mountain trail would potentially warrant a badge. However, these do not directly solve a problem or create a useful product as per the badge test so far.

I would therefore add a third aspect to the test, considering whether the learner achieved a goal contributing to their physical, emotional or spiritual state of being. Add this to the mix and the badge test becomes:

A badge can be issued when the learner has done one or more of the following:

  • Solved a problem
  • Created or provided a useful product or service
  • Achieved a goal contributing to their physical, emotional or spiritual state of being

Is there anything missing from this formula? I’ll be happy to hear.

Neil, Learnbase

Weary Of Online Learning ‘Environments’? Who’s For Dim Sum?

I’m growing weary of traversing online-learning environments. I’ve built them. I’ve used them. I’ve explored them. I’ve got lost in them. I’ve spent a lot of time wandering these environments, frequently spending way too much time finding my way to, through and back to the content and tasks when all the time I just want to explore the content itself. And I have expected a lot of learners to do the same in the environments I have designed.

As learning designers, I think we forget that, for the learner, the content is a new landscape of its own, and one in which the learner should be immersed. We should not expect them to learn how to navigate to each destination; they need to have arrived already. We’ve been sending learners out laden with tools and baggage, not inviting them in, unencumbered.

So I’ve been playing with a new metaphor, one that does not abandon the environment metaphor but rather sets up a simple and inviting camp within it.

Now, once upon a time, merchants on the Silk Road, tired of the journey, would stop at tea houses to rest, refresh and share. As travellers of myriad cultures sat shoulder to shoulder at tables, and wide-ranging tastes needed to be catered for fast and efficiently, there emerged the practice of carrying a range of enticing choices in small bamboo dishes to and through the tables, to be selected and eaten individually or shared, quickly, before the next choices came by.

Dim Sum!

This is the playful design brief we are using for the platform we are currently building at Learnbase / Agentic: an approach where we no longer expect the learner to traverse a landscape, but to sit comfortably, in good company if they so wish, and be served content they can get right down to selecting and enjoying.

So in practice what has this meant?

  • Less is more: Create as few pages as possible within the platform.
  • Simplicity: No flash, bells or whistles. Just a solid but textured interface – engaging, calming, and with each page keeping its integrity with the others.
  • Social embedding: Place discussion threads directly within learning events rather than in a separate dislocated forum area.
  • Open door: Drop in and leave as you need, the conversation keeps going.
  • Unencumber: Minimize the need for anything outside of the space while in the space.
  • Presentation: Seamless integration of video, slides and prompt questions in real time.
  • Choices: Allow the learner to focus in, zoom out, stay personal, go social, and respond formally or informally as suits their style and mood.
  • Service: Take the learning off the shelf and to the learner.

Funnily enough, or accordingly, as the metaphor has shifted, our company logo has transformed while staying just the same. I first saw it as symbolizing a base from which to explore the wilderness. Now it’s a space we are holding within that landscape:

[Logo image]

Avoid The Confusion Of Generational Cusps!

A little generational analysis goes a long way in managing teams of disparate ages. But the edges get kinda blurry. No two sources can quite seem to agree when the precise cut off dates are, which is only right given all the factors that can make one either at the heart or the toenails of the zeitgeist of your formative years.

So here is a quick and fail safe musical guide for identifying who is really who on those nebulous cusps.

Veteran or Boomer? Answers 'Do you like the Beatles?' with 'Oh yes. Well, until all that drug nonsense'? Veteran

Boomer or X? Winces at the term ‘concept album’? X

X or Y? Nods unconvincingly to a confiding ‘Jeez, it’s like punk never happened’? Y

Y or Millennial? Is it about vampires? Not Y.

10 Simple Suggestions for Online Instructional Design

Simply, I feel that online learning should be:

  1. community anchored
  2. peer assessed and peer negotiated
  3. expert assessed and expertise driven
  4. human as well as automated
  5. lateral as well as linear
  6. now as well as then
  7. brought to you, not you to it
  8. both map and terrain
  9. graceful, simple and beautifully connected
  10. beyond LMS

 

Applying Gardner’s Multiple Intelligences Theory to Merrill’s Principles in Learning Design

A few quick connections…

Merrill’s First Principle is that learning is promoted when learners are engaged in solving real-world problems. Gardner defines intelligence as ‘the ability to solve problems or create products that are valued within one or more cultural settings’. The first implication, when these two views combine, is that learning is promoted when an intelligence (or combination of intelligences) is being engaged.

The second implication would be that just because a real-world problem has been set, the learner is not necessarily engaged unless the problem matches their intelligence ‘proclivities’ – or at least, different learners may be engaged to differing degrees. An example here is the sheer number of logical puzzle-based tasks in language courses, which generally vastly outnumber tasks requiring (or developing) verbal-linguistic skills. While they may engage the more logical-mathematically inclined learners, and help to engage them with the subject, they render much of the course as interesting to the learner as a crossword or sudoku puzzle. This is itself rather hit-and-miss in terms of learner appeal, and it also clouds and confuses the presumed learning goal – using language communicatively – with the strategy being used to engage the learners with the content.

The overriding implication may therefore be to create frameworks for users to approach and solve a task in a number of different ways – to personalize the problem somehow. This goal is more realizable given Merrill’s definition of (or ambition for) a problem as a ‘whole’, ‘authentic’ task, which by definition should be approachable and solvable through a number of routes, activating and appealing to a range of intelligences rather than a specific one.
The Second Principle is one of activation: learning is promoted when relevant previous experience is activated – when it can be used as a foundation for new knowledge, and when learners can recall a structure that can be used to organize the new knowledge. Gardner’s theory provides a number of avenues for prior experience or knowledge to be brought to bear.
For example, when asking a group to create a presentation, learners can be invited to consider previous experience through a number of entry points: models of representing statistics visually (logical-mathematical and visual-spatial intelligences), using plain English (linguistic), appropriateness of music (musical), activating the intended audience (interpersonal), providing and comparing realia/real samples (naturalistic), contextualizing (existential), recommended approaches to note-taking (intrapersonal), etc.
Care, however, should be taken, as Merrill advises, to keep the learners on target, as irrelevant prior experience brought to the table could increase the cognitive load rather than individually contextualize the new material.

The Third Principle is that of demonstration. Here Merrill actively recommends that multiple representations are used for demonstrations and that multiple demonstrations are explicitly compared (with the caveat that multiple forms of media should not simultaneously compete). Approaching each iteration of a demonstration through contexts that engage a different blend of intelligences is an obvious extension of this.

Where multiple intelligences most clearly come into play is with the Fourth Principle – that of application: learning is promoted when learners are required to use their new knowledge or skill to solve problems.
This must surely be tied to the First Principle in terms of the foundation of previous knowledge that has helped to activate the acquisition of the new. This is at the heart of Gardner’s definition – and would seem to me to be the key difference (and frequent confusion) between learning styles and multiple intelligences: while learning styles offer guidance on the presentation of content, multiple intelligences offer a greater framework for the actual engagement of the student through task. This also fits Merrill’s corollary to the principle: that ‘learning is promoted when learners are required to solve a sequence of varied problems’.

Lastly, Merrill’s Fifth Principle, of integration, speaks directly to Multiple Intelligences Theory. The principle states that learning is promoted when learners are encouraged to integrate or transfer the new knowledge or skill into their everyday life, with the corollary that ‘learning is promoted when learners can create, invent, and explore new ways to use their new knowledge or skill’. It is unlikely that a learner will ‘explore’ enthusiastically without a basis in that intelligence type; they will tend to find ways to apply the new skill in areas in which they are already strong. This is very much behind the all-too-ubiquitous phenomenon of learners passing the test (and even ‘getting’ the class) but failing to apply the learning. If a broader range of intelligences has been successfully catered for within a course, there is more chance that the learners will have made the links to help them carry the learning through into the ‘real’ world.

How Reliable Are Multiple Intelligence ‘Quick’ Tests?

Note: This text is from a larger dissertation. The labeling of figures retains the original numbering system.

One of the legacies of prevalent learning styles theories on Multiple Intelligences Theory is the assumption that a student’s MI profile can be uncovered by the use of a quick test or checklist.

The central challenge of applying MI theory to education is its inherent lack of testability, and Gardner’s resistance to the use of quick psychometric tests.

‘…for most of us in Western society, intelligence is a construct or capacity that can be measured by a set of short questions and answers, presented orally or in writing.’ (Gardner, 1999, p135)

Indeed, Gardner has much to say on the topic. Of course, certain of the intelligences can be more feasibly tested: linguistic and logical-mathematical intelligences are essentially what traditional tests have assessed, and Gardner himself was initially drawn into the search for suitable assessment tools to create such a test, but soon found ‘…that the standard technology could not be applied appropriately to several of the intelligences. For instance, how do you measure someone’s understanding of himself, or of other people, using a short-answer instrument? What would be an appropriate short-answer measure of individual’s bodily-kinesthetic intelligence.’ (Gardner, 1999, p136)

His eventual response was to test MI in the spirit of MI, and the resulting ‘Spectrum’ project more resembled an interactive, hands-on section of a children’s museum: a range of activities and materials that kids could explore so that their intelligence profile could be uncovered over time (Gardner, 1993). The resulting assessment ‘product’ was neither a grade nor a percentage point but, at year end, an essay revealing the child’s intellectual profile, with informal advice as to possible applications of that profile. Within many educational settings – particularly that of ELT – such testing is, of course, wildly infeasible.

Multiple Intelligence Checklists

And so quick tests, or checklists, abound. While tests vary in their validity, reliability and supporting data, all MI tests would seem to face the following conundrums:

1. Paper-based tests are inherently linguistic in nature – and in fact are part of the original problem to which Gardner sought an alternative.

2. All but linguistic, logical and, to an extent, spatial intelligence are too difficult to pin down in paper-based tests.

3. While a Likert scale (preferably with a ‘not applicable’ option) will alleviate the extremities of these reservations, the following inherent flaws need to be borne in mind when selecting a checklist.



A: People with lower interpersonal and intrapersonal intelligences will, by definition, be less able to self-reflect accurately: the intrapersonal being more self-aware of strengths and weaknesses; the interpersonal more able to judge how others respond to them.

Are you a good judge of character? (MIDAS)

I have a pleasant singing voice. (LDP)

Can you sing ‘in tune’? (MIDAS)

B: Such tests cannot adequately distinguish between a person’s real skill in an area and a mere interest (compare, for example, someone who enjoys moderate exercise and watching sport on TV with a fully-fledged professional athlete).

I enjoy physical exercise. (MB/ WM)

I enjoy art activities. (MB/ WM)



C: Popular misconceptions of intelligence abound – in particular the intrapersonal/introvert confusion, or the actual nature of naturalistic intelligence.

(On intrapersonal intelligence)

I would prefer to spend a weekend alone in a cabin in the woods rather than at a fancy resort with lots of people around. (LDP on intrapersonal intelligence)

I go to the cinema alone. (MB/ WM)

Classification helps me make sense of new data. (WM on naturalist intelligence)

D: Often questions targeting a particular intelligence actually sum up attitudes that few people would deny.

My life would be poorer if there were no music in it. (LDP)

I enjoy informal chat and serious discussion. (WM)

I enjoy talking to my friends (MB/ WM)

I can tell you some things I’m good at doing. (MB/ WM)

E: Questions in response to which someone with a higher specific intelligence may grade themselves lower than someone without that strength, due to increased awareness or sensitivity within related domains.

I’m sensitive to colour. (LDP)

I can tell when a musical note is off key. (LDP)

I’m a good singer. (MB/ WM)



The ‘MIDAS’ Test

One assessment tool that has been written not specifically with ELT in mind, but with an awareness of the potential misunderstanding and misuse of MI, is the MIDAS (Multiple Intelligences Development Assessment Scales) test. Its developer was keen to avoid the pitfalls of the quick checklist, which can create, or perpetuate, ‘superficial’, ‘quick-fix’ and ‘mindless’ understanding (Branton Shearer, 2005:1-2).

While the test may, on the surface, look like any other, it was developed over a six-year period, has items which may score in different intelligence areas, and assesses subsets of skills within the broader intelligence boundaries. Musical intelligence, for example, is divided into appreciation, instrument, vocal and composer; kinesthetic into athletic and dexterity. The MIDAS development involved a huge sample, and the correlations found between intelligence and job type are strong.

The professional/administration version is only available after the administrator has completed an assessment of their knowledge of fundamental MI concepts. In this way the developer seeks to limit misuse arising from a limited understanding of MI, and also to place the assessment not in the hands of a student but in the hands of a practitioner with proven expertise. It also has to be paid for. So unless a practitioner with institutional backing is committed to pursuing MI theory in their school, this is not a test that is going to be used, and so it is not the test used in this research.

I am more interested in using and assessing the sort of checklists that are instantly and freely available (copyright permission notwithstanding) to an ELT practitioner, either on the web or included in an MI resource book, as that is the sort most likely to actually be used. However, I also sought to see how such tests can be used carefully.

The best checklist I could find beyond the MIDAS was the test found in the non-ELT-specific book ‘So Each May Learn’ (Silver et al, 2000). Because this test – or, to my knowledge, any test – includes no descriptors for existential intelligence, I added ten ‘existential’ questions. The subsequent stages of the development project helped us, as a team, to gain a deeper understanding of how to apply the intelligences, and I now see that my questions are to some extent flawed: they concern themselves with the bigger elements of ‘existential’ intelligence, but could perhaps have focused on smaller ethical and interest-focused issues.

Reliability

As the checklist results were needed for developmental as well as research purposes, reliability assessment using a regular technique such as the split-half method was inappropriate, inasmuch as all investigation needed to be done in parallel and with sensitivity to the development project, rather than compromising it by making the participants feel like specimens. This limited my options, and I realized that the best I could hope for was the opportunity to reveal a tendency of reliability as opposed to more concrete ‘proof’. I chose a rather extreme alternate-form approach, devising two different instruments which, while being less transparently MI-related (so the participants would not feel they were repeating the same activity) and enabling discussion and reflection in workshop contexts, would also give me an indication of internal reliability while also acting as an indication of intra-observer reliability.

For external reliability I used an inter-observer technique that could also play a legitimate role within a development as well as a research context.

I would suggest that MI theory itself has implications for determining the reliability of such MI checklists. The Johari Window (Luft, 1969), shown in Figure 3.1, frames the situation well.

[Figure 3.1. The Johari Window: what is known/unknown to self crossed with what is known/unknown to others – Area 1 (open): known to self and to others; Area 2 (blind): known to others but not to self; Area 3 (hidden): known to self only; Area 4 (unknown): known to neither.]

The Johari Window (JW) is a simple heuristic device illustrating communication about self to others, and communication from others about the self. MI checklists usually stay firmly within area 1: that which is obvious to the individual within the ‘public’ realm.

Firstly, the model helps to demonstrate how the personal intelligences will affect the success of the self-assessment. Interpersonal individuals have a greater propensity to see how others react to their actions, and are therefore aware of non-verbal feedback from area 2 without needing actual feedback. Interpersonal individuals who are also strong intrapersonally should, with their increased self-knowledge and propensity to self-reflect, be that much more able to incorporate such insights into behaviour and self-image. By comparison, individuals with less ‘personal’ proclivity, or less balanced intrapersonal and interpersonal intelligences, will, if the theory is correct, be less able to access and assimilate non-verbal feedback, and so less able to self-assess accurately.

The impact of area 2 will also be a factor affecting the self-assessment of people who are highly skilled in a more performance-related domain of a particular intelligence. For example, during the testing phase I observed that a participant who sings in a well-known a cappella group rarely rated herself beyond a score of 3 on any question related to musical productive skills.

To increase feedback from JW area 2, therefore, I requested that, in addition to doing the test themselves, the teachers give a copy, where possible, to a close friend or relative, who was asked to complete the checklist about the teacher. Any item whose two scores (on a 1-to-5 Likert scale) differed by 2 or more (in the testing stage a difference of 1 seemed too subjective) was then to be discussed with the partner. Based on this input, the teacher could then choose to adjust their original score for that descriptor.
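
By way of illustration, the flagging step might look like the following minimal sketch (the items are drawn from the checklist examples quoted earlier; the scores are invented for the example, not taken from the study):

```python
# Teacher and partner scores for the same items on a 1-to-5 Likert scale.
teacher = {"I have a pleasant singing voice.": 2,
           "I enjoy physical exercise.": 4,
           "Are you a good judge of character?": 3}
partner = {"I have a pleasant singing voice.": 4,
           "I enjoy physical exercise.": 4,
           "Are you a good judge of character?": 4}

THRESHOLD = 2  # in the testing stage a difference of 1 seemed too subjective

# Items where the two scores differ by 2 or more are flagged for discussion,
# after which the teacher may choose to adjust their original score.
to_discuss = [item for item in teacher
              if abs(teacher[item] - partner[item]) >= THRESHOLD]
print(to_discuss)  # -> ['I have a pleasant singing voice.']
```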

The main challenge in assessing the internal reliability of the tests is the question of what to compare them against for correlation, given the infeasibility of putting the staff through anything like the Spectrum project. In keeping with the ELT context, I therefore created two tests for correlation assessment. (Importantly, these were given prior to the MI test, which in turn was given before any input on the theory itself. I felt this was important for maintaining a more objective response than might have been obtained had the teachers known more about the theory and second-guessed the questions.)

The alternate-form tests were:

1) A survey in which teachers rate a list of 27 potential unit topics, that is, 3 examples from each intelligence type (see appendix 3). The teachers were simply asked to mark the 3 they would most prefer to teach and the 3 they would least prefer. These results were then to be correlated with the results of the MI test. Again, I by no means expected to find any direct correlations, as the two tests are completely different, but was rather seeking a tendency of correlation.

2) A survey, similar to the first, but listing classic classroom activities (see Appendix 4). The list was adapted from Silver et al’s ‘So Each May Learn’, with additional existential activity types (Silver et al, 1997:102-103). Looking back, I would have framed these existential activities differently. As mentioned, at the start of the project my own grasp of how to actually apply existential (and naturalist) intelligence to the classroom was, with hindsight, hazy. The survey comprised 63 items, 7 for each intelligence. Teachers were asked to mark each one using the rubric shown in the survey in appendix 4.

I had anticipated that the relative specificity of activity types, as opposed to the generic thematic areas of survey 1, would provide greater correlation with the MI test, in part because in reality teachers have more day-to-day experience of selecting activities than they do of selecting course unit themes.

Inter-observer Reliability of the Multiple Intelligences Indicator

As explained in Chapter 3, prior to giving the teachers the Multiple Intelligences Indicator (MII), two questionnaires were given to further indicate the teachers’ individual intelligence profiles. No input on the theory itself had been given at this point. Despite the actual sequence, to contextualize the two questionnaires for the reader I will present the MII results first.

Of the 18 MIIs given and returned, 12 were completed with a partner. These 12, therefore, are the focus of the correlation study. The full table of data is shown in appendix 4.1. For the purposes of individual reflection the teachers were each given a chart to visually represent the results as in figure 4.1.1. I will briefly discuss these individual results before discussing the results more generally.



Figure 4.1.1. Example of an Individual Teacher Profile (1)

This particular chart reveals a general tendency of agreement between the teacher (‘My’) and the partner for most of the intelligences. Note that actual ‘scores’ are omitted to focus attention on the relative levels of each intelligence indicated, and away from the arbitrary nature of the scores themselves (i.e. so they are not contrasted as ‘absolutes’ with other participants). The teacher generally rates herself lower than her partner does. This may be an objective interpretation of the 1-to-5 Likert scale, in which the partner is providing a more ‘flattering’ assessment of the teacher than the teacher herself. Conversely, it may be based on a more subjective interpretation of the scale descriptors. The advice to the teachers – to discuss each questionnaire item in which the two scores differed by two or more points, rather than by one – sought to add balance to such subjectivity. The resulting conversations between the teacher and partner show that the teacher did, in each case, seek to balance her own scores in the light of the partner’s feedback.

Interestingly, though somewhat beyond the scope of this research to analyze in depth, it may be noted that this teacher’s highest intelligence is interpersonal – theoretically suggesting that she is more likely to actively incorporate feedback from others. Compare this with figure 4.1.2, in which the teacher’s interpersonal intelligence is rated by both parties as comparatively more average, while intrapersonal is higher. This teacher generally kept her re-evaluation at levels similar to her own original scores, as opposed to seeking the balance we saw in the first example.



Figure 4.1.2. Example of an Individual Teacher Profile (2)

For the more general analysis, and to aid explanation of the data in appendix 4.1, table 4.1 shows, by way of example, the results for one teacher. The initial raw figures are translated into percentages. Though this may somewhat negate the objective differences on the Likert scale, the percentages allow for easier comparison and analysis of the group as a whole, while leveling subjective differences between teachers and partners and across the teaching team as a whole.

Table 4.1. Example of data for an individual teacher



Once the percentages were calculated, two subsequent calculations could take place:

  • The percentage difference between teacher scores and partner scores for each intelligence (T-P)

  • The percentage difference between teacher scores and re-evaluation scores for each intelligence (T-R)

In both cases negative scores were made positive, as the direction of difference is irrelevant (i.e. scores of +5 and -5 show the same degree of difference).



Each of the above was then averaged for each individual, and then collated and averaged for the 12 teachers in the sample (Table 4.2).
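
As a minimal sketch of that calculation (the percentage profiles here are invented for illustration, not the study’s data):

```python
def avg_discrepancy(profile_a, profile_b):
    """Mean absolute percentage difference per intelligence.
    Direction is ignored, so differences of +5 and -5 count the same."""
    diffs = [abs(a - b) for a, b in zip(profile_a, profile_b)]
    return sum(diffs) / len(diffs)

# Percentage profiles over the nine intelligences for one teacher:
teacher      = [12, 11, 11, 10, 11, 12, 11, 10, 12]  # T (initial self-assessment)
partner      = [14, 10, 11,  9, 11, 13, 11,  9, 12]  # P (partner's assessment)
re_evaluated = [13, 11, 11, 10, 11, 12, 11, 10, 11]  # R (after discussion)

t_p = avg_discrepancy(teacher, partner)       # T-P for this teacher
t_r = avg_discrepancy(teacher, re_evaluated)  # T-R for this teacher
# Collating and averaging T-P and T-R over the 12 teachers gives the
# per-intelligence figures reported below.
```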

Table 4.2. Differences Between Teacher and Partner Scores, and Teacher Initial and Reevaluation scores

The average discrepancy between teacher scores and partner scores was therefore roughly 1.8% for each of the nine intelligences, or 16.8% in total across the nine. There was thus an 83.2% correlation over the 108 different scores under comparison (12 individuals by 9 intelligences). This would suggest a strong tendency of reliability from an inter-observer perspective. Conceptual reliability is of course not assumed: this research only used the tool for what it was – an indicator, not an actual measurement, of an individual’s intelligence profile.

The average discrepancy between the teachers’ initial scores and their re-assessment scores was somewhat lower: 0.6% per intelligence, or 5.4% overall. While this shows that the teachers, on average, settled closer to their original assessments than to those of their partners, and while 5.4% is not insignificant, it also suggests that using the indicator without a partner is, for the purposes of a simple ‘indicated’ profile, still fairly useful. This is important in the context of a teacher wishing to give the indicator to a group of ‘exchange’ students who may not yet have an appropriately close friend or relative at hand to complete the test with. It also supports the validity of the MIIs of the teachers who did not complete the test with a partner.

Not initially anticipated, but emerging from the analysis of the data, an additional indication of reliability is illustrated by figure 4.3.

Figure 4.3. All Teacher MII Samples Averaged

The pie chart represents the average MII scores for the sample of the faculty, from which 18 questionnaires in total were received (12 with partners, 6 without). As would be theoretically expected from a balanced indicator, each individual has a different profile, containing strengths and weaknesses in different areas. Consequently, when the results are averaged over the whole group, the intelligences would be expected to display a high degree of equilibrium (rather than being skewed towards a smaller group of intelligences), which in fact they do. Given that the sample is one of city-dwelling English teachers, I would suggest that the presumed equilibrium is indeed inherent in the results, though, as expected, linguistic intelligence is highest overall, perhaps at the expense of the logical-mathematical, and the urban location perhaps impacts, or is reflected in, the lower naturalist score. Notably, five of the nine intelligences share the mode of 11%.

Figure 4.4 shows the results when students were given the test. In this case the sample is of 96 students.

Figure 4.4. All Student MII Samples Averaged

This chart is even less ‘peaked’ than the faculty chart, which would be expected given that the students have less group cohesion of interest or skill than the predictably linguistic English teachers, and a far wider range of geographical and cultural backgrounds. Naturalist is therefore on a par with logical, kinesthetic and linguistic. Given the main spread of the age group (16-22), it may also be unsurprising that interpersonal is a little higher and existential a little lower than the other intelligences. Still, there is only a 6% difference between the lowest and highest intelligences and, exactly in line with the faculty results, the mode is 11%.

Alternate Form Test: The Topic Survey



The challenge in presenting these particular results is how to correlate two completely different assessment tools. My solution was to label each teacher’s intelligence results from the MII as follows:

Highest three intelligences = A

Middle three intelligences = B

Lowest three intelligences = C

These labels were then cross-referenced with the teachers’ 3 most preferred (Table 4.3) and 3 least preferred (Table 4.4) choices from the topic survey. For example, in Table 4.3, two of Teacher 1’s three most preferred topics, ‘Extreme sports’ and ‘Study Skills’, fall, respectively, under the kinesthetic and intrapersonal intelligences. As these are two of Teacher 1’s three highest intelligences on the MII, the boxes are marked with an ‘A’. Her other topic choice, ‘Social Skills’, falls under interpersonal intelligence, which is one of her middle three intelligences, and so is marked with a ‘B’. In Table 4.4 (least preferred topics), two of Teacher 1’s choices are ‘Great Artists’ (spatial) and ‘Musical Instruments’ (musical), both of which are amongst her three lowest intelligences and so are marked with a ‘C’. Her other topic choice, ‘Dance’, represents one of her highest intelligences, kinesthetic, and therefore the box is marked with an ‘A’.
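
A minimal sketch of that labelling logic (the profile numbers and the topic-to-intelligence mapping are illustrative only):

```python
def label_topics(mii_profile, topic_intelligence, chosen_topics):
    """Mark each chosen topic A/B/C according to whether its intelligence
    falls among the teacher's highest, middle or lowest three MII scores."""
    ranked = sorted(mii_profile, key=mii_profile.get, reverse=True)
    band = {intel: "ABC"[i // 3] for i, intel in enumerate(ranked)}
    return {topic: band[topic_intelligence[topic]] for topic in chosen_topics}

profile = {"linguistic": 14, "logical": 9, "spatial": 8, "musical": 7,
           "kinesthetic": 15, "interpersonal": 11, "intrapersonal": 13,
           "naturalist": 10, "existential": 12}
topics = {"Extreme sports": "kinesthetic", "Study Skills": "intrapersonal",
          "Social Skills": "interpersonal"}
print(label_topics(profile, topics, list(topics)))
# -> {'Extreme sports': 'A', 'Study Skills': 'A', 'Social Skills': 'B'}
```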

 

In order for the topic survey to affirm the reliability of the MII, it would be expected that Table 4.3 (‘Most preferred topics’) would comprise mainly As and perhaps Bs (given the imprecision of the tool, due to the wide range of potential domains within each intelligence), while Table 4.4 (‘Least preferred topics’) would comprise mainly Cs and perhaps Bs. This is in fact largely the case.



Table 4.3. Most preferred topics cross-referenced with MII results

Table 4.4. Least preferred topics cross-referenced with MII results

The main flaw of the topic survey is related to Gardner’s concept of ‘domains’ (see Chapter 1: Definition of Terms). For example, in Table 4.4, one of Teacher 1’s least preferred topics is ‘Dance’, which falls under kinesthetic intelligence – one of her highest intelligences. Unlike another of her kinesthetic choices, ‘Extreme Sports’, the topic of dance, though a kinesthetic domain, is not one that interests the teacher, perhaps because dance also relies on a higher musical proclivity. The survey therefore presents an over-simplified view of MI theory. This perhaps accounts for the number of Bs and especially Cs in Table 4.3, and the number of As and Bs in Table 4.4.

The second problematic area is that of existential intelligence in Table 4.3. This is the most polarized section, with 7 As, 1 B and 5 Cs, and it comprises 5 of the 7 Cs in the whole table. I would suggest that, to an extent, the topics I chose to represent existential intelligence are skewed. ‘Great philosophers’, ‘World religions’ and ‘Good and Evil’, I now realize, are rather extreme examples of how existential intelligence may be reached via topic in the ESL classroom. More balanced descriptors might have been ‘ethical dilemmas’, ‘health and society’ and ‘future what-ifs’. Such titles, through being less emotive, may have been less appealing to the less existential type than ‘world religions’ and ‘good and evil’.

Despite such possible skewing, the general tendencies are as predicted. Figure 4.5 demonstrates this tendency for correlation with the MII.

Figure 4.5 Tendency of Correlation of the Multiple Intelligences Indicator and the Topic Survey

Alternate Form Test: The Activity Survey

The final strategy to assess the reliability of the MII was the classroom activity survey (see appendix 4). Figure 4.6 shows the results when added together across the faculty. The proximity of the totals to 100 is coincidental and does not represent a percentage.

Figure 4.6 Results of the Activity Survey



When the activities from the ‘used and liked’ and the ‘not used but would like’ categories are taken alone and compared to the results from the MII, a degree of correlation is evident (these results appear as percentages):

Figure 4.7. Comparison of Activity Survey with Averaged MII Scores

Though not an exact correlation, the ‘used and liked’ activities differ markedly from the averaged MII results in only the kinesthetic and interpersonal categories. With the leveling addition of the ‘not used but would like’ activities, we see a strong similarity of mode to the MII: 7 of the 9 intelligences scored either 11% or 12%, while the MII mode was 11%.

Given the completely different formats of the MII and the activity survey, I would suggest that these similarities indicate a reasonable degree of correlation.

However, given subsequent findings and also an increased grasp of the application of the theory to classroom reality, I have some reservations about the survey questionnaire.

This reservation is a syntactical one: the findings are based on ‘activities used’, which gives no weight to how frequently such activities were used – only that, at some point, they had been used. It does not distinguish between an activity used once and an activity used daily. However, while this issue would spur the next phase of the research project, the aim of both the activity and topic surveys was to provide evidence of a tendency of correlation with the MII. The results, particularly of the activity survey, are actually closer than I had anticipated.

Open Badges – Giving As Good As Receiving.

One of the less mentioned but absolutely integral aspects of the Open Badges concept is not simply that it rewards learners – or rather ‘earners’ – for their ‘non-formal’ learning endeavors, but that it rewards the ‘issuer’: the organization or business that creates the course of learning leading to the award of the badge. This is huge.

For the individual learner, badges mean a clear and transparent award structure with which to openly articulate forms of training and learning undertaken in the workplace or through volunteer organizations.

For the business, badges not only provide a way of motivating and rewarding staff learning; they create a channel through which the business can virally market its commitment to staff development and link its brand to the people who have taken its training – people that others are watching and learning from.

Once people start displaying badges carrying a certain business name, the brand becomes increasingly known for what it both offers and expects of its staff in terms of skills and knowledge, the professional standards and commitment to learning it has as an organization, and the impact it has on its industry and cause.

Recruitment will soon become not just about the qualifications that are expected, but the qualifications that are offered.
