Learning to Adapt #2

A follow-up to my recent L-to-A #1 post:

Part II

As a follow-up to my recent post, I want to discuss an often-neglected element of online instruction, and likely the most critical one.

Let’s be honest: in the debate over whether “online works,” MOOCs confused people. For faculty new to online, considering it for the first time now that someone other than a (for-profit / insignificant / not-a-proper-school) institution was doing it, MOOCs were the easiest online classes to access, and they initially focused almost exclusively on slick content presentation. When people assessed MOOCs and then came back to assess (normal? traditional? retention-emphasized? credit-bearing) online courses, they tended to ask for access to the course materials sans instructor or student activity. I have yet to determine whether this is merely an oversight, deference to the beast that is FERPA, or collegial respect for the sanctity of the classroom…

My team at Northeastern has, between us, more than 150 years of experience working in and around online courses (there are 14 of us). That experience includes time at most of the leading online educators of the last decade and a half. Even a quick sweep of my mental inventory brings up Drexel, Kaplan, UMass, SNHU, UNH, Capella, Boston University, Harvard / Harvard Extension and RISD. During our time together at NU, we have implemented the work of leading theorists in learning and cognitive science, assessment, usability, self-efficacy, information architecture, multimedia design and graphics, and, as I write / you read, we are experimenting with gamification and the challenge of intrinsically motivating underserved populations.

The work of people like Dick Clark (USC) and Bror Saxberg (Kaplan) has provided us with depth and detail on the evidence-based learning science behind much of our development work. We feel we have clarity on what works in terms of content formatting (length of chunks, organization of media elements, etc.) and on some behavioral elements: the importance of frequent, timely corrective feedback, and the level of challenge required to keep students engaged. My team works directly with full-time faculty to build in elements that approximate the type of learning we have called Online Experiential Learning: authentic, tangible experiences with opportunities to spiral back, reflect, abstract and re-apply case-based learning in a wider, retained-for-life context. We focus on, and sweat over, materials, formats, fonts, activities and assessments.

The feedback we receive includes: “I don’t care for the fonts” or “I took a MOOC last week and their videos were really cool… can’t we do that?”
Colleagues who worked with me at smaller institutions (with even smaller budgets) will snigger at this, but my budget is less than a tenth of what many MOOC providers have quoted us for production costs. That in itself may be misleading and counter to my point here (yes, I do have one…). I have reviewed and developed courses that were superb, and superbly appreciated by demanding students who called them life-altering. The reactions, or conversely the complaints, rarely if ever center on the content. At one of my former institutions, where we tracked student grievances, only 4% of student complaints were content-related; 96% focused on non-content concerns (read on…).

At a recent event hosted by the Bill & Melinda Gates Foundation I heard students respond to the ever-present question “Does online work?” or “Did it work for you?” Some replied enthusiastically and positively, others with quite definitive “No”s or “It was terrible”-type comments. This raises the key question: what exactly did they hate? Can you guess? The materials? The fonts? The quality of the videos? No, no and no. The comments fell into three consistent buckets:

“The online class was terrible because I got no feedback on my work”
“I didn’t ever really know how I was doing”
“The instructor was M.I.A.”

In other words, and either depressingly or reassuringly depending on your perspective, the juxtaposition of images and text advocated by the learning science was not pivotal? Hmm, OK. So here’s a question I would pose to would-be online students: you have the choice of a great teacher with crappy materials or an absent / crappy teacher with great materials. Which would you choose?

Students who self-select for an online class are, in my experience, tolerant of technical glitches, and they don’t really care if a video has the instructor in pajamas, in front of family pictures, in a poorly lit room. A responsive, attentive, responsive, empathetic, responsive, caring-but-challenging, responsive instructor more than offsets the fact that the video is not green-screened so that (s)he appears to be in front of the Hanging Gardens of Babylon, or that Krakatoa isn’t erupting right now.* Students’ attention is gone after six minutes anyway; they don’t care whether “herds of wildebeest are sweeping majestically across the plain”…

So, to summarize: we pour our collective sweat and tears into instructional design, learning science, materials development, web usability, information architecture, cognitive science, assessment and graphic art, and we are then judged by whether the instructor shows up for class or not? Really?!

Imagine the same criteria applied to traditional classes:

Q – “How was your face to face class today?”
A – “Well, the instructor didn’t show, so we all kind of read the books without direction and chatted amongst ourselves, but that’s fine, it was a great class. I love face-to-face classes.”

Here’s my money-shot statement:

A class where the instructor does not devote energy and attention to providing presence and guidance is not a failed class; it is a failed instructor.
It is not a rationale for concluding that a means of reaching hundreds of thousands of learners, for whom face-to-face is not an option, just doesn’t work.

To be clear, my intention in penning this is not at all to diss overstretched instructors who have not themselves self-selected to teach online (see the earlier comment on self-selected students). Teaching online is extremely different; it is not suited to everyone, and it can be learned, but it needs to be embraced (or at least a little bit hugged).

Who makes the best online instructors?
Here’s a low-tech answer first:
• Jigsaw puzzlers who want to do 5-6 pieces at breakfast, coffee break, lunch, tea break and an hour or so before bed.
• Slightly obsessive gardeners who feel the need to check in on their tomato plants four times a day, sometimes just to say “hi!”
A higher-tech answer for 2014:
• Committed eBayers,
• Social media users (even moderate – parents / grandparents accessing Facebook to see progeny pics),
• Anyone who has ever felt the need to advise TripAdvisor, then gone back to see whether others rated their comments.
• Someone incentivized by some inner passion, who gets a kick out of nudging things along incrementally. Someone who is a little compulsive and doesn’t like to think of a book misaligned on a shelf. Someone who has bought into the idea that they can have influence (on tomatoes or on travelers). Someone who gets a teeny bit jazzed at the thought that they could be helping to make the world a slightly more informed place, affecting or maybe even changing lives – sounds like a heck of a lot of the teachers I know.

It IS a transition, though: teaching and changing lives in fifteen-minute increments rather than through three-hour classroom performances between grading marathons.

Extending my not-great metaphors even further: does anyone garden in a half-assed manner just so they can prove that gardening doesn’t work? Does anyone use eBay because they hate the whole system (an online manifestation of the capitalist marketplace) and vent when they sell (or buy) things?

I quite miss writing postcards when I travel; I was quite known for them back in the day (OK, I am old). But it is kind of cool that I can now let 20 times as many people know that I’m in a very cool (or hot) spot, and that they know before the week is out. Change makes things different. If anyone is so wedded to the traditional that they can’t move, that’s fine. I remember hearing of an instructor at another of my former colleges who eschewed the phone because it was too new-fangled and he needed to see the whites of people’s eyes.
I get that. I miss things too. Instructors who dislike or distrust “class” (whatever the rationale) too much to show up should not be given online teaching assignments (surely). Those who are a fit, and who get it in its slightly compulsive glory (eBayers, Facebookers, TripAdvisor reviewers, jigsawers, tomato-growers or book-aligners), should be supported and cherished. Not every personality makes a good face-to-face instructor; not every personality makes a good online instructor. There is the choice: change, adapt, give it a genuine try, or (simply) don’t take the assignment.

My job, and the job of the Instructional Designers I work with, should be supporting great instruction and genuine effort with appropriate spaces and backdrops for learning to happen. My job can’t be developing materials that substitute for instructors who don’t want to be there and don’t show up for class.

I know that academia is a big ship to turn around, but I wish there were a way to convince instructors that the most important thing in an online class IS STILL YOU. If people like me do our jobs well, we can automate some parts – but PLEASE work with us – we might even be able to take away the boring, dull parts that you don’t like doing. For example:
• Answering questions that you have answered 5000 times before (boom! – a FAQ),
• Reminding people that assignments are due (boom! (again) – a calendar),
• Developing a working understanding of basic, underpinning knowledge (chunked content and Check Your Knowledge self-checks)
• Being there 24/7, answering every question (let us show you scaffolded, supported peer-to-peer interaction).
I now pronounce you FREE to focus only on questions that are stimulating, that allow you to demonstrate and indulge your passion for your subject, and to engage (disproportionately) through interactions that are significant and (could) change lives. I wrote an earlier blog post on this two years ago, titled Disrupted Faculty Roles.
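To make the calendar item in the list above a little more concrete, here is a minimal sketch of what that sort of automation can look like. It is plain Python over a hand-made list; none of the names refer to any real LMS API, and the assignments and dates are invented for illustration.

```python
# A minimal, hypothetical sketch of the "boom! (again) - a calendar" idea:
# generate due-date reminders automatically so the instructor never has to.
# Nothing here is a real LMS API; it is plain Python over a hand-made list.
from datetime import date, timedelta

# Hypothetical course calendar: assignment name -> due date.
assignments = [
    {"name": "Week 3 discussion post", "due": date(2014, 7, 21)},
    {"name": "Case study draft",       "due": date(2014, 7, 25)},
]

def reminders(assignments, today, days_ahead=3):
    """Return reminder messages for anything due within `days_ahead` days."""
    window = today + timedelta(days=days_ahead)
    return [
        f"Reminder: '{a['name']}' is due {a['due'].isoformat()}."
        for a in assignments
        if today <= a["due"] <= window
    ]

for message in reminders(assignments, today=date(2014, 7, 19)):
    print(message)  # in practice this would go to email or the LMS announcements
```

The same pattern, a small list plus a small rule, covers the FAQ item too: match an incoming question against answers you have already written, and only escalate the genuinely new ones to the instructor.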

In summary:
An online class that is poor because the instructor didn’t show up is a poor class led by a delinquent instructor.

If an institution does nothing about it, turns away from the data that demonstrate it, or deflects blame towards materials that aren’t as cool as the latest MOOC, shame on it/them/us.

Online works for a lot of people when the planets align and people work together. The instructor’s responsiveness should be close to the top, rather than near the bottom, of the list of requirements.

As I said, work with us, then show up for class… in pajamas, eating tomatoes.


* Obscure scenic references courtesy of classic John Cleese: Fawlty Towers (BBC).

Learning to Adapt #1

– reflections on the Gates grantee gathering on Adaptive Learning, Seattle, June 2014

Fresh off the trip to the right-hand coast of the United States, brought to you courtesy of the sleep-precluding, screaming child in seat 10A, here are some reflections on the “adaptive learning providers and implementers” session hosted in Seattle, June 25-27, by the Gates Foundation. In a wonderfully choreographed event, the open discussion was not about whether “adaptive learning is the golden bullet” or “how many students we have saved so far,” but rather ran along the lines of “What are we seeing?”, “What are the issues?” and “What are the next steps we might take?”
The collegiality between (supposedly) rabid, competing vendors was constructive – there is a feeling that we have moved beyond a zero-sum game and that, given the massive challenges, there is no one-size-fits-all solution. Being in Seattle ’n’ all (close to friends at BGI), I am reminded of Robert Socolow’s 2004 “wedges,” which he apparently revisited in 2011. His work suggested that it will only be possible to avoid climate change / global warming if people combine solutions: look for more efficiency, employ renewable power sources, ban Hummers – that kind of thing.
My first meta-conclusion is that while we are lumping pretty much anything under the “adaptive” banner, there is a variety of approaches. It appears that no two vendors are approaching this challenge in exactly the same way. There are providers whose platforms give mnemonic cues to prompt student retention of information, virtual lab / simulation providers, personalization tools, micro-adaptive and macro-adaptive systems, elements of gamification, and just plain rich content. None is without value; none will, in isolation, solve every issue in higher-ed online education.
It may even be counter-productive to over-define. Irrespective of how a course gets there (with adaptive, without), what might be helpful would be a solid matrix or rubric that can assess a course’s intrinsic motivation or stickiness. An engagement matrix could measure how likely a course, in and of itself, is to keep students’ attention. Cognitive science can provide a lot of the guidance and grounding for this. Is the text appropriately ‘chunked’? Do graphics or multimedia support the text or distract from it? And is immediate, corrective feedback provided to guide and encourage students?
The realm of gamification has the best language to frame this metric. While he will hate me for saying this, my friend Dr. Dick Clark’s work, and that of his peers, compadres and acolytes, has a substantial Venn-diagram overlap with the language of Karl Kapp, Mihaly Csikszentmihalyi and even Jesse Schell, who all talk about engagement and “hooks” that keep ‘subjects’ (gamers, athletes, employees and, why not, students) fixated and encouraged to keep going, keep failing, and persevere to reach the next level or nudge ahead of a friend on the leaderboard. Not one solution, not one size. Implement, measure, try, tweak. Be flexible, but gather data on what the evidence tells us.
We might have to think about re-naming this work, given how confused and polarized people are by the term and the concept of “gamification.” Like saying “Voldemort” aloud, saying “Beetlejuice” three times or channeling Wilde’s “love that dare not speak its name,” “gamification” both inflates the balloon and lets the air out of the room at the same time. We need a new term. In the meantime, here is a matrix indicating parameters by which one can measure a course’s potential degree of engagement or stickiness. Insert your own Likert scale across the X-axis; here are my Y’s for engaging (gamified?) content. The course presents with:

1. (Simple) rules for student participation.
2. Clear goals.
3. An appropriate level of challenge – one that requires concentration.
4. Peer engagement (cooperation or competition, or both).
5. Immediate, corrective feedback.
6. A narrative of some sort – it can be prescribed by the instructor or developed by the students (collectively or individually).
7. An aesthetic theme – it can be retro, fantasy, or personal to each student.
8. Reduced fear of failure – encouragement to “have a go” and learn as you go.
9. A sense of user control (“my choices”).
10. Game elements intrinsically woven into the learning (not bolted on artificially).

These criteria are pulled from commonalities presented in the work of Karl Kapp, Mihaly Csikszentmihalyi, Roger Caillois and others in this field. Adaptive learning certainly helps with a number of these elements; depending on the platform, it can raise a score on (1), (2), (3), (5), maybe (7), definitely (8) and (9). This is the power of AL: it accentuates elements that make courses engaging or sticky, reducing my list of ten to a more achievable list of three or four remaining challenges that we can perhaps ask AL providers to add, or add ourselves.
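To make the matrix a little more concrete, here is a minimal sketch of how the ten criteria above could be rated on a Likert scale and rolled up into a rough stickiness score. It is plain Python with no particular platform in mind; the equal weighting and the example ratings are illustrative assumptions, not validated values.

```python
# A minimal sketch of the engagement / "stickiness" rubric described above.
# Each of the ten criteria is rated on a 1-5 Likert scale; the overall score
# is simply the mean. Criterion names and ratings here are illustrative only.
CRITERIA = [
    "simple rules", "clear goals", "appropriate challenge", "peer engagement",
    "immediate corrective feedback", "narrative", "aesthetic theme",
    "reduced fear of failure", "sense of user control", "game woven into learning",
]

def stickiness_score(ratings):
    """ratings: dict mapping each criterion to a 1-5 Likert rating."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Example: a course strong on feedback and goals, weak on narrative and theme.
example = {c: 3 for c in CRITERIA}
example.update({"clear goals": 5, "immediate corrective feedback": 5,
                "narrative": 1, "aesthetic theme": 2})
print(f"Stickiness: {stickiness_score(example):.1f} / 5")
```

An equal-weighted mean is the crudest possible roll-up; a real instrument would want its weights and anchors validated against the outcomes we actually care about (retention, completion).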

Analysis of what works in engaging people with game environments suggests that well-implemented adaptive learning courses can move us toward the goal of increased student engagement (with course materials), which the evidence indicates will enhance their learning and lead to positive outcomes (retention, completion).

Conversation at the Gates session surfaced another major issue that I believe we are all AVOIDING. In my Part 2 reflection I will discuss that issue, which is absolutely pivotal to online success as a learning medium. More to follow after I catch up on sleep. Be warned.

Angry Birds – again

Clearly I am connecting with Mindflash today on many levels. David Kelly has a great post on What Angry Birds can tell us about Instructional Design. If you only have one minute, skim his paragraph headings – I agree 100%. As evidence, see my many posts, and the fact that many colleagues roll their eyes because I have discussed gamification (game principles rather than simulations) one too many times over the last couple of years…

My earlier posts on this subject:
January 2011 – Game theory applied to online
January 2012 – Gamification

 

BTW – assuming he’s not THIS David Kelly – although that would be awesome!

The Primary Challenge for the OER Movement

David Wiley recently posted an article on the challenge of assessment in the OER world. http://opencontent.org/blog/archives/2042
It certainly does seem to be a challenge. We (the SNHU Innovation team) spent time with ETS at their Higher Ed Advisory Council last week in San Diego, where we had some great break-out discussions around standardized testing. It was a great session; they have some VERY smart employees in Princeton (special mentions for Ross, Patrick, Kate and David) and they convened a very interesting group of academics.

The current assessment choices for those of us working in the OER space seem to be:

  • on one hand, multiple-choice / self-check questions with no concrete feedback from humans (many OER courses include these);
  • on the other, blog / journal reviews, which are time-consuming (hence questionable given scaling aspirations), subjective, organization-specific, and open to inflation, bias and inconsistent leveling.

I appreciated the Vision Project* working group’s March 2011 report on Student Learning Outcomes and Assessment, which I think frames the issue very clearly (the bold is my highlighting):

If colleges and universities … are to devise a system for assessing the learning outcomes of their undergraduates in a way that allows for comparability, transparency, and accountability, we must agree on some of the qualities of an undergraduate education that we all expect our students to possess. At the same time, those qualities we agree on must allow us to preserve the unique missions of individual colleges, appropriate measures of privacy around assessment work, and an ability to actually improve teaching and learning with the results that we find.
Research and literature on sound assessment practice is clear that no single instrument or approach to assessing learning can meet all of the challenges, and notes that the most effective systemic models of assessment offer multiple measures of student learning in a “triangulation” approach that includes indirect assessments such as surveys, direct assessments like tests, and embedded assessments such as classroom assignments.

This notion of triangulation seems viable: mixing institutional (mission-related) emphases with quick-turnaround self-checks. The missing (third) element is the industry-standard independent test. In some disciplines – project management (PMI), IT (Microsoft), HR (PHR, SPHR) – there are clear standards that can be applied. There is certainly a window of opportunity for someone like ETS to take a lead on this, if they can develop the adaptability of development and pricing that we, and other college partners, would likely need. I hope that as colleges free up the Instructional Design time they would typically have spent making *another* version of PSYCH101 content (which is freely available and wonderful at Saylor.org), they spend more time developing key benchmarks for assessment that can be more widely disseminated.
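Purely as a thought experiment, here is a minimal sketch of that triangulation: blending an indirect measure (survey), a direct measure (standardized test) and an embedded measure (classroom assignment) onto one scale. The weights are placeholder assumptions for discussion, not anything prescribed by the report quoted above.

```python
# A back-of-the-envelope sketch of "triangulation": blend an indirect measure
# (survey), a direct measure (standardized test), and an embedded measure
# (classroom assignment) onto a common 0-100 scale. The weights are
# placeholders for discussion, not values from the report quoted above.
def triangulated_outcome(survey_pct, test_pct, assignment_pct,
                         weights=(0.2, 0.4, 0.4)):
    """All inputs are percentages (0-100); weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    w_survey, w_test, w_assign = weights
    return w_survey * survey_pct + w_test * test_pct + w_assign * assignment_pct

# Example: strong embedded work, middling standardized test, positive survey.
print(triangulated_outcome(survey_pct=85, test_pct=70, assignment_pct=90))  # 81.0
```

The point is not the arithmetic; it is that no single one of the three measures carries the whole judgment.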
Assessment is indeed the golden bullet for this work. Ideally it’s fun too.

* The Working Group on Student Learning Outcomes and Assessment (WGSLOA) was established by Richard Freeland, Commissioner of Higher Education, in late fall 2009 in anticipation of the Vision Project, a bold effort to set a public agenda for higher education and commit public campuses and the Department/Board of Higher Education to producing nationally leading educational results at a time when the need for well-educated citizens and a well-prepared workforce is critical for the future of the Commonwealth.

Gamification

… we include that word in our upcoming grant proposal, and I fear that for most people it equates either to clickers or to making virtual-reality versions of lessons where people play a game representing the real world. Like NASA flight trainers that don’t involve people getting killed.

Where we employ “gamification,” I/we are referring to the principles intrinsic to good, addictive games, in which people (think: students who cannot focus on traditional materials for more than two minutes) spend hours persevering and are motivated enough to achieve goals that seem, at first, almost impossible.
I’m mid-way (no game-company pun intended) through “What Video Games Have to Teach Us About Learning and Literacy” by James Paul Gee, published in 2003.
I think I can best capture the key principles with a few quotes that I will not damage by paraphrasing:

“learning is or should be both frustrating and life enhancing. The key is finding ways to make hard things life enhancing so that people keep going and don’t fall back on learning and thinking only what is simple and easy.” Gee talks at length about semiotic domains: “sets of practices that recruit one or more modalities (e.g., oral or written languages, images, equations, symbols, sounds, gestures, graphs, artifacts, etc.) to communicate distinctive types of meanings.” He stresses the need to place learning actively and contextually in a number of these domains, because in doing so three results come about:

  1. We learn to experience (see, feel and operate on) the world in new ways.
  2. Since semiotic domains usually are shared by groups of people who carry them on as distinctive social practices, we gain the potential to join this social group, to become affiliated with such kinds of people (even though we may never see any of them face to face.)
  3. We gain resources that prepare us for future learning and problem solving in the domain and, perhaps more important, in related domains.

He continues: “Learning in any semiotic domain crucially involves learning how to situate (or build) meanings for that domain in the sorts of situations the domain involves.”

One particular domain, which we in higher ed often try to approximate as “real world” or “authentic” (as in authentic assessment), Gee calls the “lifeworld.”
He offers: “Helping students learn how to think about the contrasting claims of various specialists against each other and against lifeworld claims ought to be a key job for schools.”
He then concludes: “I believe it is crucial, particularly in the contemporary world, that all of us, regardless of our cultural affiliations, be able to operate in a wide variety of semiotic domains outside our lifeworld domain” – which sounds like a solid argument for a diverse liberal arts education if ever I heard one.

One final aspect that game designers nail, and where academic Instructional Designers have a way to go, is the acceptance of “failing” and the lack of discouragement it engenders:
“When the character you are playing dies in a video game, you can get sad and upset, but you also usually get “pissed” that you (the player) have failed. And then you start again, usually from a saved game, motivated to do better”

Some more key tenets known by game designers but not really given enough thought by (academic) Instructional Designers:

  • The learner must be enticed to try, even if he or she already has good grounds to be afraid to try
  • The learner must be enticed to put in lots of effort even if he or she begins with little motivation to do so
  • The learner must achieve some meaningful success when he or she has expended this effort

Wouldn’t it be great if we in Higher Ed could develop a product that would let us replace the words “good video games” in statements like these with “good courses” or “great coursework”:

Good video games give players better and deeper rewards as (and if) they continue to learn new things as they play (or replay) the game.
In good video games, students are challenged to “think about the routinized mastery they have achieved and to undo this routinization to achieve a new, higher level of skill.”

Education as addictive as World of Warcraft… wouldn’t that be great!

 

AND – Pearson is trying to pursue this concept with Alleyoop – I saw this the day after I blogged!