A colleague of mine sent me the following Inside Higher Ed article this morning, entitled “Without Credit.” It speaks to the search for a viable model to generate revenue from MOOCs.

In response I mentioned that I’m claiming the phrase “MOOC-washing” (for disruptive-education wannabes), which so reminds me of “Greenwashing” in the sustainability movement 10–15 years ago.

– The article merely demonstrates that there IS no real game-changer until someone works out a revenue model that is neither (100,000 enrollments × free) nor this model of an “Enhanced MOOC.” Emperor’s new clothes, anyone? Guess what? It’s an online class for credit costing $300–$500 per credit – known in some circles as traditional online.

The entrenched, perceived value of the Credit as THE proxy for learning is the real brake on much of this innovation.

There are two key possibilities regarding the CREDIT and its centrality to all things.

  • Option 1 – Alternatives to credits (certificates, badges, etc.). These will only succeed if there is some recognized norming or development of an industry standard – something tangible that employers will recognize as currency. This is a long way off, in my opinion.
  • Option 2 – Completely decouple competencies from credits and have all students “show and tell” competencies in a very open format that proves to employers (undeniably) their employability. This would likely be a portfolio, third-party standardized testing, or a blend of the two.

Given that Option 1 is glacial and outside of anyone’s clear control, I vote for Option 2 as the viable game-changer within the next 18 months or so. There will be issues – lingering vestiges of “seat-time,” although most people seem to be beyond that now, and the need for collaboration among faculty and departments to agree on what the core competencies ARE and how they can be demonstrated.

This model may work better at lower levels (associate rather than graduate), but I believe that with a fresh set of eyes, and open rather than turf-war mindsets, we could really produce something innovative and truly disruptive. I LOVE MOOCs, but they will not transform Higher Ed. “Enhanced MOOCs” sounds like an attempt to be “down with the kids” without actually doing anything innovative at all.

Let’s think outside the box, blow it all up and start again – just pretend you’ve never heard of CREDITS…


Angry Birds – again

Clearly I am connecting with Mindflash today on many levels. David Kelly has a great post on What Angry Birds Can Tell Us About Instructional Design. If you only have one minute, skim his paragraph headings – I agree 100%. As evidence, see my many posts, and the fact that many colleagues roll their eyes because I have discussed gamification (game principles rather than simulations) one too many times over the last couple of years…

My earlier posts on this subject:
January 2011 – Game theory applied to online
January 2012 – Gamification


BTW – assuming he’s not THIS David Kelly – although that would be awesome!

The Primary Challenge for the OER Movement

David Wiley recently posted an article on the challenge of assessment in the OER world.
It certainly does seem to be a challenge – we (the SNHU Innovation team) spent time with ETS at their Higher Ed Advisory Council last week in San Diego, where we had some great break-out discussions around standardized testing. It was a great session; they have some VERY smart employees in Princeton (special mentions for Ross, Patrick, Kate and David), and they convened a very interesting group of academics.

The current assessment choices for those of us working in the OER space seem to be:

  • on one hand, multiple-choice self-checks, with no concrete feedback from humans (many OER courses include these)
  • on the other, blog/journal reviews, which are time-consuming (hence questionable given scaling aspirations), subjective, organization-specific, and open to inflation, bias, and inconsistent leveling.

I appreciated the Vision Project’s* working group’s March 2011 report on Student Learning Outcomes and Assessment, which I think frames the issue very clearly (the emphasis is mine):

If colleges and universities … are to devise a system for assessing the learning outcomes of their undergraduates in a way that allows for comparability, transparency, and accountability, we must agree on some of the qualities of an undergraduate education that we all expect our students to possess. At the same time, those qualities we agree on must allow us to preserve the unique missions of individual colleges, appropriate measures of privacy around assessment work, and an ability to actually improve teaching and learning with the results that we find.
Research and literature on sound assessment practice is clear that no single instrument or approach to assessing learning can meet all of the challenges, and notes that the most effective systemic models of assessment offer multiple measures of student learning in a “triangulation” approach that includes indirect assessments such as surveys, direct assessments like tests, and embedded assessments such as classroom assignments.

This notion of triangulation seems viable – mixing institutional (mission-related) emphases with quick-turnaround self-checks. The missing (third) element is the industry-standard independent test. In some disciplines – Project Management (PMI), IT (Microsoft), HR (PHR, SPHR) – there are clear standards that can be applied. There is certainly a window of opportunity for someone like ETS to take a lead on this, if they can develop the adaptability of development and pricing that we, and other college partners, would likely need. I hope that as colleges free up instructional design time that they would typically have spent making *another* version of PSYCH101 content (which is already freely available, and wonderful), they spend more time developing key benchmarks for assessment that can be more widely disseminated.
Assessment is indeed the golden bullet for this work. Ideally it’s fun too.
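For the programmers among us, the triangulation idea can be sketched as a trivial weighted blend. This is my own illustrative toy, not anything from the report – the weights, the 0–100 scales, and the function name are all my assumptions:

```python
def triangulate(indirect, direct, embedded, weights=(0.2, 0.5, 0.3)):
    """Blend three assessment measures into one score.

    indirect  -- e.g. a survey result (0-100)
    direct    -- e.g. a standardized test score (0-100)
    embedded  -- e.g. a graded classroom assignment (0-100)
    weights   -- illustrative only; a real model would calibrate these
    """
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    wi, wd, we = weights
    return round(wi * indirect + wd * direct + we * embedded, 1)

# A strong embedded assignment can offset a middling test:
# triangulate(80, 70, 90) -> 78.0
```

The point of the toy is only that no single measure dominates – exactly the “no single instrument” principle the working group describes.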

* The Working Group on Student Learning Outcomes and Assessment (WGSLOA) was established by Richard Freeland, Commissioner of Higher Education, in late fall 2009 in anticipation of the Vision Project, a bold effort to set a public agenda for higher education and commit public campuses and the Department/Board of Higher Education to producing nationally leading educational results at a time when the need for well-educated citizens and a well-prepared workforce is critical for the future of the Commonwealth.

Psychometrics and Crowd Wisdom

We hosted Preetha Ram, co-founder of OpenStudy, for sessions that we split into two audiences – a core academic group on our main campus, and an instructional design / enrollment / student services group at our millyard COCE campus.

One good indicator of a solid product is when distinct audiences and non-believers become enthralled. The sign of a spectacular product is when someone who has seen the show before (me), and was already a fan, gets to see the continuing evolution of both the product and its potential.

The OpenStudy team is working on the back end to review what might be gleaned from a working group of 100,000+ crowd-wisdom generators. This is taking them beyond what they’ve had for a while – 24/7 student support, community, and the intrinsic motivation of games (“stickiness”) – to learning analytics and the demonstration of competencies among their user group.

All competency-based education systems (WGU, P2PU, Excelsior, MITx) need to focus continually on the “how do we know they have learned?” question. OpenStudy participants who answer hundreds (or thousands) of math questions in supportive and constructive ways are not just displaying math ability. They are demonstrating effective non-cognitive skills in tandem with cognitive ones; domain expertise in conjunction with tangible skills. Analysis and demonstration of the specific user correlations Preetha described might just add up to successful psychometric testing.

With the data at their fingertips, OpenStudy has the potential to track not only teamwork, helpfulness and engagement, as they do now, but also LEAP / Institute for the Future key skills like problem solving, critical thinking and creativity. I was delighted to witness the immediate engagement of many in the academic session (esp. Kim Bogle, chair of the assessment committee at SNHU, and Mark McQuillan, the wonderful new Dean of the School of Ed) discussing how data and metrics might be mined to demonstrate competencies.
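To make the mining idea concrete for the technically minded, here is a minimal sketch – entirely hypothetical, the event names and metrics are mine, not OpenStudy’s – of how raw activity logs might be rolled up into competency signals like helpfulness:

```python
from collections import defaultdict

def competency_signals(events):
    """Roll a hypothetical activity log up into per-user signals.

    Each event is (user, kind, value), where kind is one of
    'answer', 'helpful_vote', or 'session_minutes'.
    """
    totals = defaultdict(lambda: {"answers": 0, "helpful": 0, "minutes": 0})
    for user, kind, value in events:
        if kind == "answer":
            totals[user]["answers"] += value
        elif kind == "helpful_vote":
            totals[user]["helpful"] += value
        elif kind == "session_minutes":
            totals[user]["minutes"] += value
    # A crude proxy for helpfulness: votes earned per answer given.
    for stats in totals.values():
        answers = stats["answers"]
        stats["helpful_rate"] = stats["helpful"] / answers if answers else 0.0
    return dict(totals)
```

Real psychometrics would obviously need validated constructs and far richer correlations than this, but even a toy like the above shows how “teamwork” or “engagement” can become queryable numbers.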

It was exciting to sample this “esprit de corps” among true educational entrepreneurs, eager to respond to the genuine needs of students.
Thanks to all who participated.


Gamification

… we include that word in our upcoming grant proposal application, and I fear that for most people it equates to either clickers, or making virtual-reality versions of lessons where people play a game representing the real world – like NASA flight trainers that don’t involve people getting killed.

Where we employ “gamification,” I/we are referring to the principles intrinsic to good, addictive games, where people (think: students who cannot focus on traditional materials for more than two minutes) spend hours persevering and are motivated enough to achieve goals that at first seem almost impossible.
I’m midway (no game-company pun intended) through What Video Games Have to Teach Us About Learning and Literacy by James Paul Gee, published in 2003.
I think I can best capture the key principles with a few quotes that I will not damage by paraphrasing:

“learning is or should be both frustrating and life enhancing. The key is finding ways to make hard things life enhancing so that people keep going and don’t fall back on learning and thinking only what is simple and easy.” Gee talks at length about semiotic domains: “sets of practices that recruit one or more modalities (e.g., oral or written languages, images, equations, symbols, sounds, gestures, graphs, artifacts, etc.) to communicate distinctive types of meanings”. He stresses the need to place learning actively and contextually in a number of these domains because, in doing so, three results come about:

  1. We learn to experience (see, feel and operate on) the world in new ways.
  2. Since semiotic domains usually are shared by groups of people who carry them on as distinctive social practices, we gain the potential to join this social group, to become affiliated with such kinds of people (even though we may never see any of them face to face.)
  3. We gain resources that prepare us for future learning and problem solving in the domain and, perhaps more important, in related domains.

He continues: “Learning in any semiotic domain crucially involves learning how to situate (or build) meanings for that domain in the sorts of situations the domain involves.”

One particular domain, which we in H.E. often try to approximate as “Real World” or “Authentic” (as in authentic assessment), Gee calls the “Lifeworld.”
He offers: “Helping students learn how to think about the contrasting claims of various specialists against each other and against lifeworld claims ought to be a key job for schools.”
He then concludes: “I believe it is crucial, particularly in the contemporary world, that all of us, regardless of our cultural affiliations, be able to operate in a wide variety of semiotic domains outside our lifeworld domain” – which sounds like a solid argument for a diverse liberal arts education if ever I heard one.

One final aspect that game designers nail, where academic instructional designers have a way to go, is the acceptance of “failing” and the lack of discouragement it engenders:
“When the character you are playing dies in a video game, you can get sad and upset, but you also usually get “pissed” that you (the player) have failed. And then you start again, usually from a saved game, motivated to do better”

Some more key tenets known to game designers but not really given enough thought by (academic) instructional designers:

  • The learner must be enticed to try, even if he or she already has good grounds to be afraid to try
  • The learner must be enticed to put in lots of effort even if he or she begins with little motivation to do so
  • The learner must achieve some meaningful success when he or she has expended this effort

Wouldn’t it be great if we in Higher Ed could develop a product that replaces the words “Good video games” with something like “Good courses” or “Great coursework”:

Good video games give players better and deeper rewards as (and if) they continue to learn new things as they play (or replay) the game.
In good video games, students are challenged to “think about the routinized mastery they have achieved and to undo this routinization to achieve a new, higher level of skill.”

Education as addictive as World of Warcraft… wouldn’t that be great!


AND – Pearson is pursuing this concept with Alleyoop – I saw this the day after I blogged!

Game Theory applied to Online

For a few weeks now I’ve been reviewing our online (Blackboard) courses at SNHU and thinking of ways to make participation more meaningful. I hadn’t managed to get further than describing it as an Angry Birds model, where increasingly difficult screens become available and you spend longer and longer (being more engaged) as you progress and get “into it.”

I found that by editing a GC magazine article, I could illustrate where I think we should be looking for better engagement with course materials and content:

GC December 2010 – “Now that the social layer has been built, some people say, the next layer will be the game layer….
EDITING this slightly allows me to present GAME THEORY APPLIED TO EDUCATION – TEACHING AND LEARNING; my edits added…

“Now that the social layer has been layered into education, some people say, the next layer will be the game layer….
“Now that the social layer has been layered into education, some people say, the next layer will be the game layer….
With the theory of game design applied to an online course, you want a curve like this: increasingly large payoffs (points towards grades, or perhaps initially just appreciation and comments from instructor and peers) at random but increasingly spaced intervals. So the first payoff is very small (“great post, Johnnie – you have really grasped this concept”), the next payoff is a little bigger (“10/10 for that short paper, Wendy”), and the next one… To begin with you get a payoff for one out of five actions, then it’s one out of twenty, then it’s one out of fifty (“the final paper was a bear, but I ACHIEVED it / cleared that level!”) – but those intervals have to be random. That is the key to human addiction.
Game mechanics mostly means overtly turning thingies into games. Even without overt game mechanics, the gamelike nature of social interaction is why we’re addicted to social media in general. It’s why I’m addicted to email [posting to the discussion boards / interacting with the class]. Most emails [Many posts] I get [read] are bullshit [not hugely helpful or insightful]. But every once in a while, I get an email [read a post] that feels affirming in some special way – etc etc etc… Ergo I check my email [the discussion boards / work on my classwork] about every twenty-seven seconds. You can’t blame the Internet for the flaw that we’re basically crackheads. Our bodies are designed with the flaw of wanting to be crackheads. And manipulating that flaw is partly how you get 500 million users for your thingy [a hugely engaged and engaging learning experience]….
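For anyone who wants to play with the reward curve itself, here is a tiny simulation of that schedule – random payoffs at increasingly spaced intervals. All the numbers (the 1-in-5 start, the growth factor) are just my illustrative defaults, not anything from the article:

```python
import random

def payoff_schedule(total_actions, first_interval=5, growth=4, seed=None):
    """Return the action numbers that earn a payoff.

    Payoffs land at random points, but the window between them
    widens each time (roughly the 1-in-5, then 1-in-20, then
    1-in-50 curve): a variable-ratio reinforcement schedule.
    """
    rng = random.Random(seed)
    payoffs = []
    interval = first_interval
    action = 0
    while True:
        action += rng.randint(1, interval)  # random spot in the window
        if action >= total_actions:
            break
        payoffs.append(action)
        interval *= growth  # widen the window for the next payoff
    return payoffs
```

Run it a few times without a seed and you’ll see the early part of a course packed with small wins and the later payoffs spaced further and further out – exactly the curve described above.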

Too simplistic??