The Primary Challenge for the OER Movement

David Wiley recently posted an article on the challenge of assessment in the OER world: http://opencontent.org/blog/archives/2042
It certainly does seem to be a challenge. We (the SNHU Innovation team) spent time with ETS at their Higher Ed Advisory Council last week in San Diego, where we had some great break-out discussions around standardized testing. It was a great session; they have some VERY smart employees in Princeton (special mentions for Ross, Patrick, Kate and David) and they convened a very interesting group of academics.

The current assessment choices for those of us working in the OER space seem to be:

  • on one hand, multiple-choice self-checks, with no concrete feedback from humans (many OER courses include these)
  • on the other, blog / journal reviews, which are time-consuming (hence questionable given scaling aspirations), subjective, organization-specific, and open to inflation, bias and inconsistent leveling.

I appreciated the Vision Project’s* working group March 2011 report on Student Learning Outcomes and Assessment, which I think frames the issue very clearly (the bold is my highlight):

If colleges and universities … are to devise a system for assessing the learning outcomes of their undergraduates in a way that allows for comparability, transparency, and accountability, we must agree on some of the qualities of an undergraduate education that we all expect our students to possess. At the same time, those qualities we agree on must allow us to preserve the unique missions of individual colleges, appropriate measures of privacy around assessment work, and an ability to actually improve teaching and learning with the results that we find.
Research and literature on sound assessment practice is clear that no single instrument or approach to assessing learning can meet all of the challenges. The most effective systemic models of assessment offer multiple measures of student learning in a “triangulation” approach that includes indirect assessments such as surveys, direct assessments like tests, and embedded assessments such as classroom assignments.

This notion of triangulation seems viable – mixing institutional (mission-related) emphases with quick-turnaround self-checks. The missing (third) element is the industry-standard independent test. In some disciplines – Project Management (PMI), IT (Microsoft), HR (PHR, SPHR) – there are clear standards that can be applied. There is certainly a window of opportunity for someone like ETS to take a lead on this, if they can develop the flexibility in development and pricing that we, and other college partners, would likely need. I hope that as colleges free up the Instructional Design time they would typically have spent making *another* version of PSYCH101 content (which is freely available, and wonderful, at Saylor.org), they spend more time developing key benchmarks for assessment that can be more widely disseminated.
Assessment is indeed the silver bullet for this work. Ideally it’s fun too.

* The Working Group on Student Learning Outcomes and Assessment (WGSLOA) was established by Richard Freeland, Commissioner of Higher Education, in late fall 2009 in anticipation of the Vision Project, a bold effort to set a public agenda for higher education and commit public campuses and the Department/Board of Higher Education to producing nationally leading educational results at a time when the need for well-educated citizens and a well-prepared workforce is critical for the future of the Commonwealth.

Psychometrics and Crowd Wisdom

We hosted Preetha Ram, co-founder of OpenStudy, for sessions that we split between two audiences – a core academic group on our main campus and an instructional design / enrollment / student services group at our millyard COCE campus.

One good indicator of a solid product is when distinct audiences and non-believers get enthralled. The sign of a spectacular product is when someone who has seen this show before and was already a fan (me) gets to see the continuing evolution of both the product and its potential.

The OpenStudy people are working on the back end to review what might be gleaned from a working group of 100,000+ crowd-wisdom generators. This is taking them beyond what they’ve offered for a while – 24/7 student support, community, and game-style intrinsic motivation (“stickiness”) – to learning analytics and demonstration of competencies among their user group.

All competency-based education systems (WGU, P2PU, Excelsior, MITx) need to continually focus on the importance of the “how do we know they have learned?” question. OpenStudy participants who are answering hundreds (or thousands) of math questions in supportive and constructive ways are not just displaying math ability. They are demonstrating effective non-cognitive skills in tandem with cognitive ones; domain expertise in conjunction with tangible skills. Analysis and demonstration of specific user correlations, as Preetha described, might just add up to successful psychometric testing.

With the data they have at their fingertips, OpenStudy has the potential to track not only teamwork, helpfulness and engagement, as they do now, but also to extend to LEAP / Institute for the Future key skills like problem solving, critical thinking and creativity. I was delighted to witness the immediate engagement of many in the academic session (especially Kim Bogle, chair of the assessment committee at SNHU, and Mark McQuillan, the wonderful new Dean of the School of Ed) discussing how data and metrics might be mined to demonstrate competencies.

It was exciting to sample this “esprit de corps” among true educational entrepreneurs, eager to respond to the genuine needs of students.
Thanks to all who participated.

New (Disrupted) Faculty Roles

Rather than the Sage on the Stage or Guide on the Side, you’re going to see a growing embrace of the Sage on the Side model. The need for an instructor with high-quality, in-depth domain knowledge (The Sage) will never go away. But, in an age of ubiquitous information, he just doesn’t get a stage anymore. However, an age of ubiquitous information also means a lot of that information is going to be crap. An education Sherpa is needed to help students develop information literacy so they can sort the good from the bad.

The above quote comes from a Campus Technology (December 2011) report entitled “What’s Hot, What’s Not 2012”: http://campustechnology.com/Articles/2011/12/29/2012-Whats-Hot-Whats-Not.aspx?Page=1

As we at the Innovation Lab look at alternative models for T+L, we are enthused by initial conversations with OpenStudy and LOVE Philipp Schmidt and the guys at P2PU. That seems to make us threatening to some of our traditional faculty colleagues, who see us as part of the conspiracy to replace them with robots or peer-to-peer non-experts – or, to put it another way, to “Surowiecki™” them out of existence.
We are hoping to work with OpenStudy on a research project into how the implementation of an alternative means of student support affects a classroom (online or face-to-face) community and the T+L experience.
Here’s what I think, or hope, or hope I think… It sort of builds on the above CT quote:

  • 80% of questions asked in an online class environment do not require a PhD to answer (a near-quote from Carol Twigg, hybrid-teaching guru)
  • 80% of questions asked in a class were asked last year (Wayne Mackintosh, OERu guru)

Funny that they both landed on 80% as the percentage of background, less-than-challenging questions that (perhaps) technology, or someone other than a Full Professor, can help with.
Now if I were an FT professor, I could take this one of two ways – and I understand both perspectives:

  1. OH NO – my job is being taken away! How dare “they”! How can computers and TAs or mentors replace me, with my experience, passion and qualifications? OR
  2. GREAT – now I don’t have to answer all those mind-numbingly dull, repetitive questions that I didn’t spend 5, 6, 7 years getting my PhD for. The ONLY time I will have to step up is when a question merits my expertise!

When answering only the 20% (the subject-matter-specific, *interesting* questions), the faculty member who gets it can now oversee 5x as many classes and will feel challenged and stretched in her/his discipline rather than in the basics that a computer, or a good generalist TA / mentor, could easily address. If this is built correctly, we enhance and honor the expertise of the expert and free them up to stay at the cutting edge of their knowledge, rather than dealing with the “I uploaded the doc to Blackboard but it failed” / “I lost my password” / “my textbook hasn’t arrived yet” questions that an FAQ (per Wayne) or a “mentor” (per Carol) covers for them.

One obvious flaw in this argument, which could push faculty back into fear and distrust, would be if there weren’t 5 classes for them to oversee – and their load got cut. To that I would say that if we do embrace these new (disruptive) models, we have a shot at engaging the extra millions who need the education but can’t get access due to outdated models and non-scalable costs. Without evolution and disaggregation of roles, I feel that traditional faculty roles will be threatened. With better efficiencies, and everyone playing to real strengths, I think we’ll get there.

“Sage on the Side” / “Sherpa” – great terms – I wish I’d said that. Although, as the saying goes, talent borrows, genius steals – so likely I will.