“Gym fee” college

In this model, tuition is reduced to a level more akin to a monthly gym-club membership (which my group at SNHU discussed as a reasonable target) or, as their President calls it, the equivalent of a monthly “cable fee”.

Here are the pieces that resonate with me from this Chronicle overview:
1) Self-paced, mentor-led (rather than faculty-led)
2) Sophisticated platform – see the bit about him hiring Google developers – that allows a freemium trial period from which user data is gathered
3) Large scale data collection that allows tailoring of content / learning analytics to tailor a student’s experience
4) Integration with, or at least a user experience similar to, platforms that students already know (Facebook)

I’d love to get under the hood and see how the curriculum is assembled. Are key skills being emphasized, or is the learning focused on topics and deliverables, as with a more traditional model? And are they making use of OER? I assume their pricing is not compromised by $200 textbook fees…
Their lack of accreditation would seem to be a (current) disadvantage but, as one of my colleagues put it in our early-a.m. review: “we don’t want to depend on the accreditation bit for holding back the competition for too long. Sooner or later, those gates will open…” – credit LR

The full article is titled No Financial Aid, No Problem and is at http://chronicle.com/article/No-Financial-Aid-No-Problem/131329/


Roadmap to a new assessment model

Peter Rawsthorne’s blog has a great overview of suggested developments in assessment for open learning environments (my underlines):

The Roadmap

  1. In the immediate term we should build peer-assessment to utilize and promote the use of badges. This will get us going quickly and then build upon this with greater automation and other assessment approaches.
  2. In the near term we should build rubrics to allow people to perform self-assessments and then submit a portfolio for peer review. Peer reviewers should also be given badges for executing N peer reviews. This does not require a lot of automation and can be implemented quite quickly.
  3. In the close term we should build formative and summative assessment instruments that can be baked into websites, apps and tools. People should be able to edit / improve on instruments in a community kind of way.
  4. Beginning immediately and through the next year we should start building apps and experimenting with mass collaboration for assessment.
  5. In the medium term we should build a repository of guidelines, templates, assessment instruments and approaches.
  6. In the long term we should build a fun community around assessment and broadening people’s skills and knowledge regarding assessment and accreditation.
  7. As an ongoing initiative we should encourage research and centers of excellence around open assessment and its relationship with open accreditation.
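The step-2 rule (badges for executing N peer reviews) is simple enough to sketch. The threshold, tier, and badge names below are my own illustrative placeholders, not anything prescribed by the roadmap:

```python
# Toy sketch of the step-2 rule: award a "Peer Reviewer" badge once a
# user has completed N peer reviews. All names/values are illustrative.

REVIEWS_FOR_BADGE = 5  # the "N" in the roadmap; each community would pick its own

def badges_earned(completed_reviews: int) -> list[str]:
    """Return the badge names a user has earned from peer reviewing."""
    badges = []
    if completed_reviews >= REVIEWS_FOR_BADGE:
        badges.append("Peer Reviewer")
    # Tiers could extend the same idea, e.g. a higher badge at 5 * N reviews.
    if completed_reviews >= 5 * REVIEWS_FOR_BADGE:
        badges.append("Senior Peer Reviewer")
    return badges

print(badges_earned(7))  # ['Peer Reviewer']
```

The appeal of the rule is exactly what the roadmap notes: it needs almost no automation beyond counting completed reviews.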


New leadership skills

A different take on essential skills for future leaders:
Bob Johansen, Fellow at the Institute for the Future (http://www.iftf.org/), with, if nothing else, some great new made-up words!

Ping Quotient
Excellent responsiveness to other people’s requests for engagement; strong propensity and ability to reach out to others in a network

Longbroading
Seeing a much bigger picture; thinking in terms of higher-level systems, bigger networks, longer cycles

Open Authorship
Creating content for public modification; the ability to work with massively multiple contributors

Cooperation Radar
The ability to sense, almost intuitively, who would make the best collaborators on a particular task or mission

Multi-Capitalism
Fluency in working and trading simultaneously with different hybrid capitals, e.g., natural, intellectual, social, financial, virtual

Mobbability
The ability to do real-time work in very large groups; a talent for coordinating with many people simultaneously; extreme-scale collaboration

Protovation
Fearless innovation in rapid, iterative cycles; the ability to lower the costs and increase the speed of failure

Influency
Knowing how to be persuasive and tell compelling stories in multiple social media spaces (each space requires a different persuasive strategy and technique)

Signal/Noise Management
Filtering meaningful info, patterns, and commonalities from the massively-multiple streams of data and advice

Emergensight
The ability to prepare for and handle surprising results and complexity that come with coordination, cooperation and collaboration on extreme scales

The Primary Challenge for the OER Movement

David Wiley recently posted an article on the challenge of assessment in the OER world. http://opencontent.org/blog/archives/2042
It certainly does seem to be a challenge – we (the SNHU Innovation team) spent time with ETS at their Higher Ed Advisory Council last week in San Diego, where we had some great break-out discussions around standardized testing. It was a great session; they have some VERY smart employees in Princeton (special mentions for Ross, Patrick, Kate and David) and they convened a very interesting group of academics.

The current assessment choice for those of us working in the OER space seems to be:

  • on the one hand, multiple-choice self-checks – with no concrete feedback from humans (many OER courses include these)
  • on the other, blog / journal reviews – which are time-consuming (hence questionable given scaling aspirations), subjective, organization-specific, and open to inflation, bias and inconsistent leveling.

I appreciated the Vision Project’s* working group’s March 2011 report on Student Learning Outcomes and Assessment, which I think frames the issue very clearly (the bold is my highlight):

If colleges and universities … are to devise a system for assessing the learning outcomes of their undergraduates in a way that allows for comparability, transparency, and accountability, we must agree on some of the qualities of an undergraduate education that we all expect our students to possess. At the same time, those qualities we agree on must allow us to preserve the unique missions of individual colleges, appropriate measures of privacy around assessment work, and an ability to actually improve teaching and learning with the results that we find.
Research and literature on sound assessment practice is clear that no single instrument or approach to assessing learning can meet all of the challenges, and notes that the most effective systemic models of assessment offer multiple measures of student learning in a “triangulation” approach that includes indirect assessments such as surveys, direct assessments like tests, and embedded assessments such as classroom assignments.

This notion of triangulation seems viable – mixing institutional (mission-related) emphases with quick-turnaround self-checks. The missing (third) element is the industry-standard independent test. In some disciplines – Project Management (PMI), IT (Microsoft), HR (PHR, SPHR) – there are clear standards that can be applied. There is certainly a window of opportunity for someone like ETS to take a lead on this, if they can develop the flexibility in development and pricing that we, and other college partners, would likely need. I hope that as colleges free up the Instructional Design time they would typically have spent making *another* version of PSYCH101 content (which is freely available, and wonderful, at Saylor.org), they spend more time developing key assessment benchmarks that can be widely disseminated.
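As a toy illustration of that triangulation idea (my own sketch, not from the report; the weights and 0–100 scales are arbitrary assumptions), the three kinds of measures might be combined into a single composite:

```python
# Toy sketch: combine an indirect measure (survey), a direct measure (test),
# and an embedded measure (classroom assignment) into one composite score.
# The weights below are arbitrary placeholders, not recommendations.

WEIGHTS = {"survey": 0.2, "test": 0.4, "assignment": 0.4}

def triangulated_score(scores: dict[str, float]) -> float:
    """Weighted composite of the three measures (each on a 0-100 scale)."""
    return sum(WEIGHTS[kind] * scores[kind] for kind in WEIGHTS)

print(triangulated_score({"survey": 80, "test": 70, "assignment": 90}))  # 80.0
```

The interesting design questions are, of course, the ones no code settles: who sets the weights, and whether the direct measure is an industry-standard independent test.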
Assessment is indeed the silver bullet for this work. Ideally it’s fun too.

* The Working Group on Student Learning Outcomes and Assessment (WGSLOA) was established by Richard Freeland, Commissioner of Higher Education, in late fall 2009 in anticipation of the Vision Project, a bold effort to set a public agenda for higher education and commit public campuses and the Department/Board of Higher Education to producing nationally leading educational results at a time when the need for well-educated citizens and a well-prepared workforce is critical for the future of the Commonwealth.