Training quest 3 has us exploring “ideals of quality” across two of the largest, highest-profile open education initiatives. I hear “quality” and immediately think in terms of comparative worth: excellence along any number of dimensions, from durability to fit to taste and texture. While I could easily write a post about OLI’s Modern Biology animations or student argumentation skills in MIT’s Seminar in Ethnography and Fieldwork, discussions of quality as a global characteristic don’t seem particularly fruitful here.
But what if we think instead in terms of the first definition of quality: “an essential or distinctive characteristic, property, or attribute.” Instead of value, then, quality is more about values.
So, what do MIT and OLI value? What do they consider the essential or distinctive characteristics of what they’re trying to do, of who they are as organizations?
…that I could just come up with the titles of articles and they would research, analyze, and write themselves. I’m pretty good at the title thing. Less so the rest. Currently in various stages of [in]completion in my word processor:
- “Pipe dreams: What evaluation educators can learn from students’ visions of the ideal evaluation tool”
- “Evaluators by assignment: Truth and consequences of the mass amateurization of evaluation”
- “Openness and the information economy: Market share, value propositions and competitive advantage”
- “Better than free: A capacity-building approach to pro-bono”
- “Leavening the internet: A latter-day-saint guide to new media”
I’ll let you know when they start writing themselves.
Today was parent-teacher conferences. Or at least as close as we get to parent-teacher conferences here. We started out with a big meeting in the church where we did things like show the parents the new silverware we finally got around to buying last week [the kids were sharing broken spoons before—not a big deal, really, we share everything around here, but still] and talk about the progress of a couple of alumni and the programs we’ve been involved in. A couple of parents raised concerns: “My son was sick and missed an exam and is therefore failing a class,” to which the response was “Nilsa and Roberto and Juana were all sicker than your son and they’re still at the top of their classes.” And “I live 9 hours away and can’t just visit every month to check on my son, but when I call to see how he’s doing, no one returns my calls,” to which the response was basically “keep calling.” [In fairness, Celsa gave out her cell number to the whole group, but even I can guess the rate at which such phone calls generally get returned.]
The biggest frustration, though, came with the “research” the director of the school was so excited about conducting. [He wasn’t here, by the way; apparently he’s doing some consulting for another school in Nicaragua.] I was all excited when they started handing out surveys to the parents…asking what changes [positive and negative] they had observed in their student, whether the school had fulfilled their expectations, how they would rank areas for improvement, what concerns they had about their student in particular, etc.
Anyway, I was all excited until I started looking around and realized that fully two-thirds of them COULDN’T READ IT! Aigh! Even the ones who could read it couldn’t understand the university-level research language it was written in. I think it’s fair to say I was bitterly disappointed. What are we saying when we set up a system that only accepts feedback from people outside the sector we’re supposedly set up to serve!? There’s just something very wrong here.