
Thoughts on: “Digital is not the future – Hacking the institution from the inside” – how education support staff can bring change

One of the best things about my research and being part of a community of practice of education technologists/designers is how much the two overlap. While I'd hate to jump the gun, I think it's pretty safe to say that harnessing the group wisdom of this community is going to be a core recommendation when it comes to looking for ways to better support Tech Enhanced Learning and Teaching practices in higher education.

So why not start now?

There’s a lively and active community of Ed Techs online (as you’d hope) and arguably we’re bad at our jobs if we’re not part of it. I saw an online “hack” event mentioned a couple of weeks ago – the “Digital is not the future – Hacking the institution from the inside” discussion/workshop/something and joined up.

[Image: hack event poster]

There’s a homepage (that I only just discovered) and a Twitter hashtag #futurehappens (that I also wish I’d noticed) and then a Loomio discussion forum thing that most of the online action has been occurring in.

Just a side note on Loomio as a tool – some of the advanced functionality (the voting features) seems promising but the basics are a little poor. Following a discussion thread is quite difficult because new posts are displayed separately, showing only part of the topic heading and no preview of the response (either on screen or in the email digest). Biographical information about participants was also scant.

All the same, the discussions muddled along and there were some very interesting and thoughtful chats about a range of issues that I’ll hopefully do justice to here.

It’s a joint event organised by the London School of Economics (LSE) and the University of Arts, London (UAL) but open to all. Unsurprisingly then, most participants seem to be from the UK, so discussions were a little staggered. There was also an f2f event that generated a sudden surge of slightly decontextualised posts, but there was still quite a bit of interest to come from that (for an outsider).

The “hack” – I have to use inverted commas because I feel silly using the term with a straight face but all power to the organisers and it’s their baby – was intended to examine the common issues Higher Ed institutions face in innovating teaching and learning practices, with a specific focus on technology.

The guiding principles are:

Rule 1: We are teaching and learning focused *and* institutionally committed
Rule 2: What we talk about here is institutionally/nationally agnostic
Rule 3: You are in the room with the decision makers. What we decide is critical to the future of our institutions. You are the institution
Rule 4: Despite the chatter, all the tech ‘works’ – the digital is here, we are digital institutions. Digital is not the innovation.
Rule 5: We are here to build not smash
Rule 6: You moan (rehearse systemic reasons why you can’t effect change – see Rule 3), you get no beer (wine, juice, love, peace, etc)

We have chosen 5 common scenarios which are often the catalyst for change in institutions. As we noted above, you are in the room with the new VC and you have 100 words in each of the scenarios below to effectively position what we do as a core part of the institution. Why is this going to make our institution more successful/deliver the objectives/save my (the VC’s) job? How do we demonstrate that what we do will position the organisation effectively? How do we make sure we stay in the conversation and not be relegated to simply providing services aligned with other people’s strategies?

The scenarios on offer are below – they seemed to fall by the wayside fairly quickly as the conversation evolved but they did spark a range of different discussions.

Scenario 1
Strategic review of the institution and budget planning for 2020
Scenario 2
Institutional restructure because of a new VC
Scenario 3
Undertaking of an institution wide pedagogical redesign
Scenario 4
Responding to and implementing TEF
Scenario 5
Institutional response to poor NSS/student experience results

(It was assumed knowledge that TEF is the upcoming UK govt Teaching Excellence Framework – new quality standards – and the NSS is the National Student Survey – largely student satisfaction feedback.)

Discussions centred around what we as Ed. Designers/Techs need to do to “change the discourse and empower people like us to actively shape teaching and learning at our institutions”. Apart from the ubiquitous “more time and more money” issue that HE executives hear from everyone, several common themes emerged across the scenarios and other posts. Thoughts about university culture, our role as experts and technology consistently came to the fore. Within these could be found specific issues that need to be addressed and a number of practical (and impractical) solutions that are either in train or worth considering.

On top of this, I found a few questions worthy of further investigation as well as broader topics to pursue in my own PhD research.

I’m going to split this into a few posts because there was a lot of ground covered. This post will look at some of the common issues that were identified and from there I will expand on some of the practical solutions that are being implemented or considered, additional questions that this event raised for me and a few other random ideas that it sparked.

Issues identified

There was broad consensus that we are in the midst of a period of potentially profound change in education due to the affordances offered by ICT and society’s evolving relationship with information and knowledge creation. As education designers/technologists/consultants, many of us sit fairly low in the university decision-making scheme of things, but our day-to-day contact with lecturers and students (and emerging pedagogy and technology) gives us a unique perspective on how things are and how they might/should be.

Ed Tech is being taken up in universities but we commonly see it used to replicate or at best augment long-standing practices in teaching and learning. Maybe this is an acceptable use but it is often a source of frustration to our “tribe” when we see opportunities to do so much more in evolving pedagogy.

Peter Bryant described it as “The problem of potential. The problem of resistance and acceptance” and went on to ask “what are the key messages, tools and strategies that put the digital in the heart of the conversation and not as a freak show, an uncritical duplication of institutional norms or a fringe activity of the tech savvy?”

So the most pressing issue – and the purpose of the hack itself – is what we can do (and how) to support and drive the change needed in our institutions. Change in our approach to the use of technology enhanced learning and teaching practices and perhaps even to our pedagogy itself.

Others disagreed that a pedagogical shift was always the answer. “I’m not sure what is broken about university teaching that needs fixing by improved pedagogy… however the economy, therefore the job market is broken I think… when I think about what my tools can do to support that situation, the answers feel different to the pedagogical lens” (Amber Thomas)

How we define pedagogy arose tangentially a number of times – is it purely about practices related to the cognitive and knowledge-building aspects of teaching and learning, or might we extend it to include the ‘job’ of being a student or teacher? That is, the logistical aspects of studying – accessing data, communication, administrivia and the other things that technology can be particularly helpful in making more convenient. I noted the recent OLT paper – “What works and why?” – which found that students and teachers valued these kinds of tools highly. Even if these activities aren’t the textbook definition of pedagogy, they are a key aspect of teaching and learning support and I’d argue that we should give them equal consideration.

Several other big-picture issues were identified – none with simple solutions but all things that must be taken into consideration if we hope to be a part of meaningful change.

  • The sheer practicality of institution-wide change – the many different needs of the various disciplines necessitate some customised solutions.
  • The purpose of universities and university education – tensions between the role of the university in facilitating research to extend human knowledge and the desire of many students to gain the skills and knowledge necessary for a career.
  • The very nature of teaching and learning work can and is changing – this needs to be acknowledged and accommodated. New skills are needed to create content and experiences and to make effective use of the host of new tools on offer. Students have changing expectations of access to information and their lecturers’ time, created by the reality of our networked world. These are particularly pointy issues when we consider the rise of casualisation in the employment of tutors and lecturers and the limits on the work that they are paid to do.

Three key themes emerged consistently across all of the discussion posts in terms of how we education support types (we really do need a better umbrella term) can be successful in the process of helping to drive change and innovation in our institutions. Institutional culture, our role as “experts” and technology. I’ll look at culture and expertise for now.

Culture

It was pretty well universally acknowledged that, more than policy or procedure or resourcing, the culture of the institution is at the heart of any successful innovations. Culture itself covered a fairly broad set of topics though, ranging across traditional/entrenched teaching and learning practices; how risk-averse or flexible/adaptable an institution is; how hierarchical it is and to what extent the ‘higher-ups’ are willing to listen to those on the ground; willingness to test assumptions; aspirational goals and strategy; and change fatigue.

Some of the ideas and questions to emerge included:

  • How do we change the discourse from service (or tech support) to pedagogy?
  • “The real issue is that money, trust, support, connectedness and strategy all emanate from the top” (Peter Bryant)
  • “the prerequisite for change is an organisational culture that is discursive, open and honest. And there needs to be consensus about the centrality of user experience.” (Rainer Usselmann)
  • “We need to review our decision making models and empower agility through more experimentation” (Silke Lange) – My take on this – probably true, but perhaps not the right thing to say to the executive, given the implication that they’re currently making poor decisions. Perhaps if we frame this more in terms of a commitment to continuous improvement, we might be able to remove the sense of judgement about current practices and decision makers?
  • “we will reduce the gap between the VC and our students… the VC will engage with students in the design of the organisation so it reflects their needs. This can filter down to encourage students and staff to co-design courses and structures, with two way communication” (Steve Rowett)
  • “In the private (start-up) sector, change is all you know. Iterate, pivot or persevere before you run out of money. That is the ‘Lean Start-up’ mantra… create a culture and climate where it is expected and ingrained behaviour then you constantly test assumptions and hypotheses” (Rainer Usselmann)
  • “Theoretical and practical evidence is important for creating rationale and narratives to justify the strategy” (Peter Bryant) – I agree, we need to use a research-led approach to speak to senior academic execs

While change and continuous improvement are important, in many places they have come to be almost synonymous with management. And not always good management – sometimes just management for the sake of appearing to be doing things. It’s also not just about internal management – changes in government and government policy or discipline practices can all necessitate change.

One poster described how change-fatigued lecturers came to respond to an ongoing stream of innovations relating to teaching and learning practice coming down from on high.

I don’t think anyone will be surprised to hear that staff got very good at re-describing their existing, successful practice in whatever the language of the week was.

Culture is arguably the hardest thing to change in an organisation because there are so many different perspectives of it held by so many different stakeholders. The culture is essentially the philosophy of the place and it will shape the kinds of policy that determine willingness to accept risk, open communication, transparency and reflection – some of the things needed to truly cultivate change.

Experts

Our status (as education designers/technologists/support with expertise) in the institution and the extent to which we are listened to (and heard) was seen as another major factor in our ability to support and drive innovation.

There were two key debates in this theme: how we define/describe ourselves and what we do, and how we should best work with academics and the university.

Several people noted the difference between education designers and education technologists.

“Educational developers cannot be ignorant of educational technologies any more than Learning Technologists can be ignorant of basic HE issues (feedback, assessment, teaching practices, course design, curriculum development etc).”

Perhaps it says something about my personal philosophy, or the fact that I’ve always worked in small multi-skilled teams, but the idea that one might be able to responsibly recommend and support tools without an understanding of teaching and learning practice seems unthinkable. This was part of a much larger discussion of whether we should even be talking in terms of eLearning any more, or just trying to normalise it so that it is all just teaching and learning. Valid points were made on both sides.

“Any organisational distinction between Learning & Teaching and eLearning / Learning Technology is monstrous. Our goal should be to make eLearning so ubiquitous that as a word it becomes obsolete.” (Sonja Grussendorf)

” I also think that naming something, creating a new category, serves a political end, making it visible in a way that it might not be otherwise.” (Martin Oliver)

“it delineates a work area, an approach, a mindset even…Learning technology is not a separate, secondary consideration, or it shouldn’t be, or most strongly: it shouldn’t be optional.” (Sonja Grussendorf)

There was also an interesting point made that virtually nobody talks about e-commerce any more, it’s just a normal (and sometimes expected) way of doing business now.

For me, the most important thing is the perception of the people that I am working with directly – the lecturers. While there is a core of early adopters and envelope pushers who like to know that they have someone that speaks their language and will entertain their more “out-there” ideas when it comes to ed tech and teaching practices, many more just want to go about the business of teaching with the new learning tools that we have available.

As the Education Innovation Office or as a learning technologist, I’m kind of the helpdesk guy for the LMS and other things. Neither of these groups necessarily thinks of me in terms of broader educational design (that might or might not be enhanced with Ed Tech). So I’ve been thinking a lot lately about rebranding to something more like Education Support or Education Excellence. (I’ve heard whispers of a Teaching Eminence project in the wind – though I haven’t yet been involved in any discussions)

But here’s the kicker – education technology and innovation isn’t just about better teaching and learning. For the college or the university, it’s an opportunity to demonstrate that yes, we are keeping up with the Joneses, we are progressive, we are thought leaders. We do have projects that can be used to excite our prospective students, industry partners, alumni, government and benefactors. On this level, keeping “innovation” or “technology” in a title is far more important. So, pragmatically, if being connected to the “exciting” parts of university activity can help me to further the work that I’m doing, it would seem crazy not to.

There was some debate about whether our role is to push, lead or guide new practice. I think this was largely centred on differences of opinion on approaches to working with academics. As I’ve mentioned, my personal approach is about understanding their specific teaching needs (removing technology from the conversation where possible) and recommending the best solutions (tech and pedagogic). Other people felt that as the local “experts”, we have a responsibility to push new innovations for the good of the organisation.

“Personally I’m not shy about having at least some expertise and if our places of work claim to be educational institutions then we have a right to attempt to make that the case… It’s part of our responsibility to challenge expectations and improve practices as well” (David White)

“we should pitch ‘exposure therapy’ – come up with a whole programme that immerses teaching staff in educational technology, deny them the choice of “I want to do it the old fashioned way” so that they will realise the potential that technologies can have in changing not only, say, the speed and convenience of delivery of materials (dropbox model), but can actually change their teaching practice.” (Sonja Grussendorf)

I do have a handful of personal Ed tech hobby horses that the College hasn’t yet taken an interest in (digital badges, ePortfolios, gamification) that I have advocated with minimal success and I have to concede that I think this is largely because I didn’t link these effectively enough to teaching and learning needs. Other participants held similar views.

“Don’t force people to use the lift – convince them of the advantages, or better still let their colleagues convince them.” (Andrew Dixon)

A final thought that these discussions triggered in me – though I didn’t particularly raise it on the board – came from the elephant in the room that while we might be the “experts” – or at least have expertise worth heeding – we are here having this discussion because we feel that our advice isn’t always listened to.

Is this possibly due to an academic / professional staff divide? Universities exist for research and teaching and these are the things most highly valued – understandably. Nobody will deny that a university couldn’t function in the day to day without the work of professional staff but perhaps there is a hierarchy/class system in which some opinions inherently carry less weight. Not all opinions, of course – academics will gladly accept the advice of the HR person about their leave allocation or the IT person in setting up their computer – but when we move into the territory of teaching and learning, scholarship if you like, perhaps there is an almost unconscious disconnect occurring.

I don’t say this with any particular judgement and I don’t even know if it’s true – my PhD supervisor was aghast when I suggested it to her – but everyone has unconscious biases and maybe this is one of them.

Just a thought anyway but I’d really like to hear your thoughts on this or anything else that I’ve covered in this post.

Next time, on Screenface.net – the role of technology in shaping change and some practical ideas for action.

 

 

Thoughts on: “What works and why?” OLT Project 2016

The Office for Learning and Teaching (OLT) is – now was – an Australian government body intended to support best practice in enhancing teaching and learning in the Higher Education sector.

It funded a number of research projects, which in 2013 included “What works and why? Understanding successful technology enhanced learning within institutional contexts” – driven by Monash University in Victoria and Griffith University in Queensland and led by Neil Selwyn.

The final report for this project has now been published.
They also have a project website running at http://bit.ly/TELwhatworksandwhy

Rather than focussing on the “state of the art”, the project focuses on the “state of the actual” – the current implementations of TELT practices in universities that are having some measure of success. It might not be the most inspiring list (more on that shortly) but it is valuable to have a snapshot of where we are, what educators and students value and the key issues that the executive face (or think they face) in pursuing further innovation.

The report identifies 13 conditions for successful use of Tech Enhanced Learning in the institution and with teachers and students. (Strictly speaking, they call it technology enabled learning, which grates with me far more than I might’ve expected – yes, it’s ultimately semantics but for me the implication is that the learning couldn’t occur without the tech and that seems untrue. So because this is my blog, I’m going to take the liberty of using enhanced)

[Graphic: ed tech conditions for success]

The authors took a measured approach to the research, beginning with a large-scale survey of teacher and student attitudes toward TEL, which provided data to inform the questions used in a number of focus groups. This then helped to identify a set of 10 instances of “promising practice” at the two participating universities that were explored in case studies. The final phase involved interviewing senior management at the 39 Australian universities to get feedback on the practicality of implementing/realising the conditions of success.

So far, so good. The authors make the point that the majority of research in the TELT field relates to more cutting edge uses in relatively specific cohorts and while this can be enlightening and exciting, it can overlook the practical realities of implementing these at scale within a university learning ecosystem. As a learning technologist, this is where I live.

What did they discover?

The most prominent ways in which digital technologies were perceived as ‘working’ for students related to the logistics of university study. These practices and activities included:

  • Organising schedules and fulfilling course requirements;
  • Time management and time-saving; and
  • Being able to engage with university studies on a ‘remote’ and/or mobile basis

One of the most prominent practices directly related to learning was using digital technologies to ‘research information’; ‘Reviewing, replaying and revising’ digital learning content (most notably accessing lecture materials and recordings) was also reported at relatively high levels.

Why technologies ‘work’ – staff perspectives

The most frequently nominated ways in which staff perceived digital technologies to be ‘working’ related to the logistics of university teaching and learning. These included being able to co-ordinate students, resources and interactions in one centralised place. This reveals a frequently encountered ‘reality’ of digital technologies in this project: technologies are currently perceived by staff and students to have a large, if not primary, role in enabling the act of being a teacher or student, rather than enabling the learning.

Nevertheless, the staff survey did demonstrate that technologies were valued as a way to support learning, including delivering instructional content and information to students in accessible and differentiated forms. This was seen to support ‘visual’ learning, and to benefit students who wanted to access content at different times and/or in different places.

So in broad terms, I’d suggest that technology in higher ed is seen pretty much exactly the same way we treat most technology – it doesn’t change our lives so much as help us to live them.

To extrapolate from that then, when we do want to implement new tools and ways of learning and teaching with technology, it is vital to make it clear to students and teachers exactly how they will benefit from it as part of the process of getting them on board. We can mandate the use of tools and people will grumblingly accept it but it is only when they value it that they will use it willingly and look for ways to improve their activities (and the tool itself).

The next phase of the research, looking at identified examples of ‘promising practice’ to develop the “conditions for success”, is a logical progression, but looking at some of the practices used, it feels like the project was aiming too low. (And I appreciate that it is a low-hanging-fruit / quick-wins kind of project and people in my sphere are by our natures more excited by the next big thing, but all the same, if we’re going to be satisfied with the bare minimum, will that stunt our growth?) In fairness, the report explicitly says “the cases were not chosen according to the most ‘interesting’, ‘innovative’ or ‘cutting-edge’ examples of technology use, but rather were chosen to demonstrate sustainable examples of TEL”

Some of the practices identified are things that I’ve gradually been pushing in my own university so naturally I think they’re top shelf innovations 🙂 – things like live polling in lectures, flipping the classroom, 3D printing and virtual simulations. Others however included the use of online forums, providing videos as supplementary material and using “online learning tools” – aka an LMS. For the final three, I’m not sure how they aren’t just considered standard parts of teaching and learning rather than something promising. (But really, it’s a small quibble I guess and I’ll move on)

The third stage asked senior management to rank the usefulness of the conditions of success that were identified from the case studies and to comment on how soon their universities would likely be in a position to demonstrate them. The authors seemed surprised by some of the responses – notably the resistance to the idea of taking “permissive approaches to configuring systems and choosing software”. As someone “on the ground” who bumps into these kinds of questions on a daily basis, this is where it became clear to me that the researchers have still been looking at this issue from a distance and with a slightly more theoretical mindset. There is no clear indication anywhere in this paper that they discussed this research with professional staff (i.e. education designers or learning technologists) who are often at the nexus of all of these kinds of issues. Trying to filter out my ‘professional hurt feelings’, it still seems a lot like a missed opportunity.

No, wait, I did just notice in the recommendations that “central university agencies” could take more responsibility for encouraging a more positive culture related to TEL among teachers.

Yup.

Moving on, I scribbled a bunch of random notes and thoughts over this report as I read it (active reading) and I might just share these in no particular order.

  • Educators is a good word. (I’m currently struggling with teachers vs lecturers vs academics)
  • How do we define how technologies are being used “successfully and effectively”?
  • Ed Tech largely being used to enrich rather than change
  • Condition of success 7: “the uses of digital technology fit with familiar ways of teaching” – scaffolded teaching
  • Condition of success 10: “educators create digital content fit for different modes of consumption” – great but it’s still an extra workload and skill set
  • dominant institutional concerns include “satisfying a perceived need for innovation that precludes more obvious or familiar ways of engaging in TEL” – no idea how we get around the need for ‘visionaries’ at the top of the tree to have big announceables that seemingly come from nowhere. Give me a good listener any day.
  • for learners to succeed with ed tech they need better digital skills (anyone who mentions digital natives automatically loses 10 points) – how do we embed this? What are the rates of voluntary uptake of existing study skills training?
  • We need to normalise new practices but innovators/early adopters should still be rewarded and recognised
  • it’s funny how quickly ed tech papers date – excitement about podcasts (which still have a place) makes this feel ancient
  • How can we best sell new practices and ideas to academics and executive? Showcases or 5 min, magazine show style video clips (like Beyond 2000 – oh I’m so old)
  • Stats about which tools students find useful – the data is frustratingly simple. The highest rating tool is “supplementing lectures, tutorials, practicals and labs” with “additional resources” at 42% (so do 58% not find it useful? Hardly a ringing endorsement)
  • Tools that students were polled about were all online tools – except e-books. Where do offline tools sit?
  • Why are students so much more comfortable using Facebook for communication and collaboration than the LMS?
  • 60% of students still using shared/provided computers over BYOD. (Be interesting to see what the figure is now)
  • Promising practice – “Illustrating the problem: digital annotation tools in large classes” – vs writing on the board?
  • conditions for success don’t acknowledge policy or legal compliance issues (privacy, IP and copyright)
  • conditions for success assume students are digitally literate
  • there’s nothing in here about training
  • unis are ok with failure in research but not teaching
  • calling practices “innovations” signals them as non-standard or exceptions – good point. Easier to ignore them
  • nothing in here about whether technology is fit for purpose

Ultimately I got a lot out of this report and will use it to spark further discussion in my own work. I think there are definitely gaps and this is great for me because it offers some direction for my own research – most particularly in the role of educational support staff and factors beyond the institution/educator/student that come into play.

Update: 18/4/16 – Dr Michael Henderson of Monash got in touch to thank me for the in-depth look at the report and to also clarify that “we did indeed interview and survey teaching staff and professional staff, including faculty based and central educational / instructional designers”

Which kind of makes sense in a study of this scale – certainly easy enough to pare back elements when you’re trying to create a compelling narrative in a final report I’m sure.

Starting a PhD

[Photo: 1 March 2016]

This is me, today, Tuesday the 1st of March 2016. This is the day that I officially start my PhD studies (is it studies or research?) with the Faculty of Education and Social Work at the University of Sydney.

Surprisingly enough, the exact topic is a work in progress, but broadly I will be looking into Technology Enhanced Learning and Teaching (TELT) practices in Higher Education, the factors that influence them and ways to better support them. My supervisor is Peter Goodyear and my associate supervisor is Lina Markauskaite, both decent seeming people that have done a lot of respected work in this and related areas.

So why am I doing it?

This is the make-or-break question I suspect. The thing that will ultimately determine whether or not I finish. Happily I think my reasons are solid.

I want to know more about this field and I want to be better at my job as a learning technologist. (I used to mock the pretension of that title but it’s grown on me). I don’t necessarily aspire to a job in academia but I do think that this will help me professionally whichever path I do end up taking.

I see the questions that I have around this field as a puzzle and one which deserves to be solved. I think that technology can be better employed in adult education to create deeper and more meaningful learning experiences for students and it disappoints me that I don’t see this happening more regularly. I’d like to better understand what factors shape TELT practices in higher education and see what can be done to better support it.

I’m grateful for the opportunity that I’ve been given in being taken on as a student. I haven’t followed the more conventional academic path to get here in terms of research based study and there is certainly some catching up to do but this just makes me more determined to succeed.

The word “scholar” was mentioned a few times last week when I attended the HDR (Higher Degree by Research) induction session and while for some reason it evokes images of 12th Century monks painstakingly writing on parchment by candlelight in a dim cell, it feels special to be a (tiny) part of this history.

I should probably go read something now. (Though surely I’ve earned a break – see, proud scholar already)

Week 4 of the 11.133x MOOC – Bringing it on home

The final week (ok 2 weeks) of the MITx – Implementation and Evaluation of Educational Technology MOOC – is now done and dusted and it’s time for that slight feeling of “what do I do now?” to kick in.

This final section focused on sound evaluation processes – both formative and summative – during and after your ed tech implementation. This whole MOOC has had a very smooth, organic kind of flow and this brought it to a very comfortable conclusion.

Ilona Holland shared some particularly useful ideas about areas to emphasise in the evaluation stage: appeal (engagement), interest (sparking a desire to go further), comprehension, pace and usability. She and David Reider clarified the difference between evaluation and research – largely that in an evaluation you go in without a hypothesis and just note what you are seeing.

In keeping with the rest of these posts, I’ll add the assignment work that I did for this final unit as well as my overall reflections. Spoiler alert though, if you work with educational technology (and I assume you do if you are reading this blog), this is one of the best online courses that I’ve ever done and I highly recommend signing up for the next one.


 

Assessment 4 – Evaluation process.

  1. Decide why you are evaluating. Is it just to determine if your intervention is improving learner’s skills and/or performance? Is it because certain stakeholders require you to?

We will evaluate this project because it is an important part of the process of implementing any educational technology. We need to be confident that this project is worth proceeding with at a larger scale. It will also provide supporting evidence to use when approaching other colleges in the university to share the cost of a site-wide license.

  2. Tell us about your vision of success for the implementation. This step is useful for purposes of the course. Be specific. Instead of saying “All students will now be experts at quadratic equations,” consider if you would like to see a certain percentage of students be able to move more quickly through material or successfully complete more challenging problems.

Our goal in using PollEverywhere in lectures is to increase student engagement and understanding and to reduce the number of questions that students need to ask the lecturer after the lecture.

A secondary goal would be to increase the number of students attending lectures.

Engagement seems like a difficult thing to quantify but we could aim for a 10% increase in average student grades in assessments based on lecture content. We could also aim for lecturers receiving 10% fewer student questions during the week about lecture content. A 20% increase in attendance also would be a success.

  3. Generate questions that will guide the evaluation. What do you need and want to know regarding the efficacy of your implementation? Are there questions that other stakeholders care about that should also be included? Think about your desired goals and outcomes for the implementation.

Questions for students:

I find lectures engaging
I am more likely to attend lectures now because of the use of PollEverywhere
I find PollEverywhere easy to use
PollEverywhere works reliably for me
The use of PollEverywhere feedback in lectures has helped deepen my understanding of the content

Questions for lecturers:

I have found PollEverywhere easy to use
PollEverywhere works reliably for me in lectures
PollEverywhere has helped me evaluate and adjust my lectures
Fewer students ask me questions between lectures since I started using PollEverywhere
Students seem more engaged now

  4. Determine what data and information you need to address the questions and how you will collect it. This could be qualitative or quantitative. You might consider observing teachers and students in action or conducting surveys and interviews. You might look at test performance, participation rates, completion rates, etc. It will depend on what is appropriate for your context.

Pre-use survey of students relating to engagement in lectures and their attitudes towards lectures
Observation of classes using PollEverywhere in lectures and student activity/engagement
Lecture attendance numbers?
Use of PollEverywhere near the end of lectures to gather student feedback
Comparison of assessment grade averages
Feedback from students in tutorials
University SELS (Student Experience of Learning Support) and SET (Student Experience of Teaching) surveys
Data derived directly from Poll results

  5. Consider how you would communicate results and explain if certain results would cause you to modify the implementation. In a real evaluation, you would analyze information and draw conclusions. Since your course project is a plan, we will skip to this step.

The quantitative data (changes in grades, results from polls in lectures, student surveys, attendance estimates) could be collated and presented in a report for circulation around the college. We could also make a presentation at our annual teaching and learning day – which could incorporate use of the tool.

Qualitative data could be built into case studies and a guide to the practical use of the tool.

Evidence emerging during the trial period could be acted on quickly by discussing alternatives with the pilot group and making changes to the way that the tool is used. This might include changing the phrasing of questions, requesting that students with Twitter access use this option for responding to the poll, or exploring alternative methods of displaying the PollEverywhere results (if PowerPoint is problematic).

Part 5: Reflection

What was difficult about creating your plan? What was easy?

Generally speaking, coming up with the plan overall was a fairly painless experience. The most complicated part was developing tools to identify and evaluate the most appropriate options. This was because the guest speakers gave me so many ideas that it took a while to frame them in a way that made sense to me and which offered a comprehensive process to work through. (This ended up being 3-4 separate documents but I’m fairly happy with all of them as a starting point).

As with all of the activities, once I had discovered the approach that worked for me and was able to see how everyone else was approaching the question, things seemed to fall into place fairly smoothly.

What parts of the course were the most helpful? Why? Did you find certain course materials to be especially useful?

I think I have a fairly process oriented way of thinking – I like seeing how things fit together and how they relate to the things that come before and after. So the sections that dug down into the detail of processes – section 2 and section 4 with the evaluation plans – appealed the most to me.

I can understand that the majority of people working with education technology are in the K-12 area, and so it makes sense that this is where many of the guest experts came from, but this did sometimes seem slightly removed from my own experiences. I had to do a certain amount of “translating” of ideas to spark my own ideas.

What about peer feedback? How did your experiences in the 11.133x learning community help shape your project?

Peer feedback was particularly rewarding. A few people were able to help me think about things in new ways and many were just very encouraging. I really enjoyed being able to share my ideas with other people about their projects as well and to see a range of different approaches to this work.

General observations

I’ve started (and quit) a few MOOCs now and this was easily the most rewarding. No doubt partially because it has direct relevance to my existing work and because I was able to apply it in a meaningful way to an actual work task that happened to come up at the same time.

I had certain expectations of how my project was going to go and I was pleased that I ended up heading in a different direction as a result of the work that we did. This work has also helped equip me with the skills and knowledge that I need to explain to a teacher why their preferred option isn’t the best one – and provide a more feasible alternative.

While it may not necessarily work for your edX stats, I also appreciated the fact that this was a relatively intimate MOOC – it made dealing with the forum posts feel manageable. (I’ve been in MOOCs where the first time you log in you can see 100+ pages of intro posts and this just seems insurmountable). It felt more like a community.

I liked the idea of the interest groups in the forum (and the working groups) but their purpose seemed unclear (beyond broad ideals of communities of practice) and after a short time I stopped visiting. (I also have a personal preference for individual rather than group work, so that was no doubt a part of this)

I also stopped watching the videos after a while and just read the transcripts as this was much faster. I’d think about shorter, more tightly edited videos – or perhaps shorter videos for conceptual essentials mixed with more conversational case-study videos (marked optional)

Most of the events didn’t really suit my timezone (Eastern Australia) but I liked that they were happening. The final hangout did work for me but I hadn’t had a chance to work on the relevant topic and was also a little caught up with work at the time.

All in all though, great work MOOC team and thanks.

(I also really appreciated having some of my posts highlighted – it’s a real motivator)

Week 3 of the 11.133x MOOC – On fire

In comparison to last week, I ploughed through this week’s work. Probably because I have a separate education tech training event on this week – the ACODE Learning Technology Teaching Institute – and wanted to clear the decks to give this due focus.

This week in MOOC land saw us looking at the evaluation phase of a project. Once more, the videos and examples were a little (ok entirely) focussed on K-12 examples (and a detour into ed tech startups that was slightly underwhelming) but there were enough points of interest to keep my attention.

I even dipped back into GoAnimate for one of the learning activities, which was fun.

The primary focus was on considering barriers to success before starting an implementation and it did introduce me to a very solid tool created by Jennifer Groff of MIT that provides a map of potential barriers across 6 areas. Highly recommend checking it out.

Groff, Jennifer, and Chrystalla Mouza. 2008. “A Framework for Addressing Challenges to Classroom Technology Use.” AACE Journal 16 (1): 21-46.

As for my contributions, as ever, I’ll add them now.


 

Barriers and Opportunities for using PollEverywhere in uni lectures. 

There are a number of potential barriers to the success of this project, however I believe that several of them have been considered and hopefully addressed in the evaluation process that led us to our choice of PollEverywhere. One of them only came up on Friday though and it will be interesting to see how it plays out.

1 School – Commercial Relationships

Last week I learnt that a manager in the school that has been interested in this project (my college is made up of four schools) has been speaking to a vendor and has arranged for them to come and make a presentation about their student/lecture response tool. (Pearson – Learning Catalytics). Interestingly this wasn’t a tool on my radar in the evaluation process – it didn’t come up in research at all. A brief look at the specs for the tool (without testing) indicates though that it doesn’t meet several of our key needs.

I believe that we may be talking to this vendor about some of their other products but I’m not sure what significance this has in our consideration of this specific product. The best thing that I can do is to put the new product through the same evaluation process as the one that I have selected and make the case based on selection criteria. We have also purchased a license for PollEverywhere for trialling, so this project will proceed anyway. We may just need to focus on a pilot group from other schools.

2 School – Resistance to centralisation

Another potential obstacle may come from one of our more fiercely independent schools. They have a very strong belief in maintaining their autonomy and “academic freedom” and have historically resisted ideas from the college.

There isn’t a lot that can be done about this other than inviting them to participate and showcasing the results after the piloting phase is complete.

3 School – network limitations

This is unfortunately not something that we can really prepare for. We don’t know how well our wireless network will support 300+ students trying to access a site simultaneously. This was a key factor in the decision to use a tool that enables students to participate via SMS/text, website/app and Twitter.

We will ask lecturers to encourage students to use varied submission options. If the tool proves successful, we could potentially upgrade the wireless access points.

4 Teacher – Technical ability to use the tool

While I tried to select a tool that appears to be quite user-friendly, there are still aspects of it that could be confusing. In the pilot phase, I will develop detailed how-to resources (both video and print) and provide practical training to lecturers before they use the tools.

5 Teacher – Technical

PollEverywhere offers a plug-in that enables lecturers to display polls directly in their PowerPoint slides. Lecturers don’t have permission to install software on their computers, so I will work with our I.T team to ensure that this is made available.

6 Teacher – Pedagogy

Poorly worded or timed questions could reduce student engagement. During the training phase of the pilot program, I will discuss the approach that the teacher intends to take in their questions. (E.g. consider asking “did I explain that clearly?” vs “do you understand that?”)

Opportunities

Beyond the obvious opportunity to enhance student engagement in lectures, I can see a few other potential benefits to this project.

Raise the profile of Educational technology

A successful implementation of a tool that meshes well with existing practice will show that change can be beneficial, incremental and manageable.

Open discussion of current practices

Providing solid evidence of improvements in practices may offer a jumping off point for wider discussion of other ways to enhance student engagement and interaction.

Showcase and share innovative practices with other colleges

A successful implementation could lead to greater collegiality by providing opportunities to share new approaches with colleagues in other parts of the university.

Timeline

This isn’t incredibly detailed yet but is the direction I am looking at. (Issues in brackets)

    Develop how-to resources for both students and lecturers (3)
    Identify pilot participants (1,2)
    Train / support participants (3,4,6)
    Live testing in lectures (5)
    Gather feedback and refine
    Present results to college
    Extend pilot (repeat cycle)
    Share with university

 

Oh and here is the GoAnimate video. (Don’t judge me)

http://goanimate.com/videos/0e8QxnJgGKf0?utm_source=linkshare&utm_medium=linkshare&utm_campaign=usercontent

Week 2 of the 11.133x MOOC – getting things done. (Gradually)

The second week (well fortnight really) of the 11.133x MOOC moved us on to developing some resources that will help us to evaluate education technology and make a selection.

Because I’m applying this to an actual project (two birds with one stone and all) at work, it took me a little longer than I’d hoped but I’m still keeping up. It was actually fairly enlightening because the tool that I had assumed we would end up using wasn’t the one that was shown to be the most appropriate for our needs. I was also able to develop a set of resources (and the start of a really horribly messy flowchart) that my team will be able to use for evaluating technology down the road.

I’m just going to copy/paste the posts that I made in the MOOC – with the tools – as I think they explain what I did better than glibly trying to rehash it here on the fly.


 

Four tools for identifying and evaluating educational technology

I’ve been caught up with work things this week so it’s taken me a little while to get back to this assignment, but I’m glad as it has enabled me to see the approaches that other people have been taking and clarify my ideas a little.

My biggest challenge is that I started this MOOC with a fairly specific Ed Tech project in mind – identifying the best option in student lecture instant response systems. The assignment however asks us to consider tools that might support evaluating Ed Tech in broader terms and I can definitely see the value in this as well. This has started me thinking that there are actually several stages in this process that would probably be best supported by very different tools.

One thing that I have noticed (and disagreed with) in the approaches that some people have taken has been that the tools seem to begin with the assumption that the type of technology is selected and then the educational/pedagogical strengths of this tool are assessed. This seems completely backwards to me as I would argue that we need to look at the educational need first and then try to map it to a type of technology.

In my case, the need/problem is that student engagement in lectures is low and a possible solution is that the lecturer/teacher would like to get better feedback about how much the students are understanding in real time so that she can adjust the content/delivery if needed.

Matching the educational need to the right tool

When I started working on this I thought that this process required three separate steps – a flowchart to point to suitable types of technology, a checklist to see whether it would be suitable and then a rubric to compare products.

As I developed these, I realised that we also need to clearly identify the teacher’s educational needs for the technology, so I have added a short survey about this here, at the beginning of this stage.

I also think that a flowchart (ideally interactive) could be a helpful tool in this stage of identifying technology. (There is a link to the beginning of the flowchart below)

I have been working on a model that covers 6 key areas of teaching and learning activity that I think could act as the starting point for this flowchart but I recognise that such a tool would require a huge amount of work so I have just started with an example of how this might look. (Given that I have already identified the type of tool that I’m looking at for my project, I’m going to focus more on the tool to select the specific application)

I also recognise that even for my scenario, the starting point could be Communication or Reflection/Feedback, so this could be a very messy and large tool.

The key activities of teaching/learning are:
• Sharing content
• Communication
• Managing students
• Assessment tasks
• Practical activities
• Reflection / Feedback

I have created a Padlet at http://padlet.com/gamerlearner/edTechFlowchart and a LucidChart at https://www.lucidchart.com/invitations/accept/6645af78-85fd-4dcd-92fe-998149cf68b2 if you are interested in sharing ideas for types of tools, questions or feel like helping me to build this flowchart.

I haven’t built many flowcharts (as my example surely demonstrates) but I think that if it was possible to remove irrelevant options by clicking on sections, this could be achievable.
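
To make the flowchart idea slightly more concrete, here’s a very rough sketch in Python of how the activity-to-tool-type mapping might be represented and queried. The tool categories listed are purely illustrative placeholders I’ve made up for this sketch, not the actual content of the flowchart.

```python
# Purely illustrative mapping from teaching/learning activity to candidate tool types.
# The six activities are the ones listed above; the tool categories are examples only.
TOOL_TYPES = {
    "Sharing content": ["LMS file areas", "video platforms", "reading lists"],
    "Communication": ["discussion forums", "announcements", "messaging apps"],
    "Managing students": ["LMS gradebook", "group management tools"],
    "Assessment tasks": ["online quizzes", "assignment submission", "ePortfolios"],
    "Practical activities": ["simulations", "virtual labs"],
    "Reflection / Feedback": ["live polling / clickers", "surveys", "peer review tools"],
}

def suggest_tool_types(activity: str) -> list[str]:
    """Return candidate tool categories for a given teaching/learning activity."""
    return TOOL_TYPES.get(activity, [])

# For the lecture engagement scenario, the starting point might be:
print(suggest_tool_types("Reflection / Feedback"))
# ['live polling / clickers', 'surveys', 'peer review tools']
```

An interactive version would obviously ask follow-up questions at each branch rather than just returning a list, but even this flat mapping shows the direction I have in mind.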

Is the technology worthwhile?

The second phase of this evaluation lets us look more closely at the features of a type of technology to determine whether it is worth pursuing. I would say that there are general criteria that will apply to any type of technology and there would also need to be specific criteria for the use case. (E.g. for my lecture clicker use case, it will need to support 350+ users – not all platforms/apps will do this but as long as some can, it should be considered suitable)

Within this there are also essential criteria and nice-to-have criteria. If a tool can’t meet the essential criteria then it isn’t fit for purpose, so I would say that a simple checklist should be sufficient as a tool will either meet a need or it won’t. (This stage may require some research and understanding of the available options first). This stage should also make it possible to compare different types of platforms/tools that could address the same educational needs. (In my case, for example, providing physical hardware based “clickers” vs using mobile/web based apps)

This checklist should address general needs which I have broken down by student, teacher and organisational needs that could be applied to any educational need. It should also include scenario specific criteria.
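
As a sketch of how that checklist might work in practice, the essential items act as a simple pass/fail filter while the nice-to-haves are just noted. The criteria below are invented for illustration (loosely based on my clicker scenario) – the real ones live in the checklist document.

```python
# Invented criteria for illustration only - the real ones are in the checklist document.
ESSENTIAL = {
    "supports 350+ simultaneous users",
    "works on student-owned devices (web/mobile)",
    "no cost to students",
}
NICE_TO_HAVE = {"SMS/text response option", "PowerPoint integration"}

def passes_checklist(tool_features: set[str]) -> bool:
    """A tool is only fit for purpose if it meets every essential criterion."""
    return ESSENTIAL <= tool_features

candidate = {
    "supports 350+ simultaneous users",
    "works on student-owned devices (web/mobile)",
    "no cost to students",
    "SMS/text response option",
}
print(passes_checklist(candidate))  # True - all essential criteria met
print(NICE_TO_HAVE & candidate)     # {'SMS/text response option'}
```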

Evaluating products

It’s hard to know exactly what the quality of the tool or the learning experiences will be. We need to make assumptions based on the information that is available. I would recommend some initial testing wherever possible. I’m not convinced that it is possible to determine the quality of the learning outcomes from using the tool, so I have excluded these from the rubric. Some of the criteria could be applied to any educational technology and some are specifically relevant to the student response / clicker tool that I am investigating.
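
And a similarly rough sketch of the comparison rubric itself – weighted criteria scored per product. The weights, criteria, tool names and scores below are invented for illustration; the real ones are in the rubric document.

```python
# Invented weights and scores for illustration; the real criteria come from the rubric.
WEIGHTS = {"ease of use": 3, "reliability at scale": 3, "response options": 2, "cost": 2}

scores = {
    "Tool A": {"ease of use": 4, "reliability at scale": 3, "response options": 5, "cost": 3},
    "Tool B": {"ease of use": 5, "reliability at scale": 2, "response options": 3, "cost": 4},
}

def weighted_total(tool_scores: dict[str, int]) -> int:
    """Sum of each criterion score multiplied by its weight."""
    return sum(WEIGHTS[criterion] * score for criterion, score in tool_scores.items())

# Rank the candidate tools by weighted total, highest first.
for tool, s in sorted(scores.items(), key=lambda kv: weighted_total(kv[1]), reverse=True):
    print(tool, weighted_total(s))
```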


 

Lecture Response System pitch

This was slightly rushed but it does reflect the results of the actual evaluation that I have carried out into this technology so far. (I’m still waiting to have some questions answered from one of the products)

I have emphasised the learning needs that we identified, looked quickly at the key factors in the evaluation and then presented the main selling points of the particular tool. From there I would encourage the teacher/lecturer to speak further to me about the finer details of the tool and our implementation plan.

Any thoughts or feedback on this would be most welcome.

Edit: I’ve realised that I missed some of the questions – well kind of.
The biggest challenge will be how our network copes with 350+ people trying to connect to something at once. The internet and phone texting options were among the appealing parts of the tool in this regard, as they will hopefully reduce this number.

Awesome would look like large numbers of responses to poll questions and the lecturer being able to adjust their teaching style – either re-explaining a concept or moving to a new one – based on the student responses.


 

These are the documents from these two assignments.

Lecture Response Systems Pitch · Clickers EdTech Evaluation Rubric · Educational Technology Needs Survey · Education Technology Checklist

Week 1 of the 11.133x MOOC – tick.

MOOCs often take a little while to ramp up but the MITx 11.133x Implementation yada yada – I’m going to stick with 11.133x for now, that’s a long title – MOOC feels like it’s just about right.

There has been a fair whack of the standard common sense in the videos so far – have a purpose, don’t choose the technology before you know what you want to do with it, stakeholders matter, etc. – but it has been well presented and by a range of different people.

There has probably been more emphasis than I’d like on looking at ed tech for K-12 schools rather than higher education, but I guess that is a larger chunk of the audience. The ability to form/join affinity groups in the forums has at least let me connect with other uni people around the world.

In terms of practical activities, it has really REALLY helped to come to this MOOC with a project in mind. I’m looking for a live student response/feedback tool (most likely web/app based) that can be used in lectures (large lectures 350+) to poll students about their understanding of content.

This fed well into our first two activities, which involved looking at the context that this project will occur in and considering whether it sits in a general or specific domain and whether it will change procedure or instruction. (I’ll post my responses to both below)

Responding to other posts – you need to respond to at least three to complete the module – helps to clarify some of the concepts. I have a feeling that this isn’t a huge MOOC either – there aren’t hundreds of pages of responses to each question in the forums, which can be hellish to process.

Profile your implementation context

Target environment
I work in the College of Business and Economics in a leading Australian university. We’re relatively well resourced, so buying new tech generally isn’t an issue within reason, which allows me to focus on the suitability of the tool. We have large numbers of international students in our undergraduate cohort. The majority of students are comfortable with mobile and online technology. At an undergraduate level, the students tend to be young adults.

The college is comparatively conservative in some ways – although fortunately our leadership understands and supports the value of innovation. There is an emphasis placed on the seriousness and prestige of our brand that I need to factor into the look and feel of college associated tools.
There is a range of acceptance and engagement with learning technology from academics in the college, from enthusiasm to resistance to change. (Last week I had a long conversation with someone about why he still needs an overhead projector – we’re getting him one)
Our largest lecture theatres can hold up to 600 people (which is big for us) and the college wi-fi has recently been upgraded.

Key stakeholder

Recently one of our finance lecturers contacted me – I’m the learning technology person for the college – and asked what we have in the way of live student response/feedback systems. Tools that will enable her to post survey/understanding questions on screen during lectures and get real-time responses from students via mobile/web apps.

She is relatively new to the college and lectures to a group of 350+ students. (This is relatively large for us although some of our foundation subjects have 800+ students). She is keen to enhance the interactivity of her lectures but is also concerned about finding the right tool. She really doesn’t want any technology failures during her lectures as she believes that this will kill student trust in this kind of technology. She would also prefer not to trial multiple tools on her students as she is concerned that experimenting on students may diminish their educational experience.

Potential for the technology
There has been a lot of ongoing discussion at the university in recent years about the effectiveness of lectures. Attendance rates are around 30% in many disciplines, due to student work/life commitments, recording of lectures and a host of other reasons.

The lecture format itself is being questioned; however, it is deeply ingrained in many parts of the culture, so finding ways to augment and enhance the lecture experience seems like a more effective approach.
Student response/feedback apps can be a powerful way to instantly track understanding of and engagement with material in lectures, and I am keen to see what we can do with them. While some students may feel confident asking questions in a lecture, others may feel uncomfortable doing so for cultural reasons or because English is their second language.

The lecturer has already been in contact with a supplier of a particular platform; however, I have some reservations, as on a preliminary investigation their product appears to provide much more functionality than we need and may be unnecessarily complicated. I’m hoping that this MOOC will help me to work through this process.

Domain / Approach Chart

Socrative

This seems like a bit of a cop-out given that the example provided was PollEverywhere, but if you check my previous post, you’ll see that I’m looking for a tool to use for live student feedback in lectures.

Socrative is one of several tools that I’m considering to meet this need. It is a basic online tool that enables a teacher to create a quiz/survey question, show it to the class through a data projector and then have the students respond (generally via multiple choice) using an app on their phone or a web browser.

Of the ones that I’ve seen so far, it’s easy to set up and seems to work fairly well. (I blogged a comparison between it and Kahoot a while ago)

I’d say that it is Domain General because it can be used widely, and that it is more about changing procedure, because without it a teacher could just ask for a show of hands instead. (I think this will get a better response though, because it is less embarrassing for students.)

My main concern with Socrative for my project is that the website says it is best used with classes of 50 or fewer, and I am looking for something that supports 350+.

I swear this MOOC is going to stick

Completion rates for MOOCs are ridiculously low – and my completion rate specifically is appalling. I did successfully complete Kevin Werbach’s Coursera MOOC on Gamification (which I can recommend, although it is business- rather than education-focussed) but aside from that there has been a long string of MOOCs that I have signed up for and then slunk away from after a week or two. Most recently this includes ANU’s Edx MOOC on Ignorance. Why did I sign up for that? No idea.

The new MITx (Edx) MOOC, however, seems like it was made for me: Implementation and Evaluation of Educational Technology. It starts today, so if you’re interested, there’s still time to get on board.

Edx course title screenshot

This is a MOOC that ties directly to my work as a learning technologist and for which I even have a learning outcome / project in mind. I’ve been asked to find a good in-class instant response system (polling/multi-choice) to get better live learner feedback in lectures.

I’ve also read the research indicating that people who pay a small fee are far more likely to complete a MOOC than average participants, so I’ve signed up for a verified certificate.

Now I think I might try to find some study buddies to ratchet up the pressure a little further.

How do you stay motivated in a MOOC? What is your complete/abandon ratio like?

 

Delivering DDLR & DDeLR: Reflections

Life got quite busy in the last few weeks, so screenface had to go on the back-burner for a little while. I think it’s worth taking a look at what happened with the DDLR & DDeLR (Design & Develop Learning/eLearning Resources) subjects and what I might do with them next time.

What happened?

The majority of teachers taking the DDLR subjects have a reasonable expectation that this is a class where they will be able to develop some rich skills in using our eLearning platform to make new things for their students.

The units and elements of competency, however, are heavily focused on a design and development process for learning resources. The assessments for the teachers (who are the students in this case) hinge on providing evidence that they have considered the characteristics of their cohorts and mapped out a plan for whatever resource they are building. (This should include documenting necessary materials, sources of support, and plans for possible contingencies that may arise.) They then need to create the resource, test it with peers or students and make refinements to it before final implementation.

All in all, sensible practice and (I assume) something that most teachers already do as a matter of course in their teaching practice. (Whether or not they formally name the steps in the process is another matter)

What the units and elements of competency don’t particularly care about is what the teachers learn about in terms of usability, readability, general design principles and, of course, the use of a range of new technological tools to get it all done. (Which is what they are most interested in addressing)

So we already have tensions built into the subject in the conflict between what the teachers want and need and what they have to demonstrate and be assessed on.

While we started with a full house of 14 people on the first day, numbers quickly dwindled to a dedicated core of 6–7. (A number of factors came into play here, including personal issues for a couple of the cohort and running this subject at the very end of semester, when these teachers are themselves inundated with their own grading and teaching responsibilities.)

For those that remained, we were able to provide what I hope was an engaging range of activities and training in design principles for usability, copyright and the use of our eLearning platform. (I was well supported by a member of my team – Jo – who also kindly filled in for me when I was away).

Assessment items have been slow to come in – possibly due to the onerous nature of evidence requirements for the subjects. Learners are required to provide 4 draft learning resources (with accompanying design documents and student group profiles) of which 2 are then tested and refined into final learning resources.

I tried to streamline this process by having the class work on a draft learning resource in the first week – a checklist that might be used to test the quality of their other learning resources. There has been a fair amount of confusion about this, and I need to consider whether it is worth trying again and how I might better explain the concept.

The idea was to get the class thinking about important qualities in their learning resources and also to get some more buy-in to their own assessments, by effectively designing part of their grading tool. (This is not a graded subject, but my intention was that by having them use their learning resource checklist on their other resources, they would be more mindful of issues relating to pedagogy, content and technology.)

What have I learned?

I need to lower my expectations of what can be achieved in the first lesson. We were beset with technical and enrollment questions that disrupted my carefully planned series of tasks and activities.

I had also put too much faith in the technical skills of the cohort and their ability to effectively use our LMS. I tried to do too many clever things – setting up conditional release on activities so that the learners could only access certain activities or resources after completing others.

I didn’t provide sufficient information about how the class should submit assessment items, which came from their own development courses in our LMS. The assessments were set up with an assignment dropbox to receive files. I ended up telling people to create a Word document with some screenshots and a link to the resources that they had created, but this should have been explicitly stated in the assessment instructions.

I am happy that I was able to be flexible enough with the course to ask the learners what tools they were most interested in learning about and reshaping the course to accommodate this. A core principle of adult learning is that adults need to see the value in what they are being taught and this was an easy way to achieve this.

I’ve been able to speak to the previous teacher of this subject and she also struggled with a number of these issues – hopefully input from a wider group of colleagues might offer some solutions.