Monthly Archives: August 2015

Week 4 of the 11.133x MOOC – Bringing it on home

The final week (ok, two weeks) of the MITx Implementation and Evaluation of Educational Technology MOOC is now done and dusted, and it’s time for that slight feeling of “what do I do now?” to kick in.

This final section focused on sound evaluation processes – both formative and summative – during and after your ed tech implementation. This whole MOOC has had a very smooth, organic kind of flow and this brought it to a very comfortable conclusion.

Ilona Holland shared some particularly useful ideas about areas to emphasise in the evaluation stage: appeal (engagement), interest (sparking a desire to go further), comprehension, pace and usability. She and David Reider clarified the difference between evaluation and research – largely that in an evaluation you go in without a hypothesis and just note what you are seeing.

In keeping with the rest of these posts, I’ll add the assignment work that I did for this final unit as well as my overall reflections. Spoiler alert though, if you work with educational technology (and I assume you do if you are reading this blog), this is one of the best online courses that I’ve ever done and I highly recommend signing up for the next one.


 

Assessment 4 – Evaluation process.

  1. Decide why you are evaluating. Is it just to determine if your intervention is improving learners’ skills and/or performance? Is it because certain stakeholders require you to?

We will evaluate this project because it is an important part of the process of implementing any educational technology. We need to be confident that this project is worth proceeding with at a larger scale. It will also provide supporting evidence to use when approaching other colleges in the university to share the cost of a site-wide license.

  2. Tell us about your vision of success for the implementation. This step is useful for purposes of the course. Be specific. Instead of saying “All students will now be experts at quadratic equations,” consider if you would like to see a certain percentage of students be able to move more quickly through material or successfully complete more challenging problems.

Our goal in using PollEverywhere in lectures is to increase student engagement and understanding and to reduce the number of questions that students need to ask the lecturer after the lecture.

A secondary goal would be to increase the number of students attending lectures.

Engagement seems like a difficult thing to quantify, but we could aim for a 10% increase in average student grades in assessments based on lecture content. We could also aim for lecturers receiving 10% fewer student questions about lecture content during the week. A 20% increase in attendance would also be a success.
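To make those targets concrete, here is a minimal sketch (Python, with entirely made-up before/after figures) of how the three success criteria could be checked once the pilot data is in.

```python
# Hypothetical before/after figures for the pilot - for illustration only.
baseline = {"avg_grade": 62.0, "questions_per_week": 40, "attendance": 180}
pilot    = {"avg_grade": 69.5, "questions_per_week": 34, "attendance": 221}

def pct_change(before, after):
    """Percentage change relative to the baseline value."""
    return (after - before) / before * 100

targets = {
    "avg_grade": 10,            # at least a 10% increase in average grades
    "questions_per_week": -10,  # at least 10% fewer questions to lecturers
    "attendance": 20,           # at least a 20% increase in lecture attendance
}

for measure, target in targets.items():
    change = pct_change(baseline[measure], pilot[measure])
    # A negative target means we are looking for a decrease of at least that size.
    met = change <= target if target < 0 else change >= target
    print(f"{measure}: {change:+.1f}% (target {target:+d}%) -> {'met' if met else 'not met'}")
```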

  3. Generate questions that will guide the evaluation. What do you need and want to know regarding the efficacy of your implementation? Are there questions that other stakeholders care about that should also be included? Think about your desired goals and outcomes for the implementation.

Questions for students:

I find lectures engaging
I am more likely to attend lectures now because of the use of PollEverywhere
I find PollEverywhere easy to use
PollEverywhere works reliably for me
The use of PollEverywhere feedback in lectures has helped deepen my understanding of the content

Questions for lecturers:

I have found PollEverywhere easy to use
PollEverywhere works reliably for me in lectures
PollEverywhere has helped me evaluate and adjust my lectures
Fewer students ask me questions between lectures since I started using PollEverywhere
Students seem more engaged now

  4. Determine what data and information you need to address the questions and how you will collect it. This could be qualitative or quantitative. You might consider observing teachers and students in action or conducting surveys and interviews. You might look at test performance, participation rates, completion rates, etc. It will depend on what is appropriate for your context.

Pre-use survey of students relating to engagement in lectures and their attitudes towards lectures
Observation of classes using PollEverywhere in lectures and student activity/engagement
Lecture attendance numbers?
Use of PollEverywhere near the end of lectures to gather student feedback
Comparison of assessment grade averages
Feedback from students in tutorials
University SELS (Student Experience of Learning Support) and SET (Student Experience of Teaching) surveys
Data derived directly from Poll results
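As a rough illustration of how the survey side of this data might be summarised, here is a small sketch that assumes the student statements above are asked as 1–5 Likert items; the responses are invented purely to show the mechanics.

```python
from statistics import mean

# Invented 1-5 Likert responses (1 = strongly disagree, 5 = strongly agree),
# keyed by a few of the student survey statements above - illustration only.
responses = {
    "I find lectures engaging": [4, 3, 5, 4, 2, 4, 5, 3],
    "I find PollEverywhere easy to use": [5, 4, 4, 5, 3, 4, 5, 4],
    "PollEverywhere works reliably for me": [3, 4, 2, 4, 4, 3, 5, 4],
}

for statement, scores in responses.items():
    # Treat a rating of 4 or 5 as agreement with the statement.
    agree = sum(s >= 4 for s in scores) / len(scores) * 100
    print(f"{statement}: mean {mean(scores):.1f}, {agree:.0f}% agree")
```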

  5. Consider how you would communicate results and explain if certain results would cause you to modify the implementation. In a real evaluation, you would analyze information and draw conclusions. Since your course project is a plan, we will skip to this step.

The quantitative data (changes in grades, results from polls in lectures, student surveys, attendance estimates) could be collated and presented in a report for circulation around the college. We could also make a presentation at our annual teaching and learning day – which could incorporate use of the tool.

Qualitative data could be built into case studies and a guide to the practical use of the tool.

Evidence emerging during the trial period could be acted on quickly by discussing alternatives with the pilot group and making changes to the way that the tool is used. This might include changing the phrasing of questions, requesting that students with Twitter access use this option for responding to the poll, or exploring alternative methods of displaying the PollEverywhere results (if PowerPoint is problematic).

Part 5: Reflection

What was difficult about creating your plan? What was easy?

Coming up with the plan was, overall, a fairly painless experience. The most complicated part was developing tools to identify and evaluate the most appropriate options. This was because the guest speakers gave me so many ideas that it took a while to frame them in a way that made sense to me and offered a comprehensive process to work through. (This ended up being 3-4 separate documents, but I’m fairly happy with all of them as a starting point.)

As with all of the activities, once I had discovered the approach that worked for me and was able to see how everyone else was approaching the question, things seemed to fall into place fairly smoothly.

What parts of the course were the most helpful? Why? Did you find certain course materials to be especially useful?

I think I have a fairly process-oriented way of thinking – I like seeing how things fit together and how they relate to the things that come before and after. So the sections that dug down into the detail of processes – section 2 and section 4 with the evaluation plans – appealed to me the most.

I understand that the majority of people working with educational technology are in the K-12 area, so it makes sense that this is where many of the guest experts came from, but it did sometimes feel slightly removed from my own experience. I had to do a certain amount of “translating” to spark my own ideas.

What about peer feedback? How did your experiences in the 11.133x learning community help shape your project?

Peer feedback was particularly rewarding. A few people were able to help me think about things in new ways, and many were just very encouraging. I really enjoyed being able to share my thoughts on other people’s projects as well, and to see a range of different approaches to this work.

General observations

I’ve started (and quit) a few MOOCs now and this was easily the most rewarding. No doubt partially because it has direct relevance to my existing work and because I was able to apply it in a meaningful way to an actual work task that happened to come up at the same time.

I had certain expectations of how my project was going to go and I was pleased that I ended up heading in a different direction as a result of the work that we did. This work has also helped equip me with the skills and knowledge that I need to explain to a teacher why their preferred option isn’t the best one – and provide a more feasible alternative.

While it may not necessarily work for your edX stats, I also appreciated the fact that this was a relatively intimate MOOC – it made dealing with the forum posts feel manageable. (I’ve been in MOOCs where the first time you log in you can see 100+ pages of intro posts, and that just seems insurmountable.) It felt more like a community.

I liked the idea of the interest groups in the forum (and the working groups) but their purpose seemed unclear (beyond broad ideals of communities of practice) and after a short time I stopped visiting. (I also have a personal preference for individual rather than group work, so that was no doubt a part of this)

I also stopped watching the videos after a while and just read the transcripts as this was much faster. I’d think about shorter, more tightly edited videos – or perhaps shorter videos for conceptual essentials mixed with more conversational case-study videos (marked optional)

Most of the events didn’t really suit my timezone (Eastern Australia) but I liked that they were happening. The final hangout did work for me but I hadn’t had a chance to work on the relevant topic and was also a little caught up with work at the time.

All in all though, great work MOOC team and thanks.

(I also really appreciated having some of my posts highlighted – it’s a real motivator)

Week 3 of the 11.133x MOOC – On fire

In comparison to last week, I ploughed through this week’s work. Probably because I have a separate ed tech training event on this week – the ACODE Learning Technology Teaching Institute – and wanted to clear the decks to give it due focus.

This week in MOOC land saw us looking at the implementation phase of a project. Once more, the videos and examples were a little (ok, entirely) focussed on K-12 examples (and a detour into ed tech startups that was slightly underwhelming), but there were enough points of interest to keep my attention.

I even dipped back into GoAnimate for one of the learning activities, which was fun.

The primary focus was on considering barriers to success before starting an implementation and it did introduce me to a very solid tool created by Jennifer Groff of MIT that provides a map of potential barriers across 6 areas. Highly recommend checking it out.

Groff, Jennifer, and Chrystalla Mouza. 2008. “A Framework for Addressing Challenges to Classroom Technology Use.” AACE Journal 16 (1): 21-46.

As for my contributions, as ever, I’ll add them now.


 

Barriers and Opportunities for using PollEverywhere in uni lectures. 

There are a number of potential barriers to the success of this project; however, I believe that several of them have been considered and hopefully addressed in the evaluation process that led us to our choice of PollEverywhere. One of them only came up on Friday, though, and it will be interesting to see how it plays out.

1 School – Commercial Relationships

Last week I learnt that a manager in the school that has been interested in this project (my college is made up of four schools) has been speaking to a vendor and has arranged for them to come and make a presentation about their student/lecture response tool (Pearson Learning Catalytics). Interestingly, this wasn’t a tool on my radar in the evaluation process – it didn’t come up in research at all. A brief look at the specs for the tool (without testing) indicates, though, that it doesn’t meet several of our key needs.

I believe that we may be talking to this vendor about some of their other products but I’m not sure what significance this has in our consideration of this specific product. The best thing that I can do is to put the new product through the same evaluation process as the one that I have selected and make the case based on selection criteria. We have also purchased a license for PollEverywhere for trialling, so this project will proceed anyway. We may just need to focus on a pilot group from other schools.

2 School – Resistance to centralisation

Another potential obstacle may come from one of our more fiercely independent schools. They have a very strong belief in maintaining their autonomy and “academic freedom” and have historically resisted ideas from the college.

There isn’t a lot that can be done about this other than inviting them to participate and showcasing the results after the piloting phase is complete.

3 School – network limitations

This is unfortunately not something that we can really prepare for. We don’t know how well our wireless network will support 300+ students trying to access a site simultaneously. This was a key factor in the decision to use a tool that enables students to participate via SMS/text, website/app and Twitter.

We will ask lecturers to encourage students to use varied submission options. If the tool proves successful, we could potentially upgrade the wireless access points.

4 Teacher – Technical ability to use the tool

While I tried to select a tool that appears to be quite user-friendly, there are still aspects of it that could be confusing. In the pilot phase, I will develop detailed how-to resources (both video and print) and provide practical training to lecturers before they use the tools.

5 Teacher – Technical (software installation)

PollEverywhere offers a plug-in that enables lecturers to display polls directly in their PowerPoint slides. Lecturers don’t have permission to install software on their computers, so I will work with our IT team to ensure that this is made available.

6 Teacher – Pedagogy

Poorly worded or poorly timed questions could reduce student engagement. During the training phase of the pilot program, I will discuss the approach that the teacher intends to take with their questions (e.g. consider asking “did I explain that clearly?” vs “do you understand that?”).

Opportunities

Beyond the obvious opportunity to enhance student engagement in lectures, I can see a few other potential benefits to this project.

Raise the profile of Educational technology

A successful implementation of a tool that meshes well with existing practice will show that change can be beneficial, incremental and manageable.

Open discussion of current practices

Providing solid evidence of improvements in practices may offer a jumping off point for wider discussion of other ways to enhance student engagement and interaction.

Showcase and share innovative practices with other colleges

A successful implementation could lead to greater collegiality by providing opportunities to share new approaches with colleagues in other parts of the university.

Timeline

This isn’t incredibly detailed yet, but it is the direction I am looking at. (The numbers in brackets refer to the barriers above.)

    Develop how-to resources for both students and lecturers (3)
    Identify pilot participants (1,2)
    Train / support participants (3,4,6)
    Live testing in lectures (5)
    Gather feedback and refine
    Present results to college
    Extend pilot (repeat cycle)
    Share with university

 

Oh and here is the GoAnimate video. (Don’t judge me)

http://goanimate.com/videos/0e8QxnJgGKf0?utm_source=linkshare&utm_medium=linkshare&utm_campaign=usercontent

Week 2 of the 11.133x MOOC – getting things done. (Gradually)

The second week (well fortnight really) of the 11.133x MOOC moved us on to developing some resources that will help us to evaluate education technology and make a selection.

Because I’m applying this to an actual project at work (two birds with one stone and all), it took me a little longer than I’d hoped, but I’m still keeping up. It was actually fairly enlightening because the tool that I had assumed we would end up using wasn’t the one that was shown to be the most appropriate for our needs. I was also able to develop a set of resources (and the start of a really horribly messy flowchart) that my team will be able to use for evaluating technology down the road.

I’m just going to copy/paste the posts that I made in the MOOC – with the tools – as I think they explain what I did better than glibly trying to rehash it here on the fly.


 

Four tools for identifying and evaluating educational technology

I’ve been caught up with work things this week so it’s taken me a little while to get back to this assignment, but I’m glad, as it has enabled me to see the approaches that other people have been taking and clarify my ideas a little.

My biggest challenge is that I started this MOOC with a fairly specific Ed Tech project in mind – identifying the best option in student lecture instant response systems. The assignment however asks us to consider tools that might support evaluating Ed Tech in broader terms and I can definitely see the value in this as well. This has started me thinking that there are actually several stages in this process that would probably be best supported by very different tools.

One thing that I have noticed (and disagreed with) in the approaches that some people have taken is that the tools seem to begin with the assumption that the type of technology has already been selected, and then the educational/pedagogical strengths of this tool are assessed. This seems completely backwards to me, as I would argue that we need to look at the educational need first and then try to map it to a type of technology.

In my case, the need/problem is that student engagement in lectures is low and a possible solution is that the lecturer/teacher would like to get better feedback about how much the students are understanding in real time so that she can adjust the content/delivery if needed.

Matching the educational need to the right tool

When I started working on this I thought that this process required three separate steps – a flowchart to point to suitable types of technology, a checklist to see whether it would be suitable and then a rubric to compare products.

As I developed these, I realised that we also need to clearly identify the teacher’s educational needs for the technology, so I have added a short survey about this here, at the beginning of this stage.

I also think that a flowchart (ideally interactive) could be a helpful tool in this stage of identifying technology. (There is a link to the beginning of the flowchart below)

I have been working on a model that covers 6 key areas of teaching and learning activity that I think could act as the starting point for this flowchart but I recognise that such a tool would require a huge amount of work so I have just started with an example of how this might look. (Given that I have already identified the type of tool that I’m looking at for my project, I’m going to focus more on the tool to select the specific application)

I also recognise that even for my scenario, the starting point could be Communication or Reflection/Feedback, so this could be a very messy and large tool.

The key activities of teaching/learning are:
• Sharing content
• Communication
• Managing students
• Assessment tasks
• Practical activities
• Reflection / Feedback

I have created a Padlet at http://padlet.com/gamerlearner/edTechFlowchart and a LucidChart at https://www.lucidchart.com/invitations/accept/6645af78-85fd-4dcd-92fe-998149cf68b2 if you are interested in sharing ideas for types of tools or questions, or feel like helping me to build this flowchart.

I haven’t built many flowcharts (as my example surely demonstrates) but I think that if it was possible to remove irrelevant options by clicking on sections, this could be achievable.
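To give a feel for how the interactive version might behave, here is a very rough sketch in Python of one invented branch of the flowchart: you start from a teaching/learning activity, answer yes/no questions, and the branches that no longer apply fall away until a type of tool is suggested.

```python
# A tiny invented slice of the flowchart: each node is either a question
# with yes/no branches, or a suggested type of technology (a leaf).
flowchart = {
    "Reflection / Feedback": {
        "question": "Do you need responses from the whole group in real time?",
        "yes": {
            "question": "Will students have their own devices in the room?",
            "yes": "Web/SMS student response system (e.g. polling tools)",
            "no": "Hardware clickers",
        },
        "no": "Asynchronous survey or discussion forum",
    },
}

def walk(node):
    """Follow the flowchart interactively until we reach a tool type."""
    while isinstance(node, dict):
        answer = input(node["question"] + " (y/n) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    return node

if __name__ == "__main__":
    print("Suggested type of tool:", walk(flowchart["Reflection / Feedback"]))
```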

Is the technology worthwhile?

The second phase of this evaluation lets us look more closely at the features of a type of technology to determine whether it is worth pursuing. I would say that there are general criteria that will apply to any type of technology and there would also need to be specific criteria for the use case. (E.g. for my lecture clicker use case, it will need to support 350+ users – not all platforms/apps will do this but as long as some can, it should be considered suitable)

Within this there are also essential criteria and nice-to-have criteria. If a tool can’t meet the essential criteria then it isn’t fit for purpose, so I would say that a simple checklist should be sufficient as a tool will either meet a need or it won’t. (This stage may require some research and understanding of the available options first). This stage should also make it possible to compare different types of platforms/tools that could address the same educational needs. (In my case, for example, providing physical hardware based “clickers” vs using mobile/web based apps)

This checklist should address general needs which I have broken down by student, teacher and organisational needs that could be applied to any educational need. It should also include scenario specific criteria.
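As a minimal sketch of how that checklist might work in practice (the criteria and tool capabilities below are invented for illustration): a tool that misses any essential criterion is ruled out, and the nice-to-haves are only counted for tools that pass.

```python
# Invented criteria for the lecture response scenario - illustration only.
essential = ["supports 350+ simultaneous users", "works via SMS/text", "works via web browser"]
nice_to_have = ["Twitter responses", "PowerPoint integration", "free-text questions"]

def check(tool_name, capabilities):
    """Apply the checklist: essentials are pass/fail, nice-to-haves are counted."""
    missing = [c for c in essential if c not in capabilities]
    if missing:
        return f"{tool_name}: not fit for purpose (missing: {', '.join(missing)})"
    extras = sum(c in capabilities for c in nice_to_have)
    return f"{tool_name}: meets all essential criteria, {extras}/{len(nice_to_have)} nice-to-haves"

print(check("Tool A", {"supports 350+ simultaneous users", "works via SMS/text",
                       "works via web browser", "PowerPoint integration"}))
print(check("Tool B", {"works via web browser", "free-text questions"}))
```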

Evaluating products

It’s hard to know exactly what the quality of the tool or the learning experiences will be. We need to make assumptions based on the information that is available, and I would recommend some initial testing wherever possible. I’m not convinced that it is possible to determine the quality of the learning outcomes from using the tool, so I have excluded these from the rubric. Some of the criteria could be applied to any educational technology and some are specifically relevant to the student response / clicker tool that I am investigating.
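In the same spirit, here is a small sketch of the rubric stage: each shortlisted product is scored 1–4 against weighted criteria (the weights, criteria and scores below are invented), and the weighted totals give a basis for comparison rather than a final answer.

```python
# Invented weights and 1-4 scores, purely to show the mechanics of the rubric.
weights = {"ease of use": 3, "reliability": 3, "cost": 2, "data export": 1}

scores = {
    "Product A": {"ease of use": 4, "reliability": 3, "cost": 2, "data export": 3},
    "Product B": {"ease of use": 3, "reliability": 4, "cost": 4, "data export": 2},
}

best_possible = sum(w * 4 for w in weights.values())

for product, rubric in scores.items():
    total = sum(weights[criterion] * rubric[criterion] for criterion in weights)
    print(f"{product}: {total}/{best_possible}")
```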


 

Lecture Response System pitch

This was slightly rushed, but it does reflect the results of the actual evaluation that I have carried out into this technology so far. (I’m still waiting on answers to some questions about one of the products.)

I have emphasised the learning needs that we identified, looked quickly at the key factors in the evaluation and then presented the main selling points of the particular tool. From there I would encourage the teacher/lecturer to speak further to me about the finer details of the tool and our implementation plan.

Any thoughts or feedback on this would be most welcome.

Edit: I’ve realised that I missed some of the questions – well kind of.
The biggest challenge will be how our network copes with 350+ people trying to connect to something at once. Having both web and phone texting options for responding was one of the appealing parts of the tool in this regard, as it will hopefully reduce this number.

“Awesome” would look like large numbers of responses to poll questions and the lecturer being able to adjust their teaching style – either re-explaining a concept or moving to a new one – based on the student responses.


 

These are the documents from these two assignments.

Lecture Response Systems Pitch
Clickers EdTech Evaluation Rubric
Educational Technology Needs Survey
Colin Education Technology Checklist 2