
Thoughts on: Project Management in Instructional Design (Allen, 2020)

Reading this thesis was valuable because it showed how stark the difference can be between two approaches to research. Where Amparo took a very qualitative, narrative-driven approach, going narrow but deep with three people and case studies, Allen goes shallow but broad with quantitative research into the most valued project management competencies for instructional designers (IDs).

Allen takes a deep dive into a comparatively rich pool of literature relating to the project management skills that best serve IDs, generating a comprehensive literature review that sums up the last decade or so of research in this space very effectively. She builds on it by conducting her own two-stage survey of 86 IDs across a range of sectors (higher education, corporate, ID project team leaders) to gather some rich quant data.

Nothing overly surprising emerges in terms of the favoured competencies, with the data largely aligning with the studies that had come before (but at a larger scale), but it did spark a few thoughts for me about my own work. I personally found some of the competencies across the literature a little nebulous – things like “attention to detail”, which are certainly valuable professional attributes but, coming from a competency-based education background, I was curious about how they might be meaningfully measured or taught. This brought me back to realising that I need to think carefully about what practices, attributes and competencies mean in the data I am gathering.

The painstaking detail in the writing about the work undertaken, from the lit review to the data collection and analysis, offered a useful benchmark for my own future writing.

It was also useful to scan the references and find a few promising leads that I’d previously missed. These include:

Kenny, J. (2004). A study of educational technology project management in Australian universities. Australasian Journal of Educational Technology, 20(3), 388–404.

Allen, M. (1996). A profile of instructional designers in Australia. Distance Education, 17(1), 7–32.

I don’t think I had thought to search for IDs in the Australian literature.

Allen, S. A. (2020). Project management in instructional design.

SOCRMx Week #7: Qualitative analysis

I’m nearly at the end of Week #8 in the Social Research Methods MOOC and while I’m still finding it informative, I’ve kind of stopped caring. The lack of community, and particularly of engagement from the teachers, has really sucked the joy out of this one for me. If the content wasn’t highly relevant, I’d have left long ago. And I’ll admit, I haven’t been posting the wonderfully detailed and thoughtful kind of posts on the forum or in the assigned work that the other five or so active participants have been doing, but I’ve been contributing in a way that supports my own learning. I suspect the issue is that this is being run as a formal unit in a degree program and I’m not one of those students. Maybe it’s that I chose not to fork over the money for a verified certificate. Either way, it’s been an unwelcoming experience overall. When I compare it to the MITx MOOC I did a couple of years ago on Implementing Education Technology, it’s chalk and cheese. Maybe it’s a question of having a critical mass of active participants, who knows. But as I say, at least the content has been exactly what I’ve needed at this juncture of my journey in learning to be a researcher.

This week the focus was on Qualitative Analysis, which is where I suspect I’ll be spending a good amount of my time in the future. One of my interesting realisations early on, though, was that I’ve already tried to ‘cross the streams’ of qual and quant analysis this year, when I had my first attempt at conducting a thematic analysis of job ads for edvisors. I was trying to identify specific practices and tie them to particular job titles in an attempt to clarify what these roles were largely seen to be doing. So there was coding, because clearly not every ad was going to say ‘research’ – some might say ‘stay abreast of current and emerging trends’ and others might ask the edvisor to ‘evaluate current platforms’. Whether or not that sat in “research” perfectly is a matter for discussion but I guess that’s a plus of the fuzzy nature of qualitative data, where data is more free to be about the vibe.

But then I somehow ended up applying numbers to the practices as they sat in the job ad more holistically, in an attempt to place them on a spectrum between pedagogical (1) and technological (10). This kind of worked, in that it gave me some richer data that I could use to plot the roles on a scattergraph, but I wouldn’t be confident that this methodology would stand up to great scrutiny yet. Now, just because I was using numbers doesn’t mean that it was quantitative, but it still feels like some kind of weird fusion of the two. I’m sure that I’ll find any number of examples of this in practice but I haven’t seen much of it so far. I guess it was mainly nice to be able to put a name to what I’d done. To be honest, as I was initially doing it, I assumed that there was probably a name for what I was doing and appropriate academic language surrounding it – I just didn’t happen to know what that was.
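
For future reference, here’s roughly what that fusion looked like as a Python sketch. Everything below is invented for illustration – the role titles, the scores, and the second axis (I’ve used a made-up count of ads) – but it captures the plotting part of the idea:

    # A rough sketch of scoring roles on a pedagogical (1) to technological (10)
    # spectrum and plotting them. All role titles and numbers are invented.
    import matplotlib.pyplot as plt

    roles = {
        "Learning Designer": (3, 12),       # (spectrum score, number of ads)
        "Educational Developer": (2, 8),
        "Learning Technologist": (8, 15),
        "Educational Designer": (4, 10),
    }

    scores = [score for score, _ in roles.values()]
    counts = [count for _, count in roles.values()]

    fig, ax = plt.subplots()
    ax.scatter(scores, counts)
    for title, (score, count) in roles.items():
        ax.annotate(title, (score, count))
    ax.set_xlabel("Pedagogical (1) to technological (10)")
    ax.set_ylabel("Number of ads (invented)")
    plt.show()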

I mentioned earlier that qualitative analysis can be somewhat ‘fuzzier’ than quantitative, and there was a significant chunk of discussion at the beginning of this week’s resources about that. Overall I got the feeling that there was a degree of defensiveness, with the main issue being that the language and ideas used in quantitative research are far more positivist in nature – epistemologically speaking (I totally just added that because I like that I know this now) – and are perhaps easier to justify and use to validate the data. You get cold, hard figures and, if you did this the right way, someone else should be able to do exactly the same thing.

An attempt to map some of those quantitative qualities to the qualitative domain was somewhat pooh-poohed because it was seen as missing the added nuance present in qualitative research – it was a little unclear really, but I guess I’ll need to learn to at least talk the talk. It partly felt like tribalism or a turf war but I’m sure that there’s more to it than that. I guess it’s grounded in a profoundly different way of seeing the world, and particularly of seeing ‘knowing’. On the one side we have a pretty straightforward set of questions dealing with objective, measurable reality, and on the other we have people digging into perspectives and perceptions of that reality and questioning whether we can ever know or say if any of them are absolutely right.

Long story short, there’s probably much more contextualisation/framing involved in the way you analyse qual data and how you share the story that you think it tells. Your own perceptions, and how they may have shaped this story, also play a far more substantial part. The processes that you undertook – including member checking, i.e. asking your subjects to evaluate your analysis of their interviews to ensure that your take reflects theirs – also play a significant role in making your work defensible.

The section on coding seemed particularly relevant so I’ll quote it directly:

Codes, in qualitative data analysis, are tags that are applied to sections of data. Coding is often done using qualitative data analysis software such as NVivo or Dedoose.

Codes can overlap, and a section of an interview transcript (for example) can be labeled with more than one code. A code is usually a keyword or words that represent the content of the section in some way: a concept, an emotion, a type of language use (like a metaphor), a theme.

Coding is always, inevitably, an interpretive process, and the researcher has to decide what is relevant, what constitutes a theme, how it connects to relevant ideas or theories, and what the implications are.

Here’s an example provided by Jen Ross: a list of codes for a project of hers about online reflective practice in higher education. These codes all relate to the idea of reflection as “discipline” – a core idea in the research:

  • academic discourse
  • developing boundaries
  • ensuring standards
  • flexibility
  • habit
  • how professionals practice
  • institutional factors
  • self assessment

Jen says: These codes, like many in qualitative projects, emerged and were refined during the process of reading the data closely. However, as the codes emerged, I also used the theoretical concepts I was working with to organise and categorise them. The overall theme of “discipline”, therefore, came from a combination of the data and the theory.

https://courses.edx.org/courses/course-v1:EdinburghX+SOCRMx+3T2017/courseware/f41baffef9c14ff488165814baeffdbb/23bec3f689e24100964f23aa3ca6ee03/?child=last
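
To make the overlapping-codes idea concrete for myself, here’s a toy sketch in Python – just my own mental model, nothing like how NVivo or Dedoose actually store things – with an invented transcript snippet and two of Jen’s codes applied to overlapping spans:

    # A toy model of qualitative coding: codes are tags applied to spans of
    # data, and spans are allowed to overlap. The transcript snippet is
    # invented and the character offsets are approximate.
    transcript = ("I try to reflect every week, mostly out of habit, "
                  "but also because the programme requires it.")

    # Each code is a (start, end, tag) triple over character offsets.
    coded_spans = [
        (0, 50, "habit"),
        (30, 94, "institutional factors"),  # overlaps the span above
    ]

    def codes_at(position, spans):
        """Return every code whose span covers the given character offset."""
        return [tag for start, end, tag in spans if start <= position < end]

    print(codes_at(40, coded_spans))  # ['habit', 'institutional factors']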

I already mentioned that I undertook thematic analysis of a range of job ads, which could be considered “across-case” coding. This is in comparison to “within-case” coding, where one undertakes narrative analysis by digging down into one particular resource or story. This involves “tagging each part of the narrative to show how it unfolds, or coding certain kinds of language use”, while thematic analysis is about coding common elements that emerge while looking at many things. In the practical exercise – I didn’t do it because time is getting away from me, but I read the blog posts of those who did – a repeated observation was that in this thematic analysis, they would often create/discover a new code halfway through and then have to go back to the start to see if and where that code appeared in the preceding resources.

On a side note, the practical activity did look quite interesting: it involved looking over a collection of hypothetical future reflections from school leavers in the UK in the late 1970s. They were asked to write a brief story from the perspective of themselves 40 years in the future, on the cusp of retirement, describing the life they had lived. Purely as a snapshot into the past, it is really worth a look for a revealing exploration of how some people saw life and success back in the day. Most of the stories are only a paragraph or two.

https://discover.ukdataservice.ac.uk/QualiBank/?f=CollectionTitle_School%20Leavers%20Study

And once again, there were a bunch of useful-looking resources for further reading about qualitative analysis:

  • Baptiste, I. (2001). Qualitative Data Analysis: Common Phases, Strategic Differences. Forum: Qualitative Social Research, 2(3). http://www.qualitative-research.net/index.php/fqs/article/view/917/2002
  • Markham, A. (2017). Reflexivity for interpretive researchers. http://annettemarkham.com/2017/02/reflexivity-for-interpretive-researchers/
  • ModU (2016). How to Know You Are Coding Correctly: Qualitative Research Methods. Duke University’s Social Science Research Unit. https://www.youtube.com/watch?v=iL7Ww5kpnIM
  • Riessman, C. K. (2008). ‘Thematic Analysis’ [Chapter 3 preview] in Narrative Methods for the Human Sciences. SAGE Publishing. https://uk.sagepub.com/en-gb/eur/narrative-methods-for-the-human-sciences/book226139#preview (Sage Research Methods Database)
  • Sandelowski, M. and Barroso, J. (2002). Reading Qualitative Studies. International Journal of Qualitative Methods, 1(1). https://journals.library.ualberta.ca/ijqm/index.php/IJQM/article/view/4615
  • Samsi, K. (2012). Critical appraisal of qualitative research. King’s College London. https://www.kcl.ac.uk/sspp/policy-institute/scwru/pubs/2012/conf/samsi26jul12.pdf
  • Taylor, C. and Gibbs, G. R. (2010). How and what to code. Online QDA Web Site. http://onlineqda.hud.ac.uk/Intro_QDA/how_what_to_code.php
  • Trochim, W. (2006). Qualitative Validity. https://www.socialresearchmethods.net/kb/qualval.php

Week #6 SOCRMx – Quantitative analysis

This section of the SOCRMx MOOC offers a fair introduction to statistics and the analysis of quantitative data. At least, enough to get a grasp on what is needed to get meaningful data and what it looks like when statistics are misused or misrepresented. (This bit in particular should be a core unit in the mandatory media and information literacy training that everyone has to take in my imaginary ideal world.)

The more I think about my research, the more likely I think it is to be primarily qualitative but I can still see the value in proper methodology for processing the quant data that will help to contextualise the rest. I took some scattered notes that I’ll leave here to refer back to down the road.

Good books to consider – Charles Wheelan, Naked Statistics: Stripping the Dread from Data (2014) and Daniel Levitin, A Field Guide to Lies and Statistics: A Neuroscientist on How to Make Sense of a Complex World (2016).

Mean / Median / Mode

Mean – the straightforward average: add up all the values and divide by how many there are.

Median – put all the results in a line and choose the one in the middle. (Better for average incomes, as high earners distort the figures.)

Mode – the value (or category) that occurs most often.
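
To pin these down for later, a quick Python sketch with invented income figures so the outlier effect is obvious:

    # The three averages via Python's statistics module. The income figures
    # are invented so the effect of one high earner is obvious.
    import statistics

    incomes = [45_000, 48_000, 48_000, 52_000, 60_000, 62_000, 400_000]

    print(statistics.mean(incomes))    # 102142.85... dragged up by the outlier
    print(statistics.median(incomes))  # 52000 – the middle value, unaffected
    print(statistics.mode(incomes))    # 48000 – the most common value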

Student’s t-test – a method for interpreting what can be extrapolated from a small sample of data. It is the primary way to understand the likely error of an estimate given your sample size.

It is the source of the concept of “statistical significance.”
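
A minimal sketch of what running one looks like in Python, assuming scipy is available and using invented course ratings against a hypothetical neutral mean of 3.0:

    # A minimal one-sample t-test sketch (assumes scipy is installed).
    # The ratings are invented; the null hypothesis is a true mean of 3.0.
    from scipy import stats

    ratings = [3.8, 4.1, 3.5, 4.4, 3.9, 4.2, 3.6, 4.0]

    t_statistic, p_value = stats.ttest_1samp(ratings, popmean=3.0)
    print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
    # A small p-value suggests the data sit awkwardly with a true mean of 3.0.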

A p-value is a probability. It is a measure summarising the incompatibility between a particular set of data and a proposed model for the data (the null hypothesis). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366529/

“a significance level is an indication of the probability of an observed result occurring by chance under the null hypothesis; so the more you repeat an experiment, the higher the probability you will see a statistically significant result.”
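
That last point is easy to demonstrate with a little simulation (again assuming scipy): draw two groups from exactly the same population over and over, and roughly 5% of the comparisons will still come out ‘significant’ at p < 0.05 purely by chance.

    # Simulating the quoted point: draw two groups from the SAME population
    # many times, and some comparisons come out "significant" by chance.
    import random
    from scipy import stats

    random.seed(42)
    n_experiments = 1000
    false_positives = 0

    for _ in range(n_experiments):
        group_a = [random.gauss(0, 1) for _ in range(30)]
        group_b = [random.gauss(0, 1) for _ in range(30)]  # no real difference
        _, p = stats.ttest_ind(group_a, group_b)
        if p < 0.05:
            false_positives += 1

    # Expect roughly 5% of runs to cross p < 0.05 by chance alone.
    print(f"{false_positives} of {n_experiments} runs were 'significant'")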

Overall, this entire domain is one where I think I’m only really going to appreciate the core concepts when I have a specific need for them. The idea of a distribution curve, where the mean of all data points sits at the high point and standard deviations (determined by a formula) show where the majority of the other data points fall, seems potentially useful but, again, until I can practically apply it to a problem, it remains just tantalisingly beyond my grasp.
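
So, as a small experiment to come back to when I do have that need, here’s a sketch that generates normally distributed data and checks the familiar rule of thumb that about 68% of points fall within one standard deviation of the mean and about 95% within two (the mean of 100 and SD of 15 are arbitrary choices):

    # Generate normally distributed data and check the 68/95 rule of thumb.
    import random
    import statistics

    random.seed(1)
    data = [random.gauss(100, 15) for _ in range(10_000)]

    mu = statistics.mean(data)
    sigma = statistics.stdev(data)

    within_1 = sum(1 for x in data if abs(x - mu) <= sigma) / len(data)
    within_2 = sum(1 for x in data if abs(x - mu) <= 2 * sigma) / len(data)

    print(f"mean = {mu:.1f}, sd = {sigma:.1f}")
    print(f"within 1 sd: {within_1:.1%}, within 2 sd: {within_2:.1%}")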