Category Archives: Analysis

Research update #61: The massive data dump

I have a reason for not having posted for a little while, for a change. I've been swimming in the data from my first survey, learning about statistics and stats tools and generally working to get my head around what the survey tells me about the edvisor landscape, how our work is perceived and valued, and what activities/knowledge might be connected more to one of the three roles than the others.

I'm in the process of putting together a document capturing the many facets of this data – some points stand out more than others – and it is currently around 192 pages. This includes a LOT of bar charts, and I'm still pulling everything together so that I can go through and try my hand at distilling it all down to a few pithy pages for the thesis.

Meanwhile, I had a slot in the most recent TELedvisors webinar, alongside some people that I'd like to consider peers, who are also conducting research in this space – Evonne Irwin (Uni of Newcastle), Natalia Veles (James Cook University) and Karin Barac (Griffith Uni). (To be honest though, their presentations all meshed theory and practice together so cleverly that 'peer' feels slightly aspirational.)

I think that because I have spent so much time in recent months with this data, and so little time discussing it with anyone, I got a bit caught up and turned my 10 mins into something of a data dump. The information was there but not so much the discussion about why it mattered. Something to consider more next time.

Anyway, for what it’s worth, here’s what I had to say.

SOCRMx Week #8: The End

Well I probably said all that I needed to say on my general feelings about this MOOC in my last post, so this is largely for the sake of completion. The final week of this course is a peer-assessed piece of writing analysing the methods used in a sample paper. It turns out that I missed the deadline to write that – I may even have been working on my Week 7 post when that deadline fell – so this appears to be the end of the road for me. I could still go through and do the work, but I found that the supplied paper was unrelated to my research and used methodologies that I have little interest in. The overall questions raised and the things to be mindful of in the assessment instructions are enough:

  • What method of analysis was used?
  • How was the chosen method of analysis appropriate to the data?
  • What other kinds of analysis might have been used?
  • How was the analysis designed? Is the design clearly described? What were its strengths and weaknesses?
  • What kind of issues or problems might one identify with the analysis?
  • What are the key findings and conclusions, and how are they justified through the chosen analysis techniques?

And so with that, I guess I'm done with SOCRMx. In spite of my disengagement with the community, the resources and the structure really have been of a high standard and, more importantly, incredibly timely for me. As someone returning to study after some time away, without ever really having had a formal research focus, I've found there is a lot of assumed knowledge about research methodology, so having this opportunity to get a bird's-eye view of the various options was ideal. I know I still have a long way to go but this has been a nice push in the right direction.

 

SOCRMx Week #7: Qualitative analysis

I'm nearly at the end of Week #8 in the Social Research Methods MOOC and while I'm still finding it informative, I've kind of stopped caring. The lack of community and particularly of engagement from the teachers has really sucked the joy out of this one for me. If the content wasn't highly relevant, I'd have left long ago. And I'll admit, I haven't been posting the wonderfully detailed and thoughtful kind of posts on the forum or in the assigned work that the other five or so active participants have been doing, but I've been contributing in a way that supports my own learning. I suspect the issue is that this is being run as a formal unit in a degree program and I'm not one of those students. Maybe it's that I chose not to fork over the money for a verified certificate. Either way, it's been an unwelcoming experience overall. When I compare it to the MITx MOOC I did a couple of years ago on Implementing Education Technology, it's chalk and cheese. Maybe it's a question of having a critical mass of active participants, who knows. But as I say, at least the content has been exactly what I've needed at this juncture of my journey in learning to be a researcher.

This week the focus was on Qualitative Analysis, which is where I suspect I'll be spending a good amount of my time in the future. One of my interesting realisations early on in this, though, was that I've already tried to 'cross the streams' of qual and quant analysis this year when I had my first attempt at conducting a thematic analysis of job ads for edvisors. I was trying to identify specific practices and tie them to particular job titles in an attempt to clarify what these roles were largely seen to be doing. So there was coding, because clearly not every ad was going to say 'research'; some might say 'stay abreast of current and emerging trends' and others might ask the edvisor to 'evaluate current platforms'. Whether or not that sat in "research" perfectly is a matter for discussion but I guess that's a plus of the fuzzy nature of qualitative data, where data is more free to be about the vibe.

But then I somehow ended up applying numbers to the practices as they sat in the job ad more holistically, in an attempt to place them on a spectrum between pedagogical (1) and technological (10). That kind of worked, in that it gave me some richer data that I could use to plot the roles on a scattergraph, but I wouldn't be confident that this methodology would stand up to great scrutiny yet. Now, just because I was using numbers doesn't mean that it was quantitative, but it still feels like some kind of weird fusion of the two. I'm sure that I'll find plenty of examples of this in practice, but I haven't seen much of it so far. I guess it was mainly nice to be able to put a name to what I'd done. To be honest, as I was initially doing it, I assumed that there was probably a name for what I was doing and appropriate academic language surrounding it; I just didn't happen to know what that was.
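Just to make that concrete for myself, here's a rough sketch of the kind of scatterplot I mean. The job titles, scores and the second axis are all invented for the sake of the example (not my actual coded data), and it assumes matplotlib is available:

```python
# Hypothetical scoring of job ads on a 1-10 scale from pedagogical (1)
# to technological (10), plotted against a made-up second measure.
import matplotlib.pyplot as plt

ads = [
    ("Learning Designer", 3, 12),
    ("Academic Developer", 2, 8),
    ("Learning Technologist", 7, 10),
    ("Educational Technologist", 8, 9),
]

for title, ped_tech, n_practices in ads:
    plt.scatter(ped_tech, n_practices)
    plt.annotate(title, (ped_tech, n_practices))

plt.xlabel("Pedagogical (1) to technological (10)")
plt.ylabel("Number of coded practices in the ad (invented)")
plt.title("Edvisor roles on a pedagogy-technology spectrum (illustrative)")
plt.xlim(0, 11)
plt.show()
```

Even in toy form, I think it shows why the numbers felt useful: the score lets each role sit somewhere along a continuum rather than in a tidy category box.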

I mentioned earlier that qualitative analysis can be somewhat ‘fuzzier’ than quantitative and there was a significant chunk of discussion at the beginning of this week’s resources about that. Overall I got the feeling that there was a degree of defensiveness, with the main issue being that the language and ideas used in quantitative research are far more positivist in nature – epistemologically speaking (I totally just added that because I like that I know this now) – and are perhaps easier to justify and use to validate the data. You get cold hard figures and if you did this the right way, someone else should be able to do exactly the same thing.

An attempt to map some of those quantitative qualities to the qualitative domain was somewhat pooh-poohed because it was seen as missing the added nuance present in qualitative research, or something – it was a little unclear really, but I guess I'll need to learn to at least talk the talk. It partly felt like tribalism or a turf war but I'm sure that there's more to it than that. I guess it's grounded in a fairly profoundly different way of seeing the world and particularly of seeing 'knowing'. On the one side we have a pretty straightforward set of questions dealing with objective, measurable reality and on the other we have people digging into perspectives and perceptions of that reality and questioning whether we can ever know or say if any of them are absolutely right.

Long story short, there's probably much more contextualisation/framing involved in the way you analyse qual data and how you share the story that you think it tells. Your own perceptions, and how they may have shaped this story, also play a far more substantial part. The processes that you undertook – including member checking, that is, asking your subjects to evaluate your analysis of their interviews etc. to ensure that your take reflects theirs – also play a significant role in making your work defensible.

The section on coding seemed particularly relevant so I'll quote that directly:

Codes, in qualitative data analysis, are tags that are applied to sections of data. Often done using qualitative data analysis software such as Nvivo or Dedoose.

Codes can overlap, and a section of an interview transcript (for example) can be labeled with more than one code. A code is usually a keyword or words that represent the content of the section in some way: a concept, an emotion, a type of language use (like a metaphor), a theme.

Coding is always, inevitably, an interpretive process, and the researcher has to decide what is relevant, what constitutes a theme and how it connects to relevant ideas or theories, and discuss their implications.

Here’s an example provided by Jen Ross, of a list of codes for a project of hers about online reflective practice in higher education. These codes all relate to the idea of reflection as “discipline” – a core idea in the research:

  • academic discourse
  • developing boundaries
  • ensuring standards
  • flexibility
  • habit
  • how professionals practice
  • institutional factors
  • self assessment

Jen says: "These codes, like many in qualitative projects, emerged and were refined during the process of reading the data closely. However, as the codes emerged, I also used the theoretical concepts I was working with to organise and categorise them. The overall theme of 'discipline', therefore, came from a combination of the data and the theory."

https://courses.edx.org/courses/course-v1:EdinburghX+SOCRMx+3T2017/courseware/f41baffef9c14ff488165814baeffdbb/23bec3f689e24100964f23aa3ca6ee03/?child=last

I already mentioned that I undertook a thematic analysis of a range of job ads, which could be considered "across-case" coding. This is in comparison to "within-case" coding, where one undertakes narrative analysis by digging down into one particular resource or story. This involves "tagging each part of the narrative to show how it unfolds, or coding certain kinds of language use", while thematic analysis is about coding common elements that emerge while looking at many things. In the practical exercise – I didn't do it because time is getting away from me, but I read the blog posts of those who did – a repeated observation was that in this thematic analysis, they would often create/discover a new code halfway through and then have to go back to the start to see if and where it appeared in the preceding resources.
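To get the mechanics straight in my own head, here's a toy sketch of overlapping codes as tags on transcript segments, including that 'go back to the start' pass when a new code emerges. The transcript lines are invented, the code names are borrowed from Jen's list above, and this has nothing to do with how NVivo actually stores things:

```python
# Toy representation of qualitative coding: overlapping codes as tags
# attached to segments of an (invented) transcript.
from dataclasses import dataclass, field

@dataclass
class Segment:
    text: str
    codes: set = field(default_factory=set)  # overlapping codes allowed

transcript = [
    Segment("I keep a reflective journal every week, mostly out of habit."),
    Segment("The faculty expects us to evidence our practice against standards."),
    Segment("Honestly, I adapt the template to whatever the situation needs."),
]

# First pass with the initial code set.
for seg in transcript:
    if "habit" in seg.text:
        seg.codes.add("habit")
    if "standards" in seg.text:
        seg.codes.add("ensuring standards")

# A new code ("flexibility") emerges partway through the reading, so go
# back over everything already coded and apply it retrospectively.
for seg in transcript:
    if "adapt" in seg.text or "flexib" in seg.text:
        seg.codes.add("flexibility")

for seg in transcript:
    print(seg.codes or "(uncoded)", "-", seg.text)
```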

On a side note, the practical activity did look quite interesting: it involved looking over a collection of hypothetical future reflections from school leavers in the UK in the late 1970s. They were asked to write a brief story from the perspective of themselves 40 years in the future, on the cusp of retirement, describing the life they had lived. Purely as a snapshot into the past, it is really worth a look for a revealing exploration of how some people saw life and success back in the day. Most of the stories are only a paragraph or two.

https://discover.ukdataservice.ac.uk/QualiBank/?f=CollectionTitle_School%20Leavers%20Study

And once again, there were a bunch of useful-looking resources for further reading about qualitative analysis:

  • Baptiste, I. (2001). Qualitative Data Analysis: Common Phases, Strategic Differences. Forum: Qualitative Social Research, 2/3. http://www.qualitative-research.net/index.php/fqs/article/view/917/2002
  • Markham, A. (2017). Reflexivity for Interpretive Researchers. http://annettemarkham.com/2017/02/reflexivity-for-interpretive-researchers/
  • ModU (2016). How to Know You Are Coding Correctly: Qualitative Research Methods. Duke University's Social Science Research Unit. https://www.youtube.com/watch?v=iL7Ww5kpnIM
  • Riessman, C. K. (2008). 'Thematic Analysis' [Chapter 3 preview] in Narrative Methods for the Human Sciences. SAGE Publishing. https://uk.sagepub.com/en-gb/eur/narrative-methods-for-the-human-sciences/book226139#preview (Sage Research Methods Database)
  • Sandelowski, M. and Barroso, J. (2002). Reading Qualitative Studies. International Journal of Qualitative Methods, 1/1. https://journals.library.ualberta.ca/ijqm/index.php/IJQM/article/view/4615
  • Samsi, K. (2012). Critical Appraisal of Qualitative Research. King's College London. https://www.kcl.ac.uk/sspp/policy-institute/scwru/pubs/2012/conf/samsi26jul12.pdf
  • Taylor, C. and Gibbs, G. R. (2010). How and What to Code. Online QDA Web Site. http://onlineqda.hud.ac.uk/Intro_QDA/how_what_to_code.php
  • Trochim, W. (2006). Qualitative Validity. https://www.socialresearchmethods.net/kb/qualval.php

Week #6 SOCRMx – Quantitative analysis

This section of the SOCRMx MOOC offers a fair introduction to statistics and the analysis of quantitative data. At least, enough to get a grasp on what is needed to get meaningful data and what it looks like when statistics are misused or misrepresented. (This bit in particular should be a core unit in the mandatory media and information literacy training that everyone has to take in my imaginary ideal world.)

The more I think about my research, the more likely I think it is to be primarily qualitative but I can still see the value in proper methodology for processing the quant data that will help to contextualise the rest. I took some scattered notes that I’ll leave here to refer back to down the road.

Good books to consider – Charles Wheelan, Naked Statistics: Stripping the Dread from the Data (2014), and Daniel Levitin, A Field Guide to Lies and Statistics: A Neuroscientist on How to Make Sense of a Complex World (2016).

Mean / Median / Mode

Mean – the straightforward average: add up all the values and divide by how many there are.

Median – put all the results in a line and choose the one in the middle. (Better for things like average incomes, as a few high earners distort the mean.)

Mode – the value (or section/bin) that occurs most often.
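A quick sketch of those three in practice, using invented income figures and Python's built-in statistics module, mostly to remind myself how the outlier drags the mean around:

```python
# Mean vs median vs mode on a small, made-up set of annual incomes.
import statistics

incomes = [42_000, 45_000, 45_000, 48_000, 52_000, 55_000, 250_000]

print("mean:  ", statistics.mean(incomes))    # ~76,714: distorted by the one high earner
print("median:", statistics.median(incomes))  # 48,000: the middle value, barely moved
print("mode:  ", statistics.mode(incomes))    # 45,000: the most common value
```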

Student's t-test – a method for interpreting what can be extrapolated from a small sample of data. It is the primary way to understand the likely error of an estimate, given your sample size.

It is the source of the concept of “statistical significance.”

A P-value is a probability. It is a measure summarizing the incompatibility between a particular set of data and a proposed model for the data (the null hypothesis). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5366529/

“a significance level is an indication of the probability of an observed result occurring by chance under the null hypothesis; so the more you repeat an experiment, the higher the probability you will see a statistically significant result.”
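To make that less abstract for future me, here's a minimal sketch of a one-sample t-test. The ratings are invented and it assumes scipy is installed; this isn't anything from the course materials:

```python
# One-sample t-test: do these (made-up) survey ratings differ from a
# neutral midpoint of 3 on a 1-5 scale?
from scipy import stats

ratings = [4, 3, 5, 4, 4, 2, 5, 4, 3, 4]

t_stat, p_value = stats.ttest_1samp(ratings, popmean=3)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value (conventionally < 0.05) means the data would be
# surprising if the true mean rating really were 3 (the null hypothesis).
```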

Overall this entire domain is one where I think I'm only really going to appreciate the core concepts when I have a specific need for them. The idea of a distribution curve, where the mean of all data points represents the high point and standard deviations (determined by a formula) show us where the majority of the other data points sit, seems potentially useful but, again, until I can practically apply it to a problem, it remains just tantalisingly beyond my grasp.
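And since the distribution curve idea is the one just out of reach, here's a small simulation of it (made-up, normally distributed data, assuming numpy is available), showing how much of the data sits within one and two standard deviations of the mean:

```python
# Simulate normally distributed data and check how much of it falls
# within 1 and 2 standard deviations of the mean (roughly 68% and 95%).
import numpy as np

rng = np.random.default_rng(seed=42)
data = rng.normal(loc=50, scale=10, size=100_000)  # mean 50, sd 10

mean = data.mean()
sd = data.std()

within_1sd = np.mean(np.abs(data - mean) <= sd)
within_2sd = np.mean(np.abs(data - mean) <= 2 * sd)

print(f"within 1 sd: {within_1sd:.1%}")  # ~68.3%
print(f"within 2 sd: {within_2sd:.1%}")  # ~95.4%
```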

Week #5: SOCRMx – moving into analysis

Maybe I simply don’t have enough experience in this area but I have to say that I’m struggling at the moment. I’m still pushing through the MOOC – alongside probably 3 or 4 other people still responding to the activities and posting in the discussion forum – but the lecturers seem to have gone MIA. There is no feedback from them on anything and I think that the rest of the people participating are mainly here because it’s a formal course-credit unit that they are undertaking.

(This is why their posts are so much better written and more deeply considered than mine but that’s ok)

There was a nice discussion of how data gets filtered early on though that I’ll quote:

Hardy and Bryman (2004) argue that some key dimensions of analysis apply across qualitative/quantitative approaches (pp.4-12) – including a focus on answering research questions and relating analysis to the literature; and a commitment to avoiding deliberate distortion, and being transparent about how findings were arrived at. They also discuss data reduction as a core element of analysis:

“to analyze or to provide an analysis will always involve a notion of reducing the amount of data we have collected so that capsule statements about the data can be provided.” (p.4)

So we’re starting to tap into the analysis side of things and have been asked to re-read the papers examined last week with an eye for how they approached analysis. The first is qual and the second is quant.

For what it’s worth, these are my responses.

Questions for discussion:

Why do you think Paddock chose narratives as a way of conveying the main themes in her research?

The research is about lived experiences – “a case study research strategy suits the imperative to explore the dynamic relationships between these sites”

What is the impact for you of the way the interview talk is presented? What is the point of the researcher noting points of laughter, for example? What about filler sounds like ‘erm’?

Helps to convey the voice of the subject and humanise them.

How does Paddock go about building a case for the interpretations she is making? How does she compel you, as a reader, to take her findings seriously? Share a specific example of how you think this is done in this article.

Ties it to theoretical concepts. For example, one of the quoted excerpts – "They're very uncritical about that sort of things I'm criticising in terms of the consumerist culture, cheap food, not worrying about where the stuff comes from how far it's come or how it's produced" – is linked directly to Bourdieu's cultural capital.

Interviewees use many emotive words in the excerpts presented here, but Paddock has focused in on the use of the word ‘disgusting’, and developed this through her analysis. How does this concept help her link the data with her theoretical perspective?

Used to differentiate class values

Paddock’s main argument is that food is an expression of social class. Looking just at the interview excerpts presented here, what other ideas or research questions do you think a researcher could explore?

Education, privilege, consumer culture

 

Overall I struggled with this paper because the author didn’t explicitly describe her analysis process in the paper. She just seemed to dive in to discussing the findings and how the quotes tied in to the theory.

Paper 2: Kan, M.-Y. and Laurie, H. (2016). Who Is Doing the Housework in Multicultural Britain? Sociology. Available at: https://doi.org/10.1177/0038038516674674

 

The researchers here conducted secondary analysis of an existing dataset (the UK Household Longitudinal Study, https://www.understandingsociety.ac.uk). What are some advantages and disadvantages of secondary analysis for exploring this topic? (hint: there are some noted at various points in the paper)

Advantages – practicality, and the ability to address issues not previously covered by the original researchers.

Disadvantages – the data wasn't collected to respond specifically to the current research questions.

How does the concept of intersectionality allow the researchers to build on previous research in this area?

Offers a new lens to examine relationships in the data

 

Choose a term you aren't familiar with from the Analysis Approach section of the article on page 8 and do some reading online to find out more about what it means (for example: cross-sectional analysis; multivariate OLS regressions; interaction effects). Can you learn enough about this to explain it in the discussion forum? (if you are already very familiar with statistical analysis, take an opportunity to comment on some other participants' definitions)

A cross-sectional analysis examines a broad selection of subjects at a single point in time, while a longitudinal study follows the same subjects over a significantly longer period.
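Since 'interaction effects' was another of the unfamiliar terms on offer, here's a toy sketch of what a multivariate OLS regression with an interaction term looks like. The data is entirely simulated (nothing to do with the actual UKHLS dataset) and it assumes numpy, pandas and statsmodels are installed:

```python
# Simulated example of an OLS regression with an interaction effect:
# the effect of employment on housework hours is allowed to differ
# between women and men.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "employed": rng.integers(0, 2, n),
})
# Invented relationship: women do more housework, employment reduces it,
# and the reduction is larger for women (that extra term is the interaction).
df["housework_hours"] = (
    8
    + 6 * df["female"]
    - 2 * df["employed"]
    - 3 * df["female"] * df["employed"]
    + rng.normal(0, 2, n)
)

# "female * employed" expands to female + employed + female:employed,
# so the model estimates the interaction alongside the main effects.
model = smf.ols("housework_hours ~ female * employed", data=df).fit()
print(model.summary())
```

If the female:employed coefficient comes out reliably non-zero, the effect of one variable depends on the level of the other, which is all an interaction effect really means.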

How do Kan and Laurie go about building a case for the interpretations they are making? How do they compel you, as a reader, to take their findings seriously? Share a specific example of how you think this is done in this article.

I was concerned that correlation was tied too much to causation. In explaining some of the possible reasons for differences by ethnicity, broad claims were made about the nature of entire cultures that – while perhaps reflective of the quant data – seemed to have no other supporting evidence beyond assertion.

Week #3 SOCRMx – Discourse Analysis

When I first stumbled across Foucault in some paper since cast to the depths of my mind, my immediate response was that it was wanky and unhelpful theoretical tosh. I'll admit that I struggled to get my head around it, but my broad takeaway was that it sat too far in the whole post-modern 'create your own reality' school that has since brought us 'fake news' and Donald Trump.

Imagine my surprise then as I worked through the resources relating to Discourse Analysis – and particularly five different theoretical approaches to doing it – only to find that Foucauldian Discourse Analysis might in fact be the closest thing to what I need in exploring the language used around edvisors, to see if and how it shapes their status and identity in tertiary education institutions. The other option is Critical Discourse Analysis, which kind of works in the same way but seems slightly angrier about it. Maybe not angrier, but you seem to need to start from the position that there is an existing problem (which there probably is) and then dig into what you're going to do about it. Both are on the table for now anyway.

The great news is that from what I knew of this a week ago – that it existed and a couple of people had mentioned that it sounded like what I wanted to do – I now think that I can see why and how it might be valuable. Not that I know how to do it yet, but that will come with time.

So once again the EdinburghX SOCRMx MOOC is coming through for me. I had hoped to have explored 2-3 additional topics by now but came down horribly sick late last week and am barely just functional again now.

For what it’s worth, here are my other scratch notes on Discourse Analysis taken from the course so far:

A qualitative approach to the study of language in use – spoken or written.

Covers diverse sources from interviews/focus groups to secondary material such as archival material, policy documents, social media and so on.

Various ways of doing it from the micro (sentence by sentence) to the macro (overall impact of how language is used) depending on the theoretical framework chosen.

References: Discourse – David Howarth and Analysing Discourse – Norman Fairclough (more practical)

Common criticisms of DA – it’s idealist (the world is just a product of our minds) and relativist (anything goes). Also that Discourse Analysts confuse changing the way that we talk about a thing with actually changing the thing itself. Maybe, maybe not.

“Critical discourse analysis is actually really interested in the ways in which systems of representation have actual material effects and asymmetrical effects on the distribution of burdens and benefits on particular social groups, access to resources and so on and so forth” (MOOC video introduction)

There are many different types of discourse analysis, including conversation analysis, which analyses talk in detail (see Charles Antaki's excellent web site for a good introduction to conversation analysis), and critical discourse analysis, which pays particular attention to how relations of power and domination are enacted through discourse.

An important aspect of discourse analysis, for our purposes, is that it treats language as action. As Gee puts it, language “allows us to do things and be things… saying things in language never goes without also doing things and being things” (Gee, 2011, p.1). It also places importance on context: “to understand anything fully you need to know who is saying it and what the person saying it is trying to do” (ibid, p.2).

Not Conversation Analysis for my work.

Critical Discourse Analysis – about power relationships and social issues. Almost seems too loaded? Documents that seek to present particular political positions

Foucauldian Discourse Analysis might be relevant – how language shapes identity

There was also an assignment for us to try it out with. One of my major interests is job advertisements, which is perhaps not the best place to start given how formalised the structures of these things are but I did it all the same. Outlaw Country!

This is the sample text:

This is a new open-ended, part-time (0.5 FTE) post in the E-Learning Development Team, which has been created to support the development of the University's online distance learning provision. The role holder will provide application management support to academic programme teams for the delivery of fully online courses. In the performance of these duties the role holder will coordinate the registration of courses, students and staff on the University's Canvas learning management system (LMS).

The post will provide first-line user support to staff and second-line support to students, responding to queries on the Canvas LMS. The post requires a combination of good technological skills, awareness of course and user administration processes and expertise in delivering training and support services. Creative approaches to problem solving and the ability to learn and apply new skills quickly will be necessary, as well as good organisational skills, excellent interpersonal skills and above all, a strong commitment to customer service.

The role forms part of a small team working to the highest standards and best practices for online learning. You will be expected to work on your own initiative, leading staff training and user support services, as well as working effectively within a team.

These are my responses.

1. Significance: The nature of the text is highly specific and directive. The requirements expected of the reader are made explicit with the use of terms like "The post requires", "will be necessary" and "you will be expected". As a job advertisement this is fairly standard language. The use of "and above all" gives extra weighting to the need for a "strong commitment to customer service".

2. Practices: This text is being used to describe a recruitment process.

3. Identities: This text describes in detail the characteristics that the (suitable) reader should possess and explicitly states their relationships with other people and groups described. This positions the writer very much as the person holding the power.

4. Relationships: The text defines the relationship between the reader (if successful) and stakeholders in the university, and also the relationship between the reader and writer (employee/employer).

5. Politics: The nature of a job advertisement is to describe 'how things should be'. It broadly pushes a line that the institution cares about quality teaching and learning, and also about quality customer support.

6. Connections: Everything is relevant to everything else in this piece of text because it has a singular focus on the specific goal of recruiting the right person.

7. Sign systems and knowledge: Some of the language used assumes that the reader possesses a certain type of knowledge relating to technology enhanced learning. It is heavily factual and not supportive of different interpretations of what is written.

 

I don’t know if I’m ‘doing it right’ particularly but it did make me think a little more about the nature of the power relationships expressed in job ads and the claims that they make to reflect an absolute truth in reality. So that seems like a thing.

I haven't taken a look at the discussion posts for the other topics, but the fact that there are only three other posts about Discourse Analysis in this MOOC after three weeks makes me wonder whether it's simply a topic that people aren't engaging with, or whether people aren't really engaging with the MOOC overall. Hopefully it's the former, because I'm getting a lot out of this.