6th February 2013

New words: 8250
Total words: 47465

I’ve now completed the first phase of the phenomenographic analysis, the goal of which was to uncover the conceptions of Wythenshawe present in the interview data. Uncovering these conceptions is the first of three goals of the intensive phase of the research: firstly, to discover the conceptions of Wythenshawe present in the interviews; secondly, to discover the conceptions of aspirations present in the interviews; and finally, to explore the relationship between the two.

My analysis produced four conceptions of Wythenshawe which together provide a typology of the ways in which the young people I interviewed understood and spoke about their area. As is the norm in phenomenographic analysis, I have developed these conceptions by outlining both their referential and structural aspects – how the young people talk about Wythenshawe, and what they refer to when they talk about the area:

The dysfunctional conception of Wythenshawe
Referential aspects: critical affective orientations towards Wythenshawe
Structural aspects: local facilities, policing, crime, unemployment

The territorial conception of Wythenshawe
Referential aspects: defensive and familiar orientations towards Wythenshawe
Structural aspects: people

The provisional conception of Wythenshawe
Referential aspects: comparative perspective on Wythenshawe, a desire to move away
Structural aspects: limited local facilities and jobs

The material conception of Wythenshawe
Referential aspects: positive affective orientations towards Wythenshawe
Structural aspects: green spaces, facilities

As with any typology, individual cases never map perfectly onto any given type; such is the nature of any reductive process. In my thesis, as well as outlining each conception of Wythenshawe in more detail, I’ve also explored the ‘individual conceptions’ of two young people whose talk about Wythenshawe didn’t map easily onto any of the four conceptions in my typology. I’ll also put together some mini case studies of individual young people, to show both how they align with a particular conception in the typology and how they constitute their own individual conception, which may draw on other conceptions. I can do this without undermining the value of my typology, which provides a useful descriptive and analytical tool for concluding how young people from Wythenshawe conceive of their area, and how these conceptions shape their aspirations. By outlining individual case studies alongside my typology, I avoid ‘losing the individuals’ in my phenomenographic analysis.

My next task will be to draw out the conceptions of aspirations present in the interviews. So far, two prominent distinguishing themes seem to be money vs. enjoyment, and structure vs. agency.

To recap how these current activities fit into my overall project, here’s an overview of my research design:

Intensive phase
Data: 17 semi-structured interviews with young people in Wythenshawe
Analysis: Qualitative, phenomenographic
Aim: Explore the relationship between ‘place’ and aspirations. In other words, the relationship between the ways in which young people understand and talk about their local area and the way they understand and talk about their occupational aspirations

Extensive phase
Data: British Household Panel Survey Youth Questionnaire
Analysis: Quantitative, multilevel
Aim: Explore the relationship between ‘space’ and aspirations. In other words, the relationship between area-level deprivation and geodemographic typologies and the content of young people’s occupational aspirations

4th December 2012

I’ve now produced individual treemaps for each interview, showing the most frequently-used keywords and keyword groups for individual participants when talking about Wythenshawe. These treemaps provide a snapshot summary of how ‘Wythenshawe’ is understood and spoken about in each interview. Meanwhile, the treemap I produced last week (see 30th November) provides a summary of how Wythenshawe is understood and spoken about on aggregate, in the entire dataset.

To help me construct conceptions of Wythenshawe from the data – the final stage in the analysis – I’ve produced an interview/keyword matrix which presents the data in a ‘meso’ form, somewhere between the treemaps which summarise individual interviews and the treemap which summarises the entire dataset:

The matrix takes as its columns the major keyword groups – those with the highest frequencies in the data – and summarises each participant’s conception of Wythenshawe in a row, by stating the most-used keyword within each keyword group, for each interview. To help identify initial patterns, cells are colour-coded: red cells contain critical views of Wythenshawe, green cells contain positive views and orange cells are neutral.
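To make the construction of the matrix concrete, here’s a minimal sketch of how it could be assembled programmatically. The data and names here are invented for illustration (my actual data lives in Transana, not in Python); the sketch just assumes clips are available as (interview, keyword group, keyword) tuples:

```python
from collections import Counter, defaultdict

def build_matrix(clips, groups):
    """Build an interview x keyword-group matrix.

    Each cell holds the most frequently used keyword within that keyword
    group for that interview, or None if the group never occurs.
    clips: iterable of (interview, keyword_group, keyword) tuples.
    """
    counts = defaultdict(Counter)  # (interview, group) -> keyword counts
    interviews = set()
    for interview, group, keyword in clips:
        interviews.add(interview)
        counts[(interview, group)][keyword] += 1

    matrix = {}
    for interview in sorted(interviews):
        row = {}
        for group in groups:
            c = counts[(interview, group)]
            row[group] = c.most_common(1)[0][0] if c else None
        matrix[interview] = row
    return matrix

# Invented example: two interviews, two keyword groups
clips = [
    ("P01", "people", "family"), ("P01", "people", "family"),
    ("P01", "people", "friends"), ("P01", "affective", "positive"),
    ("P02", "people", "gangs"), ("P02", "affective", "critical"),
]
matrix = build_matrix(clips, ["people", "affective"])
# matrix["P01"]["people"] -> "family"
```

The colour-coding step would then just be a lookup from each cell’s keyword to its critical/positive/neutral classification.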

The matrix has allowed me to spot a couple of initial associations between keywords:

  • Interviews which express positive affective orientations towards Wythenshawe are those which refer most frequently to ‘family’ when talking about ‘people’
  • Interviews which express sentiments of ‘attachment’ when talking about moving away from Wythenshawe are those which have a ‘positive’ code as their most frequent keyword

30th November 2012

Having made all the adjustments to my coding scheme, I’ve now gone back and verified the coding of each of the 193 clips in my dataset. This proved a really worthwhile final check, as around a third of my clips needed some recoding – partly because of the changes to the coding scheme, but mainly because I now know the data better and understand the logic of my coding scheme as a system. I then made one final pass of the collection, on which I changed only a handful of codings – a good indication that I’m happy with the way my coding scheme is working and with the way individual clips are coded in line with it.

A treemap of the final coding scheme, and the frequency of codes in the data, is here.

I’ve also produced treemaps of the frequencies of codes for each interview. These are a nice way of getting an overall, at-a-glance impression of each interview, and may help me as I start to look for my 2-6 conceptions of Wythenshawe in the data. As I mentioned before, this process of identifying conceptions will also be guided by an analysis of how different codes overlap, using the search tools in Transana.

26th November 2012

I’ve applied the changes to my coding scheme outlined in my previous posts, leaving me with 50 keywords (down from 71 after the first pass) across 13 keyword groups (down from 14 after the first pass). I’ve visualised the data, this time in a treemap which groups keywords into keyword groups. Click here for the detail.

The advantage of the treemap is that it doesn’t just show which keywords/codes appear most frequently in the data. It also shows the relative frequency of different keyword groups and which keywords are most common within those groups. This is handy for comparing the frequency of keywords that are theoretically related but have quite different frequencies in the data. For instance, the disparity between the prevalence of the comparative orientation ‘other places better than Wythenshawe’ and ‘Wythenshawe better than other places’, and between the desire to move away from the area and a sense of attachment, are more clearly represented.

To see the ranked frequencies of different codes irrespective of their keyword groups, you can just reorder Code and Code group in the Treemap Hierarchy control at the top of the visualisation. My next task, now that I’ve overhauled the coding scheme quite considerably, is to check the individual codings of each of the 194 clips. Once that’s complete, I can start to construct my 2-6 conceptualisations of Wythenshawe which will involve considering not only the frequencies of different codes/code groups but also the extent to which different codes overlap in clips. This is something I should be able to do using the ‘Search’ and ‘Save as collection’ tools in Transana.

22nd November 2012

Today I decided on the following additional modifications to the coding scheme:

  • ‘would like to find work in wythenshawe’, which had a frequency of 2, will be merged with the code ‘shaped explicitly’ to form the new code ‘shaped by area’. The code ‘aspirations would be different elsewhere’ will also be merged with the new code ‘shaped by area’.
  • Codes in the keyword group ‘local jobs’ will be coded into major SOC2000 groups as follows:
    • ‘shops/retail’, ‘airport’ and ‘Civic’ will not be recoded, partly because they’re the three most frequently occurring codes in the ‘local jobs’ keyword group and partly because there’s insufficient information to attach them to a particular SOC2000 group
    • ‘bin men’, ‘cleaners’, ‘factories’ and ‘security’ → ‘elementary’
    • ‘call centres’ → ‘sales/customer service’
    • ‘garages’ and ‘building/construction’ → ‘skilled trades’
    • ‘nurses/doctors’ and ‘schools/teaching’ → ‘professional’
    • ‘office work’ → ‘administrative and secretarial’
    • ‘old people’s home’ → ‘personal services’
    • ‘police’ → ‘associate professional and technical’
    • ‘Hospital’ and ‘self employed’ will not be recoded because there’s insufficient information to attach them to a particular SOC2000 group
  • ‘good to get away’ will be merged with the new keyword ‘move’
  • ‘wyth YP bad at school’ will be merged with ‘young people problematic’. The keyword ‘good schools’ and the keyword group ‘schools’ will be deleted.
  • The keyword ‘buoyant’ will be renamed ‘good local job opportunities’
  • The keyword ‘money’s tight’ will be renamed ‘material hardship’ and will be relocated in the keyword group ‘descriptive’ which will be renamed ‘misc. descriptive’
  • The keyword ‘where to?’ in the keyword group ‘moving away’ will be deleted
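The SOC2000 recoding rules above amount to a simple lookup table. A sketch of what that table looks like outside Transana (the keyword names are as in the post; the function name is hypothetical):

```python
# Mapping from 'local jobs' keywords to major SOC2000 groups.
# Keywords with insufficient information ('shops/retail', 'airport',
# 'Civic', 'Hospital', 'self employed') are deliberately left unmapped.
SOC2000_RECODE = {
    "bin men": "elementary", "cleaners": "elementary",
    "factories": "elementary", "security": "elementary",
    "call centres": "sales/customer service",
    "garages": "skilled trades", "building/construction": "skilled trades",
    "nurses/doctors": "professional", "schools/teaching": "professional",
    "office work": "administrative and secretarial",
    "old people's home": "personal services",
    "police": "associate professional and technical",
}

def recode(keyword):
    """Return the SOC2000 group for a keyword, or the keyword itself
    if no recoding applies."""
    return SOC2000_RECODE.get(keyword, keyword)
```

In practice I’ll apply these rules by hand via Transana’s keyword management tools, but writing them out as a table makes the scheme easy to audit.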

The final keyword group to review is ‘people’ before I apply the modifications to the coding scheme and see what the resulting data structure looks like.

21st November 2012

So far, I’ve decided on the following refinements to the coding scheme. I’ve also been experimenting with the ways in which Transana’s search function will help me to apply these transformations in the most efficient way possible.

  • All 7 utterances coded as ‘wythenshawe is not unique/different’, bar one, were found to be expressing a defensive attitude towards Wythenshawe’s reputation. A code for ‘defensive attitude towards wythenshawe’s reputation’ already exists, with 15 occurrences in the data. Merging the two codes produces 21 utterances in the data expressing a defensive attitude towards Wythenshawe’s reputation.
  • Closer analysis of the 13 clips comparing Wythenshawe ‘with London’ or ‘with Manchester’ revealed that 8 of them see these cities as favourable when compared with Wythenshawe. Although not as explicit as the 13 utterances coded as ‘other places better than wythenshawe’, these 8 clips do express an understanding of Wythenshawe as somehow inferior, less desirable, less exciting or less safe than other places they know. These 8 clips will therefore be recoded as ‘other places better than wythenshawe’, increasing the frequency of this code in the data from 13 to 21.
  • ‘know the area’ and ‘know the people round here’ will be combined into a new code ‘familiarity’. ‘wythenshawe as home’ and ‘wythenshawe as totality’ will also be merged into this new ‘familiarity’ code.
  • ‘would definitely move’ and ‘would like to move’ will be combined into a new code ‘would like to move’
  • ‘would like to come back’ and ‘would like to stay’ will be combined into a new code ‘attached’
  • ‘wyth YP bad at school’ will be combined with ‘YP problematic’
  • ‘quiet/boring’ and ‘bad/horrible’ will be combined into a new code ‘critical’
As I outlined in my last post, I have to be careful to make transformations that enhance rather than degrade the utility of my data. This means reducing the data, or chunking clips together, in a way that allows the extent of a theme to be fully appreciated rather than spread thinly across several keywords. At the same time, though, I don’t want to merge keywords which identify important or interesting differences in meaning, even if these are subtle.

For instance, I’ve decided not to merge ‘other places better than wythenshawe’ with ‘critical’, or to merge ‘wythenshawe nicer than other places’ with ‘good place’, because there’s a potentially interesting disparity in the data. The keyword group ‘affective orientations’, which identifies how young people describe Wythenshawe in its own right, shows an even split (after I apply the transformations above) between ‘critical’, ‘balanced’ and ‘positive’ views of Wythenshawe, each with a frequency of 10 in the data. However, the keyword group ‘comparisons’, which identifies how young people talk about Wythenshawe in relation to other places, shows a clear skew (after the transformations above) towards ‘other places better than wythenshawe’, with a frequency of 21; ‘wythenshawe nicer than other places’ has a frequency of only 1. So it seems that young people tend to be critical of Wythenshawe when comparing it to other places, but more positive when considering the place in its own right. This is a potentially interesting aspect of the data which would be lost if I merged the positive/negative codes in the comparative/affective orientations keyword groups.

Tomorrow I’ll be considering whether, and how, the coding scheme can be reduced further, and soon I’ll apply the transformations and visualise the coding scheme again to see whether any stronger themes emerge. I’m looking at alternative visualisation tools which can display codes in their keyword/keyword group hierarchy.

Something I have in mind for starting the process of producing different ‘conceptions’ (once I’ve finished refining the coding scheme and have been back through each of the 194 clips to check their individual coding) is to test the overlap between different keywords. This can’t be done directly in Transana, but the search function makes it easy because it allows you to save all clips identified by a search as a new collection. Testing for overlap will involve searching for all clips with a certain keyword, or group of keywords, assigned, saving this as a new collection, and then running a Collection Report to see how frequently another given keyword appears in that collection. It’s a simple way of testing the overlap between two sets, similar to the measure of ‘consistency’ in Qualitative Comparative Analysis.
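The overlap measure I have in mind – the share of clips carrying one keyword that also carry another, analogous to QCA’s ‘consistency’ – can be sketched as a simple set calculation. The clip data here is invented purely for illustration:

```python
def consistency(clips, keyword_a, keyword_b):
    """Proportion of clips tagged with keyword_a that are also tagged
    with keyword_b -- analogous to 'consistency' in QCA.

    clips: dict mapping clip id -> set of keywords.
    """
    with_a = {cid for cid, kws in clips.items() if keyword_a in kws}
    if not with_a:
        return 0.0
    with_both = {cid for cid in with_a if keyword_b in clips[cid]}
    return len(with_both) / len(with_a)

# Invented example: four clips with their keyword sets
clips = {
    1: {"attached", "positive"},
    2: {"attached", "critical"},
    3: {"would like to move", "critical"},
    4: {"attached", "positive"},
}
# consistency(clips, "attached", "positive") -> 2/3
```

In Transana I’ll get the same numbers the long way round, by saving search results as collections and reading off frequencies from Collection Reports.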

20th November 2012

I’ve now finished the first pass of the data, extracting all utterances relating to Wythenshawe from the interviews. In total, my ‘Wythenshawe’ collection in Transana contains 194 clips, coded according to my exploratory coding scheme which contains 14 keyword groups and 71 keywords. After the first pass, the most frequently-applied codes (and the frequency of their occurrence) are:

  • Crime (28)
  • Local facilities (27)
  • Limited local job opportunities (24)
  • Green spaces (15)
  • A defensive attitude towards Wythenshawe’s reputation (15)
  • Comparisons with other places that are nicer than Wythenshawe (13)
  • Local jobs in shops/retail (13)
  • Local underinvestment/decay (11)
  • A definite plan to move (11)
  • A desire to move (11)
  • Local young people are a problem (11)

The entire coding scheme is summarised in the chart below and the details are here.

The next stage of the analysis involves two basic processes: refining the coding scheme and refining the way individual clips are coded. This will take me away from the raw interview data – something I won’t revisit until I come to compile my collection of utterances relating to aspirations. Taking the 194 clips I have in my ‘Wythenshawe’ collection, I’ll be looking to:

  • Merge codes that identify the same meaning in the data (for instance, I may decide to merge ‘a definite plan to move’ and ‘a desire to move’)
  • Examine codes with only one or two clips assigned to them and see whether these clips can effectively be assigned to another, more frequently used code
  • Recode clips that could be described more effectively by a code developed later in the coding process (most likely for interviews I coded early in the analysis)
  • Delete any empty codes that remain after these three processes are done (codes that no longer have any clips assigned to them)
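The first and last of these steps can be sketched as operations on a mapping from codes to clips. The function names and data below are hypothetical, just to make the bookkeeping explicit:

```python
def merge_codes(scheme, source, target):
    """Merge all clips assigned to `source` into `target`, removing
    `source` from the scheme.

    scheme: dict mapping code name -> set of clip ids.
    """
    scheme.setdefault(target, set()).update(scheme.pop(source, set()))
    return scheme

def drop_empty(scheme):
    """Delete codes that no longer have any clips assigned to them."""
    return {code: ids for code, ids in scheme.items() if ids}

# Invented example: the merge floated in the first bullet above
scheme = {
    "a definite plan to move": {3, 17, 42},
    "a desire to move": {5, 17},
    "stale code": set(),
}
merge_codes(scheme, "a definite plan to move", "a desire to move")
scheme = drop_empty(scheme)
# scheme -> {"a desire to move": {3, 5, 17, 42}}
```

Because clips are sets, a clip coded with both of the merged codes (clip 17 here) is counted once after the merge rather than inflating the new code’s frequency.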

The main aim is to slim down the coding scheme, without getting rid of codes that do an effective job at capturing something in the data that isn’t captured by another code in the scheme – even if it’s tempting to do so because the code may only have one or two clips assigned to it. It’ll involve me being critical about why a code might have a low frequency in the data. If it’s an artefact of the data itself, i.e. the code is picking something out in the data that’s ‘real’ but just happens not to occur very often, then the code should stay. If a code’s low frequency is an artefact of my coding, i.e. it’s a code I used early on but then decided was redundant because its meaning was captured by a new code, the code should go.

There are quite a few redundant codes in my scheme, which is a result of the organic and exploratory way I’ve developed that scheme. My decisions about what codes to assign to particular clips, and which codes I needed to capture different aspects of the data effectively, were interdependent. The coding scheme, in whatever form it existed at the point I was coding a particular clip, influenced my interpretation of that clip: “how can I fit this clip to the existing codes in front of me?”. But at the same time, each clip I coded influenced my interpretation of the coding scheme: “(how) do I need to develop the coding scheme in order to be able to capture everything in this clip?”. It’s an interesting dialectic to work with, but it means that coding clip 194 involved quite a different thought process to coding clip 1 – which is why I now need to revisit each clip in turn, in order to make sure I’m happy with the way individual clips are coded and I’m happy with the state of the coding scheme.

8th November 2012

I’ve been through 6 of the 16 interviews on the first pass, and have so far extracted 82 clips – items of interview data – that in some way refer to Wythenshawe. At this rate, I should end up with around 225 clips at the end of the first pass of the analysis, from which I’ll construct my collections. Although working with Standard Clips, as I’m doing, is more focused on grouping/categorising clips than coding individual clips, I’ve been coding clips as I go along to help me out when I come to start compiling my categories on the next pass of the data. These ‘categories’ I begin to put together on the next pass will be different collections of clips, each representing a different ‘conception’ of Wythenshawe present in the data.

Out of interest, Transana can produce a summary report of the contents of a collection. So, for the 82 clips I have so far in my ‘Wythenshawe’ collection, I can see that some of the most common themes are:

  • A defensive attitude towards Wythenshawe’s reputation (12 occurrences)
  • Talk of crime in the local area (10 occurrences)
  • A definite desire to move away (8 occurrences)
  • Wythenshawe has a mix of ‘good’ and ‘bad’ people (7 occurrences)
  • Wythenshawe’s facilities (7 occurrences)
  • Wythenshawe’s green spaces (7 occurrences)

A theme may derive its frequency from multiple occurrences in a single interview, or from a single occurrence in a number of interviews, so interpreting these counts is tricky. At this stage, they’re not playing any role in the analysis, so I can treat them simply as a broad indicator of the sorts of themes that will probably play the most instructive role when it comes to compiling my conceptions in the outcome space.

As I get more accustomed to the way data becomes structured in Transana, I’ve got thinking about what this data represents, how I’ll be able to manipulate it and what this means, substantively, for my findings. One important observation is that ultimately, when I come to construct my conceptions of Wythenshawe in the second, third and fourth passes of the data, I won’t be discriminating according to interviewee. I’ll be working solely on the basis of clips, regardless of which interviews these clips come from. So data from an interview may be used to construct multiple different conceptions. This makes theoretical sense, as people have a number of cross-cutting, overlapping and, indeed, sometimes apparently contradictory orientations to their local area. I won’t be constructing a set of 2-6 conceptions and then mapping each interview onto one conception, such as “Aaran has conception A”, “Bradley has conception B” and “Cameron has conception C”. It might be that an interviewee clearly aligns with only one conception, but this won’t necessarily be the case. I need to refer back to the literature on phenomenographic analysis to see if this is the norm, or if it’s more normal to map participants to a single conception. If the latter, I think this is quite naive and I’d be prepared to argue against it.
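The decision to work from clips rather than interviewees means the clip-to-conception mapping is many-to-many at the level of interviews. A small sketch of the resulting data structure (names and data invented for illustration):

```python
def conceptions_per_interview(conceptions, clip_interview):
    """For each interview, the set of conceptions its clips contribute to.

    conceptions: dict mapping conception name -> set of clip ids.
    clip_interview: dict mapping clip id -> interview id.
    """
    out = {}
    for name, clip_ids in conceptions.items():
        for cid in clip_ids:
            out.setdefault(clip_interview[cid], set()).add(name)
    return out

# Invented example: one interviewee's clips feed two conceptions
conceptions = {"dysfunctional": {1, 2}, "territorial": {3}}
clip_interview = {1: "Aaran", 2: "Bradley", 3: "Aaran"}
per_interview = conceptions_per_interview(conceptions, clip_interview)
# -> {"Aaran": {"dysfunctional", "territorial"}, "Bradley": {"dysfunctional"}}
```

The point of the sketch is simply that nothing in the structure forces each interviewee into a single conception, which matches the theoretical position I’ve taken above.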

6th November 2012

All the interviews are now transcribed and timecoded – data preparation is complete. I’m now underway with the first analytical pass of the data. Analysis of AV data in Transana basically involves three steps:

  1. Identifying ‘clips’ (short sections of media data with accompanying transcript) within ‘episodes’ (the media and transcript for an entire interview) based on some broad analytical criterion. In my case, I’m looking for clips that say anything about how young people understand Wythenshawe, because my ultimate goal with this intensive phase of the research is to produce a set of such ‘ways of understanding’, or ‘conceptions’;
  2. Making a decision about which properties that clip has (which concepts it invokes, which aspects of reality it picks out, its affective tone and so on) based on the phenomena or analytical constructs that are relevant to my research;
  3. Either coding that clip by attaching keywords to it (for example, ‘area reputation’, ‘no jobs’, ‘green spaces’) or placing it within a ‘collection’ of clips (essentially just a folder), where the collection has its own description/definition which is shared by the clips placed within it. In Transana, the first approach to analysis works with Quick Clips and the second approach works with Standard Clips.

Choosing whether to work with Quick Clips or Standard Clips involves making a choice about the particular approach I want to take to qualitative analysis. Manipulating my data using Quick Clips centres around coding individual items of interview data, which are then organised according to their codes, or keywords, while using Standard Clips centres around grouping similar clips together because they share some higher-level property, and doesn’t necessarily involve assigning codes to individual clips. Working with Quick Clips is initially quicker, and back-loads most of the analytical work, because a Quick Clip can be created just by specifying a single keyword to attach to a section of interview. Working with Standard Clips is slower, front-loading the analytical work, because it involves giving a clip a definition/description and choosing which collection it should belong to (and, optionally, also assigning keywords to the clip).

Towards the end of this screencast in the Transana documentation, it’s clear that the differences between Standard Clips and Quick Clips are far less significant than their similarities. For instance, Standard Clips can be coded using keywords, and Quick Clips can be grouped together into collections, making them functionally indistinguishable. But it’s an interesting distinction because it involves approaching and working with qualitative data in subtly different ways. In my mind, Standard Clips encourage a more holistic view of a datum as being part of a dataset and relying for its definition, at least in part, on the way in which other data within the dataset are themselves defined, whereas Quick Clips encourages a more atomistic view of data. Given that my ultimate goal is to produce 2-6 categories or sets of clips, which will represent conceptions of Wythenshawe in the phenomenographic outcome space, it seems fairly clear to me that I want to work with Standard Clips because my task is essentially one of grouping and categorising. Working with the data in this way will produce, on the first pass, a single collection of clips relating to Wythenshawe (each clip with keywords assigned); on the second pass, a number of collections based on my initial appraisal of how to categorise the variation embodied by the clips and their keywords most effectively; on the third pass a more refined set of collections; on the fourth pass a further refined set of collections, and so on until no more refinements can be made and I have produced between 2 and 6 conceptions of Wythenshawe from the raw data.

The first analytical pass of the data will be exploratory, steered only by a desire to identify utterances in the data that refer to Wythenshawe in any way. I don’t know at this stage which conceptions of (or ‘ways of talking about’) Wythenshawe lie behind the variation in the data. The second and any subsequent passes of the data will be less exploratory, guided not only by the individual properties of the clips in the dataset (as specified by their keywords) but also by the collections which now organise and structure it.