Improving Your Literature Reviews with NVivo 10 for Windows

Welcome to today's webinar on improving your literature reviews with NVivo. Our guest presenters today are Wendy Quach, an associate professor at San Jose State University; Michele McKelvey, an associate professor at the University of Nebraska at Kearney; Kristy Weissling, an associate professor of practice at the University of Nebraska-Lincoln; and Shelley Lund, an associate professor at the University of Wisconsin-Milwaukee. Welcome, everyone.

Hello, everyone. We're going to get started with our presentation, and we want to disclose at the outset that this research was supported by a grant from the US Department of Health and Human Services as a field-initiated project; you can see the grant number on your screen. We also need to make sure you know that the contents of this presentation do not necessarily represent the policy of the Department of Health and Human Services, and you should not assume endorsement by the federal government for anything we're about to say. My name is Michele McKelvey, and I'll be your first presenter today. We also need to disclose that we work for the universities listed below and receive salaries from them. Next, we would like to thank the research assistants and students who participated in this project: Aaron Branson, Heather Burke, Carmen Daniels, Sharon Glaser, Peter Cow, Caitlin Liberty, Carly O'Keefe, Mallory Thompson, and Kayla Isernest.

To begin, we want to make sure you know that we're not promoting any of the software packages we're going to talk about; none of us has any financial interest in, or receives any money from, any of their producers. We're simply going to describe how we used different software packages as we worked through a massive scoping review, using them not only for data analysis but also for organization across four different universities.

Here's an overview of the presentation. We're going to provide some background information about our specific project, recognizing that the methods we used can be applied to your own projects; introduce the software we used for managing the tasks of the project; describe how we managed the data before loading it into NVivo for analysis; highlight specific features of NVivo that we used for data analysis; and share any other useful tips and software we came across during this scoping review of the literature.

A little background on our project: this is a clinical assessment project. With these scoping reviews we sought to investigate the assessment practices of speech-language pathologists performing augmentative and alternative communication (AAC) assessments across four different populations. The populations we chose all have difficulty communicating using intelligible speech, so the people included in this research were individuals with aphasia, individuals with autism, individuals with ALS, and individuals with cerebral palsy. Our goal for this project was to gather as much information as possible about the assessment processes available in the literature. When we started, we wanted to cast a wide net over that literature base and gather up as much as we could. It was a lot, and it took us over a year to complete.

So how did we manage all of this data? Because there was a lot of it. As part of our grant we included the ability to purchase computers with dual monitors and an NVivo site license for each site. Because of the massive amount of data we were looking to process, we anticipated needing software capable of managing large amounts of data; the old Excel spreadsheet and a package of colored highlighters wasn't going to be sufficient.
We also had to manage this project across multiple sites — there were four universities involved — and across multiple researchers and research assistants. We had taken on an enormous task and we needed a lot of help. The tools we're going to talk about are how we gathered that information: we'll cover the strengths and weaknesses of some of the products we used, how we organized materials and kept things centrally located, and how we tracked the tasks of the project — assigning them, completing them, and organizing that information.

I'm going to begin with the project management software we used for document storage. We used a couple of packages. We began with Dropbox; however, Dropbox is not HIPAA secure, and it wasn't able to hold the number of articles and citations we had, so we switched to Box. This is where we kept all of the project materials, and it is HIPAA secure. One of our universities had a subscription to Box, so we were able to log on, use it, and take advantage of its additional storage. Tracking our tasks was done through a couple of different packages. In Redbooth you can create to-do lists and manage the references and links we created as a form of data collection; we also used Redbooth to assign tasks and create task lists. We used that software over the course of the project, but at some point we found that it didn't meet all of our needs. Another thing we needed was a secure place to store information from interviews and the like. Penfile was one such package — another one our universities had a subscription to — and it gave us another secure server where we could place information. We started this project about two years ago, so some of these packages have since added features that make them more conducive to tasks like ours. Many of them also have free versions, but as your project gets bigger and the amount of data you're collecting increases, you'll need additional space, and for some of these packages you will need to purchase it.

The next thing we created was a set of procedural documents — not only for our screening procedures, covering how we would carry out screening and reliability tasks, but also for the coding procedures. We created manuals: documents used to create consistency across researchers and sites. I want to talk specifically about the coding documents we created for NVivo. When we built those documents we pulled information off the NVivo website and wrote out procedures for finishing tasks, such as how to import an article into NVivo or how to code a document using coding stripes. That information is available on the NVivo website; we clipped those instructions into the document and also created links that individuals could click to go out to demonstration videos or more detailed instructions. The purpose of the documents was to create a central place where everyone working on the project could access the information they needed to carry out their tasks. The next thing you'll see is an example of a flow chart we created for the order of tasks, so that everyone knew which things needed to be completed in which order: after the reviewers completed the screening tasks for the titles and abstracts, those articles were redacted and loaded into NVivo so that we could analyze the information within them.

About Redbooth: we used it for tracking all of the citations that were swept up in the initial screening.
We wanted a separate place to track these articles and citations because we wanted the articles and citations loaded into NVivo to be only those we actually coded. We created one account with four projects, so each disorder had a project: one for cerebral palsy, one for autism, one for ALS, and one for aphasia, and each site was responsible for a disorder area. The project manager for a particular disorder area then created the task lists or to-do lists that needed to be carried out for screening, and those also had timelines, so everybody could see, from one document, the timelines, the organization, and how the project was progressing. The advantage of using Redbooth was that we could all see it: we could all look at how different aspects were being completed, what was done, and what still needed to be done. If we saw tasks that needed to be completed and we needed to reassign them to another site, that was easy to do in Redbooth. The challenge during our project was that Redbooth itself decided to rebrand and reorganize its system, so in the middle of this massive project we had to relearn how to use the software. You also need a subscription for every user who logs in to Redbooth, and you have to pay for each additional user who comes onto the project; as we changed research assistants, that became an additional task. We also could not assign one task to two people. When we were screening titles and abstracts, those were done by two different researchers, so each researcher had to complete that task list, and there was no way to set that up in Redbooth.

So we switched to an app called Todoist, which has a free version available from the app store, and we were again able to create task lists for each of the projects. One of the benefits was that we could attach a note to every task. If a task was in progress, or if we wanted to send a message to the next researcher, we could put a note there; if there was a note about some difference in how the data needed to be coded, or some concern, we could log that there too. After entering a note, we could automatically generate an email to the next person we were passing the task on to. We again ran into the problem that two people could not be assigned to one task, but we could create a duplicate of a task and assign it to a second person, so we could see that both researchers were completing the task list for screening titles and screening abstracts. The version of Todoist we used is free; there is a premium version for about thirty-eight dollars if you choose to purchase it, but the free version met our needs at the time.

Now for the reference management software. This is Wendy Quach, and I will take over from here. The reference management software we used was RefWorks. We had all of the databases we searched export their results into RefWorks, so it allowed us to see all of the references that were found by our Boolean searches. Initially we were going to do the screenings within RefWorks; however, we quickly found out that, because we shared one account, we ended up deleting each other's sources — backup was a really great lesson we learned here. So in the end, we exported the reference lists for each of the disorders into an Excel spreadsheet like this one, and we had two reviewers go in, do the initial screenings on the title as well as the abstract, and determine whether to include or exclude each source. If there was a disagreement, we would reconcile the differences.
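To give a sense of that screening workflow, here is a minimal sketch in Python (rather than the Excel spreadsheet we actually used) of merging two reviewers' include/exclude decisions and flagging disagreements for reconciliation. The article IDs and decisions are invented for illustration.

```python
# Merge two reviewers' screening decisions and flag disagreements.
# The article IDs and decisions are invented for illustration.

reviewer_a = {"A001": "include", "A002": "exclude", "A003": "include"}
reviewer_b = {"A001": "include", "A002": "include", "A003": "include"}

def screen(a: dict, b: dict):
    """Return (agreed decisions, IDs needing reconciliation)."""
    agreed, to_reconcile = {}, []
    for article_id in sorted(a.keys() & b.keys()):
        if a[article_id] == b[article_id]:
            agreed[article_id] = a[article_id]
        else:
            to_reconcile.append(article_id)  # the two reviewers discuss these
    return agreed, to_reconcile

agreed, to_reconcile = screen(reviewer_a, reviewer_b)
print(agreed)         # {'A001': 'include', 'A003': 'include'}
print(to_reconcile)   # ['A002']
```

The same logic works in a spreadsheet with a column per reviewer and a formula flagging rows where the two columns differ.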
As a result, Shelley, who was the administrator for RefWorks, ended up having to move all of the sources we did not include into another folder, so that we could still see all of the sources. We also used this reference management step, together with Excel, to determine the number of articles reviewed by each research site, which ones were eliminated, which ones were included, and what the reliability was for each of the disorder areas. That is how we used reference management and Excel to keep track of the number of sources we had.

This takes us to the bulk of what this presentation is about, which is NVivo; we used NVivo quite a bit in this project. I want to start at the beginning, when we conducted a pilot study in which we created operational definitions associated with assessment in augmentative and alternative communication. From that pilot study we used those definitions as the basis of our coding structure in NVivo, and we created a master NVivo template with these codes — parent and child nodes and source classifications — for each of the disorders. You can see a sample of our coding definitions on your screen now. We came up with major themes, or parent nodes, of "what," "how," and "why," and the sub-themes became the child nodes. We know that qualitative research doesn't generally use operational definitions; however, because we had four sites, with research assistants at each site, we needed a way to have everyone start on the same page. That said, as we coded the sources, if we kept finding a theme that showed up, we were able to add new definitions or new nodes in NVivo and expand our categorization.

I'm going to move now to NVivo itself, and this is the outline of what we're going to do there. I want to start with sources. We took the sources we had from RefWorks and redacted all of them, so that all reviewers were blind to the authors as well as the journals and titles, and we then uploaded all of the redacted articles into NVivo. Each article became a source in NVivo and was assigned an alphanumeric ID. Once they were imported as sources, we were able to code them, and we did two different types of coding. For each source, we developed attributes, or screening questions, to evaluate the quality of the source. I'm going to switch to NVivo now and show you what that looks like. Here is one of our articles — a source — and these are the attributes we had for each source. We had to determine whether it was a newspaper article or a journal article (the majority of our sources were journal articles), and we also coded the type of population — aphasia, ALS, cerebral palsy, or autism — the number of participants, and the ages of the participants, so that we could run queries later on the attributes of the sources. For those who use EndNote with NVivo: if you import your EndNote library into NVivo, this kind of classification is created automatically, but we had to code it for each source individually. So that's one type of coding we did, the attributes. Once you classify all of your sources, you can go into the classifications section and look at the types of sources you have.
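As a rough analogy to NVivo's source classifications, here is a sketch in Python of attaching attributes to sources and filtering on them. The attribute names mirror the ones described above, but the records and the `filter_sources` helper are invented for illustration.

```python
# Each source carries attributes, analogous to an NVivo source classification.
# The records are invented for illustration.
sources = [
    {"id": "APH-001", "kind": "journal article", "population": "aphasia", "n_participants": 12},
    {"id": "ALS-004", "kind": "journal article", "population": "ALS", "n_participants": 3},
    {"id": "AUT-002", "kind": "newspaper article", "population": "autism", "n_participants": 1},
]

def filter_sources(sources, **criteria):
    """Return the sources whose attributes match every criterion."""
    return [s for s in sources
            if all(s.get(k) == v for k, v in criteria.items())]

journal_articles = filter_sources(sources, kind="journal article")
print([s["id"] for s in journal_articles])  # ['APH-001', 'ALS-004']
```

This is the same idea behind running an attribute-based query later: because every source was classified up front, filtering by population, source type, or participant count becomes a one-step operation.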
The majority of our sources were journal articles. If you double-click on a classification, you get a really good summary table of all of the sources — whether each attribute was assigned, or whether you missed one, like this last source here where everything was unassigned — so you can get an overview of your data and run any queries you might want against those attributes.

The next thing I want to show you is coding, and how you might code an article. I've pulled up one of our articles here. As I said, we used a template that already had nodes, with definitions for those nodes, established from the pilot study we had conducted. We love the way you can code inline: if we found something in the article that related to one of these nodes — say this sentence about the Functional Communication Profile was something I wanted to code — you simply highlight the text in your source and drag it over to the node, and it automatically gets coded. (Bear with me for a moment — for some reason the highlighting isn't working on this article in the live demo; it worked when we tried it earlier, and whether highlighting works can depend on whether you are selecting text or selecting a region of the page.)

What we also love about NVivo is that you can see what you have coded. If I want to look at what has been coded, these are the nodes, and you can also highlight the text that has been coded — there's a highlight here — and you can see the codes through these coding stripes; "predicts AAC use" was one of the codes we used. We also had people do reliability, so two researchers coded each source, and one way you can examine differences in coding is with the coding stripes. If you go to coding stripes and choose selected items, you can select the users. Right now it's showing everybody; I don't want to compare everybody, I want to compare this person and myself, so I select those two and click OK, and there are the stripes: Heather coded this part, and if I scroll down you can see more areas. You can also highlight the areas so you can see where each individual user coded. You can likewise show stripes for a single user: say I want to determine what Heather coded in this area of the source, I can choose her stripes, select nodes, click "all," and there you see all of the areas Heather coded — some informal assessments, some standardized testing — along with the areas I coded. So you can get a general overview of the differences in the coding and the nodes used by each researcher and research site. Another thing that is great for reliability: if you want to compute a kappa coefficient, NVivo actually does that for you through a coding comparison query.
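For intuition about what that query computes, here is a self-contained sketch of Cohen's kappa for two coders making yes/no judgments on the same text units. The decision lists are invented, and NVivo's per-node calculation is more fine-grained (it weighs overlap within each source), so this shows only the underlying idea.

```python
# Cohen's kappa for two coders' decisions on the same units:
#   kappa = (p_o - p_e) / (1 - p_e)
# where p_o is the observed agreement and p_e the agreement
# expected by chance. The decision lists are invented for illustration.

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    labels = set(coder_a) | set(coder_b)
    p_e = sum((coder_a.count(l) / n) * (coder_b.count(l) / n) for l in labels)
    if p_e == 1.0:
        return 1.0  # degenerate case: both coders used a single label throughout
    return (p_o - p_e) / (1 - p_e)

a = [1, 1, 0, 1, 0, 0, 1, 0]  # 1 = coded at the node, 0 = not coded
b = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(a, b), 2))  # 0.5
```

Note that 6 of the 8 units agree (75%), yet kappa is only 0.5, because half of that agreement is what two random coders would produce by chance; this is why kappa is a stricter criterion than raw percentage agreement.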
I'll name this query "compare Heather and Wendy." For user A I select Heather, for user B myself, and I run the query across all nodes and all sources. This is what it gives you, and you can see the kappa in this column right here. Kappa is great because it takes chance agreement into account. It goes from 0 to 1: 0 means no agreement beyond chance, and 1 indicates total agreement. We decided that anything 0.5 or below meant we had to reach an agreement, so the two researchers would meet via Skype, determine whether we agreed or disagreed, and then either uncode or add codes in the original article. One thing you should do if you're going to use multiple coders: in your options, under the general tab, check "prompt for user on launch." That way, when NVivo launches, each individual has to enter their user profile, so NVivo can track which user is coding — and that is how you can do your reliability.

OK, so that was reliability. We also wanted to show you how to run some queries. These are all of the sources for aphasia, and I thought I would run a coding query on some of our nodes. I create a coding query, add it to the project so that it saves, and say I want to see how many of these sources talked about both informal assessment and language. I'll title it "informal assessment and language" and go into the coding criteria. I go into Advanced and select the node: I expand the nodes, here is "informal methods," I select that, click OK, and add it to the list, which brings it into the box up above. It's very similar to the Boolean method: here's your first criterion, and NVivo asks what your next criterion is. I select AND again, go into the "AAC user" area, select "language," add that to the list, and run the query. It gives me the sources that were actually coded at both informal assessment and language, and you can see there were two. The reason this one shows up as a region is because of the text: it was one of the sources we scanned, so we were not able to highlight the text and had to do what Kristy described earlier — highlight the region rather than the text. This is a good way to cross-reference the points in your searches. Those are the references, and you can go straight to the PDF documents as well; it will show you which highlighted parts were coded at both nodes — both of those codes. So we've covered attributes, and coding the text using the text tool versus the region tool; wherever possible we found it much better to use the text tool, so if there was a way to optimize a PDF in Adobe first, we tried to do that. We also covered the coding stripes — you can see another sample here, for autism, with a lot more coding — and this is another query we ran, comparing users.
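An AND query over nodes behaves like a set intersection. Here is a sketch in Python, with an invented mapping from source IDs to the set of nodes coded in each source; the node names echo the ones used in the demo.

```python
# A coding query with AND is essentially a set intersection over
# the nodes coded in each source. The mapping is invented for illustration.
coded_nodes = {
    "APH-001": {"informal methods", "language", "standardized testing"},
    "APH-002": {"standardized testing"},
    "APH-003": {"informal methods", "language"},
    "APH-004": {"informal methods"},
}

def coding_query_and(coded_nodes, *required):
    """Return source IDs coded at every one of the required nodes."""
    return sorted(sid for sid, nodes in coded_nodes.items()
                  if set(required) <= nodes)

print(coding_query_and(coded_nodes, "informal methods", "language"))
# ['APH-001', 'APH-003']
```

Swapping the subset test for a non-empty intersection would give you the OR form of the same query.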
We did reliability on 50% of the articles — so it was quite a lot that we ended up coding; this is a sample for autism. We met weekly via Skype just to keep up to date on what was going on and to keep pushing the project along. We also used a random number generator to determine which articles to code for reliability.

We wanted to talk about how useful we found NVivo. As Michele mentioned earlier, we had started the pilot project using an Excel spreadsheet, and with this massive undertaking we just couldn't continue with Excel. We also made heavy use of the NVivo tutorials and videos online and found them invaluable. One piece of advice: do the actual training that QSR provides before you undertake something like this. We did the training about halfway through our literature review and learned a great deal more. One thing I would do differently is run text queries earlier and more often: in NVivo you can look at the frequencies of words that come up across your sources, which would give you an idea of the themes that might emerge if you didn't already have themes or codes before starting your literature review. Michele and Kristy, did you want to share what you might do differently?

One of the things we learned after the training was how to use memos and journals. While coding an article, sometimes a passage was a really good example of a particular node, and I wanted to make a note of it so I could come back and use it in an article or a presentation. I would also add notes for other reasons. Sometimes, when you're coding articles across autism and aphasia — autism focused primarily on children, aphasia primarily on adults, because that's who gets aphasia — there were codes that needed to be created to fit the adult population that we maybe didn't have in the original template. Creating a note or a memo about that, and then being able to look back at those memos to have a clear understanding of our data, was very, very helpful. And Kristy had some ideas about the use of journals as we were coding.

One thing you can do is create a journal, much as you would in other qualitative methods, where you journal as you work your way through an interview or a video: you can journal as you work your way through articles, as you start to see themes emerging, or things you thought should be there that you weren't finding. You can then go back and actually run an analysis on the journals — again using text queries and other query options — to see what kinds of themes are coming out in your journaling. That's a tool we didn't use initially in the scoping reviews but will use a lot more as we move to the interview and video parts of our project.

The interesting thing about using NVivo for literature reviews is that it allows you to organize the articles much more easily and do the coding much more quickly, because it's a drag-and-drop process rather than working in a spreadsheet where you have to put X's in boxes. It went much more quickly this way, and I really recommend it as a way to organize your data for your lit reviews.
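The random selection of articles for reliability coding can be sketched in a few lines; the source IDs are invented, and the 50% proportion matches what we used.

```python
import random

# Pick 50% of the coded articles, at random, for reliability coding.
# The source IDs are invented for illustration.
article_ids = [f"AUT-{i:03d}" for i in range(1, 21)]

rng = random.Random(42)  # fixed seed so the draw is reproducible
reliability_set = sorted(rng.sample(article_ids, k=len(article_ids) // 2))
print(len(reliability_set))  # 10
```

Seeding the generator matters in a multi-site project: it lets every site reproduce exactly the same reliability sample from the same article list.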
Because there were so many of us, another nice feature we used was putting those operational definitions right into the nodes. If you click on a node and go down to Node Properties, a screen comes up where you can enter your operational definition into the project. Then, as you're coding along, looking at your nodes, and you're not sure what can be included, you can easily pull up the definition right there. You can also use the color-coding system and give a specific node a color, just to help you identify the data more easily when you're going through looking for different themes and codes.

In the analysis portion, one of the things I mentioned earlier about queries was getting an idea of what some of the themes are, and you can do that by running a word frequency query. Again I'm going to add this to the project. Say I want the top 100 words: in the word frequency criteria, instead of 1,000 I enter 100; maybe I don't want matches to be exact, so I pull the slider over; I choose all sources and run it. It just takes a little while. These are the top 100 words across all of the sources loaded into this NVivo project — all of the aphasia references — and you can see the counts: the top word was "communication," then "use," then "aphasia." You can also see the similar words — the synonyms — that were grouped in on the right. This is the summary view, but just below it there is a Word Cloud tab, so you can create a word cloud of the most frequent words in your sources, and you can also get a tree map as well as a cluster analysis, which looks at the branching of all of the words. We found the word cloud very useful for some of the presentations we did, just to convey the major themes or words used across all of the sources — "communication," "use," and, obviously, "aphasia." Also, say there are words in the summary that you don't really want to include, like "make": you can right-click on a word and add it to the stop words list, so the next time you run the query it won't appear in your word frequency list. Let's look at the cluster analysis again: for those of you who haven't started with any initial work and are trying to find themes across a variety of articles, this analysis might be helpful. Could you scroll down a little so we can see it? Here you can see how the words are related to one another, or how often they cluster together, and this can be helpful for spotting themes emerging in the literature. We see that "user," "information," and "conversation" are clustering together there, so there might be something to look at. This is one way to examine the literature initially if you don't already have codes; we had already done that work and had codes, but it might help you come up with ideas for codes. We also looked at the tree map, because it looks a little different from the cluster analysis. I actually ran this analysis earlier and ours looked quite a bit different than this — I'm not sure why; I think maybe we ran it with 50 words rather than 100 — but the tree map can also help you see how things are related and which words cluster together in the articles. Remember that when you're using NVivo to review literature, your sources are the articles, so what you're seeing is how things cluster across articles.
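A word frequency query with a stop words list reduces, in essence, to counting tokens. Here is a minimal sketch in Python, with two invented sentences standing in for full articles; NVivo's real query additionally groups stemmed words and synonyms, which this sketch omits.

```python
from collections import Counter

# Minimal word frequency query with a stop words list.
# The two "sources" are invented stand-ins for full articles.
sources = [
    "Communication assessment in aphasia must consider communication partners.",
    "AAC use in aphasia: assessment of communication needs and device use.",
]
stop_words = {"in", "of", "the", "and", "must", "a"}

words = [w.strip(".,:;").lower()
         for text in sources for w in text.split()]
counts = Counter(w for w in words if w and w not in stop_words)

print(counts.most_common(3))
# [('communication', 3), ('assessment', 2), ('aphasia', 2)]
```

Adding a word like "make" to `stop_words` and re-running is exactly the workflow described above: the word simply drops out of the next count.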
clustering. It's a very interesting way to look at things. If you were trying to use this for a meta-analysis, you could be moving data or subject characteristics in as nodes. Instead of our study, where we were trying to extract a big initial question, if you were really trying to hone in on specifics and needed to take all of the subject data out of a variety of articles, you could do that by creating nodes or sub-nodes, that is, parent or child nodes. So these were some of the references that were cited in our presentation, and this brings us to the end of our presentation; we wanted to leave some time for any questions that you might have. Okay, thank you. Thank you, everyone. If anybody has any questions, feel free to type them into your questions panel. We did just get one: one person wanted to know why, in the study, you decided to redact some part of the sources, and whether it was required by your method. You mean to redact the sources? Yes. We wanted to be blind to the sources because in our field there are certain experts or specialists in this area, and we wanted to remove any bias one might have when coding if we knew the authors of the source, or where the journal came from, or what the journal was. So that's why we ended up redacting; that was to try to strengthen the validity of the study by eliminating author biases. I see another question there about the four NVivo project files, whether each project file had its own nodes. Yes, there was a project file for aphasia, a project file for autism, a project file for cerebral palsy, and a project file for ALS, that's correct. We put those files up on Box when we were done with them so that the other reviewer could go bring a file down, and the template for the nodes that we created was loaded into each of those project files. So the starter template for
the parent and child nodes was the same at the beginning for each of the disorder areas; however, as information was coded out of the articles, the coders may have added new nodes as things emerged while they were coding. One of the things that we learned along the way was about over-coding: if you over-code, it makes it a little harder to run queries later and compare across the different projects if you want to. But you can always aggregate the nodes, so that was actually a nice thing to learn, in case we did over-code in one specific area. There is also a question about a reason to combine the projects. That's a great question. For this presentation we looked at the four disorders; however, what I failed to mention was that when we were doing the reviews, there were articles that were related to assessment but that didn't cover any of the specific disorders we were focusing on. What we ended up doing was putting them into a general assessment folder, and we have also begun work on that folder. Because there are over a thousand sources in it, we divided it out amongst the four sites; we have coded all of that, and then we did end up having to combine all of the sites' files together. So yes, we did end up combining, but not for these individual projects themselves. As for whether an article would ever be in more than one project: not to my knowledge, Shelley, because each of the areas was a disorder area, so we wouldn't see an autism article in the aphasia folder, and that was the reason for creating that general file. Maybe one of the sources was accepted in the title and abstract review, but as you read the article, the assessment information in it didn't apply to one of the populations that we were
investigating, say a study on individuals with traumatic brain injury or children with Down syndrome, but we still saw value in the assessment information, and that was the whole reason we created that general file, or the general project, I guess you could call it. For the inclusion and exclusion criteria, the article had to be about the disorder area. I think there might have been one article where both ALS and aphasia were discussed, so those articles may have been in both projects, but only because there was some information about assessment of aphasia and some information about assessment of ALS in that particular article, and it met the inclusion criteria for both. If, as Michelle said, it was more general assessment information, it ended up in that general project. So there actually were not four projects; there were five, because anything that had assessment information but didn't meet the inclusion criteria for one of the four disorders went into a general assessment project. Most of the sources where that came up were book chapters, where you'd have a chapter with a section on two disorders; say the chapter was about AAC assessment in children, so there was a section on individuals with cerebral palsy and a section on autism. The next question asks where the NVivo file was stored so that everyone at the different sites could access it. Just to be clear, each site had its own license for NVivo; we all purchased the NVivo software. The template that we created with the codes was stored on Box so everyone could access it, and then we each copied the project. For my site it was cerebral palsy, so I did all the coding for cerebral palsy and created a new project in NVivo that was for cerebral
palsy, but it was based on the template that was up on Box. When the first coder was done coding, they put that cerebral palsy project onto Box again so that the second reviewer could go in and code a randomly selected 50% of those articles. They downloaded it; really, we found it was very important that you actually download it, because you can't work up on Box, you have to download the file to your computer. The second reviewer would review and do their coding, and when they were all done, they would save it with a new file name and put it back up on Box. So we used Box as the repository for this. We didn't have patient data, because these were just journal articles, so we could use Box. When we do the interviews and record anything that's identifiable, we're going to have to use a HIPAA-secure server, and I think right now for our videos we're using PantherFILE for that. Another question was whether or not we created a framework matrix in NVivo for our literature review. We did not; that's really an interesting thing to pursue. Okay, well, thank you, everyone, and thank you, everyone, for attending. A few last-minute questions are starting to trickle in, but I know we're getting close to the top of the hour, so for those questions that we didn't get a chance to get to, we will follow up with you on an individual basis. Just as a reminder, everyone will receive a link to the recording of the webinar as well as a link to a copy of the slides. I would again like to thank everyone for attending today, and really thank our presenters for a job well done. Thank you, thank you, thanks.
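For readers following along outside NVivo, the word frequency query with a custom stop-words list that the presenters demonstrated can be approximated in a few lines of Python. This is a minimal sketch, not NVivo's actual implementation; the sample summary text is invented, and the stop-word choices (including "make," the example mentioned in the talk) are illustrative assumptions.

```python
# Sketch of a word-frequency query with a custom stop-words list,
# similar in spirit to an NVivo Word Frequency query. Sample text
# and stop words are illustrative only.
import re
from collections import Counter

# "make" is added to the stop list, per the example in the talk.
STOP_WORDS = {"the", "a", "of", "and", "to", "in", "make"}

def word_frequencies(text, stop_words=STOP_WORDS, min_length=3, top_n=10):
    """Count word occurrences, skipping stop words and very short tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    kept = [t for t in tokens if t not in stop_words and len(t) >= min_length]
    return Counter(kept).most_common(top_n)

# Illustrative source summaries standing in for the coded articles.
summaries = (
    "Communication aids and aphasia: communication outcomes in aphasia. "
    "AAC use and communication partners make conversation easier."
)
print(word_frequencies(summaries, top_n=3))
# "communication" and "aphasia" surface as the top words; "make" is excluded.
```

Adding a word to the stop list here, as in NVivo, simply removes it from every future run of the query without touching the underlying sources.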
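The cluster analysis shown in the demo groups sources by how similar their wording is. Below is a rough sketch of that idea using Jaccard similarity over each source's vocabulary; the toy article texts and the choice of Jaccard similarity are assumptions for illustration, not a reproduction of NVivo's algorithm.

```python
# Sketch of clustering sources by word similarity, in the spirit of the
# cluster analysis described in the demo. Toy texts; Jaccard similarity
# is one simple choice of word-overlap metric.
import re
from itertools import combinations

def word_set(text):
    """The vocabulary of a source, as a set of lowercase tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a, b):
    """Similarity of two vocabularies: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b)

# Hypothetical sources; art1 and art2 share AAC-conversation vocabulary.
articles = {
    "art1": "user information conversation aac device",
    "art2": "user conversation partner information",
    "art3": "motor speech assessment battery",
}
vocab = {name: word_set(text) for name, text in articles.items()}

# Rank every pair of sources by similarity, most similar first.
pairs = sorted(
    ((jaccard(vocab[x], vocab[y]), x, y) for x, y in combinations(vocab, 2)),
    reverse=True,
)
print(pairs[0][1:])  # the two most similar sources: ('art1', 'art2')
```

Sources that rank near each other here are the ones a cluster diagram or tree map would draw close together, which is why the presenters could spot "user," "information," and "conversation" grouping across articles.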
