Mon May 05, 2014 12:37 am Post
nom wrote:First, I'd create a folder for each source "document" (defined in whatever way makes sense for your material).
nom wrote: ...that comments are more flexible because they highlight the relevant text (whether it be a single letter or several lines) and they can overlap, or even be nested within, text containing other comments.
nom wrote:Third, I would place detailed annotations and comments as memos in separate documents with links back to the source document within Scrivener (much like you described).
Mon May 05, 2014 11:58 am Post
Fanlike wrote:Hi Nom, thanks for your reply; this is really helpful. Thanks also for taking the time to try out my idea. It seems you actually understood my proposal, even if my writing is not always clear.
Fanlike wrote:Your method sounds like another very nice alternative; however, I'm not sure it would work well for the kind of analysis I have in mind, particularly as regards your first suggestion. Correct me if I'm wrong.
nom wrote:First, I'd create a folder for each source "document" (defined in whatever way makes sense for your material).
Anthropologists working in different cultural contexts (sometimes in other languages, as in my case) have to deal with different layers of interpretation when analyzing their data. Those layers emerge as the research unfolds, and it is almost impossible to anticipate your understanding of a text. As a consequence, it is also very difficult to split it in advance into meaningful sub-folders. You could do it as the work progresses, but wouldn't that turn into an endless creation of micro sub-folders? I will give it a try anyway; I'm not sure mine is the best way.
Fanlike wrote:Moreover, my method doesn't require any cut and paste. You just select a sentence or paragraph that relates to the document/node you previously created and link them (right click / link to document). After that, by browsing the document/node's inspector panel (in particular the "document references" section), you can easily retrieve all the sentences or paragraphs you linked to the node. In this way, when you are analyzing a specific node, you can visually organize the material and make comparisons in a very intuitive way. The documents listed in the "document references" section of the inspector can be opened in separate quick reference windows that you can move around as you like (right click / open as quick reference). I'm not sure, but this seems like an efficient workflow.
Fanlike wrote:
nom wrote:Third, I would place detailed annotations and comments as memos in separate documents with links back to the source document within Scrivener (much like you described).
This also sounds more than reasonable. For now I prefer to write memos in the documents' notes, just because I'm in the field and need them to be close to the first-hand data. I guess once I have to work on comparisons and generalization, your approach could also work very well.
Fanlike wrote:Btw, I raised this discussion mainly because I'd like to decide whether to purchase a QDA package or carry out my analysis within Scrivener. It seems you have long experience in this; what is your suggestion? Is dedicated QDA software absolutely necessary?
Thanks for sharing your ideas.
Tue May 06, 2014 12:10 pm Post
Fanlike wrote:...By using your method, you complicate the process when you edit the main document and create sub-folders
Fanlike wrote:4) How do you transcribe your interviews?
Fanlike wrote:Don't you think a feature that allows coding and marking up audio/video files would be very handy? This is absolutely my wish for future versions of Scrivener.
Tue May 06, 2014 1:03 pm Post
reepicheep wrote:I'd never consider Scrivener the equal of NUD*IST, let alone of NVivo. Too much manual intervention is involved. If it were the only tool at my disposal, I'd grab other tools for specialised functions: NLTK for the grammar/textual analysis, Lucene for fast retrieval of texts, one of the concordancing applications for collocations (I use AntConc), and the R Project's R to handle the statistical analysis with one of the re-implementations of Varbrul. But it would be a lot of work to manage all that. It might prove simpler and easier to write an open-source NVivo work-alike.
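For anyone tempted by that multi-tool route, here is a minimal Python sketch of just the concordance and collocation steps using NLTK. It is an illustration under stated assumptions, not a recipe: the transcripts/ folder, the *.txt file layout and the keyword "kinship" are hypothetical placeholders, and the Lucene retrieval and R/Varbrul statistics stages are left out entirely.

[code]
import nltk
from pathlib import Path
from nltk.text import Text
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

nltk.download("punkt", quiet=True)  # word/sentence tokenizer model (one-time download)

# Gather tokens from every plain-text transcript in the (hypothetical) folder
tokens = []
for path in sorted(Path("transcripts").glob("*.txt")):
    tokens.extend(nltk.word_tokenize(path.read_text(encoding="utf-8").lower()))

# Keyword-in-context view, roughly what a concordancer like AntConc shows
Text(tokens).concordance("kinship", width=80, lines=10)

# Collocations: bigrams ranked by log-likelihood, skipping short/non-alphabetic tokens
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_word_filter(lambda w: len(w) < 3 or not w.isalpha())
for left, right in finder.nbest(BigramAssocMeasures.likelihood_ratio, 15):
    print(left, right)
[/code]

AntConc gives you the same concordance and collocation views interactively; the point of the snippet is only that the NLTK end of such a toolchain needs very little glue code.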