Mostly for SEARCH efficiency

db
dbc183c7
Posts: 146
Joined: Sat Oct 25, 2014 12:59 am
Platform: Windows

Tue May 12, 2020 4:36 pm Post

1) It seems, from another post, that larger files are handled more quickly than smaller ones? (The file open/close operations are mostly an issue for indexing, not actual searching?)

2) Do deeply nested branches (three or more levels from the trunk of the tree) have a relatively negative impact on the textbase/docbase, compared with an unbranched one? (Do subdocuments 'count' as branches?)

3) Does heavy branching at the trunk (25+ main branches) have a relatively negative impact, compared with nothing beyond DRAFT and RESEARCH?

(Overall, ~ 25k files in a 430 MB project.)

- Windows 10 64b; i7 7700, 2T main storage, 32G RAM;
Scrivener 1.9.16

kewms
Posts: 7154
Joined: Fri Feb 02, 2007 5:22 pm
Platform: Mac

Tue May 12, 2020 4:41 pm Post

Scrivener is not intended to be a dedicated database program. If you are asking these kinds of questions, your use case is probably outside of Scrivener's intended scope.

Scrivener creates a separate search index as you work, and searches that index rather than crawling the Binder directly. So the exact structure of the Binder is less relevant than the overall number of words in the project.
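(Scrivener's actual index format isn't public, so the following is only a generic illustration of the principle Katherine describes: with a prebuilt inverted index, a search is a lookup against the index itself, so cost scales with total word count rather than with how documents are nested in the Binder. All names here are hypothetical.)

```python
# Illustrative sketch only -- not Scrivener's real implementation.
# An inverted index maps each word to the documents containing it,
# so lookup cost does not depend on folder depth or branching.
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, word):
    """A single dictionary access, regardless of Binder structure."""
    return index.get(word.lower(), set())

docs = {
    "research/grotius": "Grotius on international law and the ius gentium",
    "draft/chapter1": "notes on common law foundations",
}
index = build_index(docs)
print(search(index, "law"))  # both documents contain "law"
```

Building the index does touch every file (which is where file open/close overhead would show up); searching afterwards does not.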

Katherine
Scrivener Support Team

db
dbc183c7
Posts: 146
Joined: Sat Oct 25, 2014 12:59 am
Platform: Windows

Fri Feb 12, 2021 8:21 pm Post

Thank you kewms -- My very slow response here is mostly due to my awaiting the Win 3.x Scrivener. :oops:
I promise this is my last post (to you-ish) on this.

In broad brush strokes, would indexing and/or lookup be more efficient were the text in one large document rather than, in my case, some 20,000? (I do understand the organizational advantage of having several focused texts instead.)

[Windows 10 x64, build 19042 (system info), updated; 32GB, i7.
Scrivener 1.9.16, updated.]

kewms
Posts: 7154
Joined: Fri Feb 02, 2007 5:22 pm
Platform: Mac

Fri Feb 12, 2021 9:54 pm Post

What exactly is your ultimate goal?

In my opinion, Scrivener is not the appropriate tool for managing a database of 20,000 files, regardless of how that database is organized.

Katherine
Scrivener Support Team

db
dbc183c7
Posts: 146
Joined: Sat Oct 25, 2014 12:59 am
Platform: Windows

Mon Feb 15, 2021 7:44 pm Post

Thank you Katherine.

What exactly is my ultimate goal?

Serendipity.
Searching for Augustinus' writings, I'd like to discover unsuspected secular considerations of jus bellum and national traitors; searching for Grotius on international law, encounter the foundations of common law, the Roman ius gentium, and Roman Catholic Natural Law.
Search, find, link. (Naturally, once found, re-referencing will often be quicker by labels and keywords.)

Functionally, it is a matter of discovering (ir)relevant :wink: references as quickly as possible for the digressive way I write (Herodotean).

I know I have a lot of files (recounted, 25k docs), ranging in size from maybe 15 bytes to Montesquieu's Spirit of the Laws, ~ 272KB (takes 4-5 seconds to load).
On the one hand, I could split files like Laws into smaller files; on the other, combine epigrams.
The third option, of course: fewer files. (Or is it data?)

g