When the total size of the project is too large (for example, my 11 GB project), it cannot be decompressed after compression

Posts: 18
Joined: Sat May 12, 2018 3:39 am
Platform: Windows

Mon Jun 03, 2019 2:11 pm


Posts: 515
Joined: Thu May 28, 2015 9:41 pm
Platform: Windows

Tue Jun 04, 2019 8:10 am

a6611035 wrote:My total project size is 11 GB, but when backups are set to compress in the settings, or when I compress manually from the toolbar, the compressed file is 4 GB. When I open it and try to unzip it, I get an error and it cannot be decompressed. With a smaller project, decompression works fine, so I believe this is a bug.

I do not read Chinese, but Google Translate says:
My total project size is 11g, but in the settings, when setting up backup to select compression or active compression in the toolbar, the compressed file is 4g, open and unzip, prompt error, can't unzip, but when my project Decompression is no problem compared to hours, because I think this is a mistake.
I hope to fix this bug.

Google Translate is less than optimal, but it looks like a size issue with respect to the unzip algorithm being used. There shouldn't be one: a zip implementation with Zip64 support has a file-size limit of roughly 16 exabytes.
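
To put numbers on that: the original zip format stores sizes in 32-bit header fields, and the Zip64 extension widens them to 64 bits. A quick Python sketch of the two ceilings (just arithmetic, not anything Scrivener-specific):

```python
# Classic zip headers store sizes in 32-bit fields; the Zip64
# extension widens them to 64 bits.
CLASSIC_ZIP_LIMIT = 2**32 - 1   # 4,294,967,295 bytes (~4 GiB)
ZIP64_LIMIT = 2**64 - 1         # ~16 EiB

print(f"classic zip limit: {CLASSIC_ZIP_LIMIT:,} bytes")
print(f"Zip64 limit: ~{ZIP64_LIMIT / 2**60:.0f} EiB")
```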

Posts: 321
Joined: Tue Mar 07, 2017 8:28 pm
Platform: Windows

Wed Jun 05, 2019 1:52 am

the compressed file is 4g

Unless I'm mistaken, FAT32 has a 4 GB file-size limit, so it could be the drive format, not the zip algorithm.

Posts: 515
Joined: Thu May 28, 2015 9:41 pm
Platform: Windows

Wed Jun 05, 2019 11:24 am

steveshank wrote:
the compressed file is 4g

Unless I'm mistaken, FAT32 has a 4 GB file-size limit, so it could be the drive format, not the zip algorithm.

True, but the older zip format (without the Zip64 extension) could only handle files up to 4 GB. So there are at least two possible limits at play here.

I built a huge project by generating a 99-paragraph Lorem Ipsum text (the online generator was very helpful) and copying it many times into a document. I added some large pictures (gargantuan maps are useful for this), then duplicated the documents inside chapters, and duplicated the chapters many times.

Scrivener slowed incredibly at about 90 chapters of 11k words each.

At 12 GB (measured by 7-Zip), the interface simply crawls: menus respond in minutes, not seconds. Honestly, I'm amazed it still functions. I forced a backup and exited.

Scrivener's Backup (Ziptest001.zip.bak) was 4,396,392,448 bytes. 4.09 GB.

However, there should have been two backups: one when I forced "Backup Now," and a second when I closed the project. I hadn't modified anything in between, so perhaps that's the reason. Scrivener did not like exiting (I honestly don't blame it; 12 GB... sheesh).

There are errors in the backup file.

"Unconfirmed start of archive"
"Data after payload data"
"Data error"

I'm running NTFS, which does not have a 4 GB limit.

When I used 7-Zip to create a zip of the project (including its folder), the archive came out at 5.7 GB (48% compression) and took 10 minutes instead of 30.

Based on this data, the zip functions are buggy with large projects. If this is Qt code, those devs need to be warned about it.
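
As a workaround until that's fixed, any Zip64-aware archiver should do (as the 7-Zip run above shows). For example, here is a minimal sketch using Python's standard zipfile module, which writes Zip64 records when an archive or member exceeds 4 GB; zip_project is a hypothetical helper, not anything built into Scrivener:

```python
import os
import zipfile

def zip_project(project_dir: str, archive_path: str) -> None:
    """Zip a folder with Zip64 extensions enabled, so archives
    and members over 4 GB are written correctly."""
    with zipfile.ZipFile(archive_path, "w",
                         compression=zipfile.ZIP_DEFLATED,
                         allowZip64=True) as zf:
        for root, _dirs, files in os.walk(project_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the project folder.
                zf.write(full, os.path.relpath(full, project_dir))
```

(allowZip64 is already True by default in current Python; it's spelled out here only to make the point.)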

Posts: 91
Joined: Thu Dec 05, 2013 4:49 pm
Platform: Mac, Win + iOS

Fri Jun 07, 2019 7:06 pm

Nothing useful to contribute here, just wanted to say you made me look up what EB was...


And I found ZB... and then YB. :shock: :shock: :shock:

My brain now hurts.

Seriously, what single zip file could possibly be 16 EB??? What kind of computer can even process that much data??

After all that, 11GB does not seem overly large, but still...

What (primarily) text document could possibly be so huge??
You will find more evidence of the ridiculousness of humanity in the bathroom mirror than any other place in the world.

Posts: 1113
Joined: Wed Aug 27, 2014 2:06 pm
Platform: Win + iOS

Fri Jun 07, 2019 7:15 pm

Sparrowhawk wrote:What (primarily) text document could possibly be so huge??
It seems that most people with mega-sized projects are storing lots (and lots) of images.
I’m just a customer.

Posts: 4858
Joined: Fri Feb 02, 2007 5:22 pm
Platform: Mac

Fri Jun 07, 2019 7:25 pm

You run into those kinds of volumes very quickly with machine learning datasets. A single autonomous car might generate 4 TB a day, and Facebook users collectively generate 4 petabytes a day.

For Scrivener projects, the really big ones are mostly incorporating a lot of video, although someone writing a heavily illustrated book (travel, photography, art history...) will obviously have a lot of images.

Scrivener Support Team