Scrivener 2.0.2 now available

There is no harm in downloading from the web site and updating manually.

I’m not sure why this is greyed out, though; I’ve never seen that happen before. It isn’t a case of the program thinking you are already updated: it doesn’t check whether updates exist before letting you run the check, in other words. :slight_smile: Even if you run it and you are up to date, the menu item stays available so you can try again. I tried disabling my 'net connexion to see if it detects that, but it doesn’t. Does this condition persist after a relaunch?

Hello,

thank you for the fix! :smiley:

I’ve got a question.
I absentmindedly accepted the automatic upgrade, and Scrivener did indeed update to 2.0.2.

I say “absentmindedly” because I prefer to update my software manually when I can. I also like to keep the DMGs of everything I have on my Mac, just in case.

I noticed, however, that the 2.0.2 version which comes with the manual download is smaller - 10MB smaller. The inner structure of the package is slightly different, and it seems to have been modified more recently than the automatic update.
Which one is better?

Thank you! :slight_smile:

There’s no difference between the one that comes via automatic update and the one on the DMG - the automatic update just downloads a zipped-up version of the exact same file that is on the DMG. So there shouldn’t be a file size difference like that.
All the best,
Keith

Thanks, Amber. Upgrade went fine. And after the upgrade, the “problem” of the greyed-out “check for updates” item went away (relaunching alone hadn’t fixed it). So no big deal. I check for updates automatically anyway… just thought it was strange. David

Hello, sorry for the late reply.
I am afraid that they really are different.
img543.imageshack.us/img543/4435 … 1at200.png
On the left the automatic update one. On the right the DMG one.

They both seem to work, but I’m still curious about the difference.

Actually, I looked into it a little more, and it seems to come down to the zipping methods used. Because Sparkle update zip files need to be code-signed, I built a small utility program that automatically zips up Scrivener and does the code-signing. It uses the command-line zip tool to compress Scrivener, and this was the issue: I wasn’t using the -y option, so symbolic links were getting expanded into the underlying files, meaning some files were appearing twice in the archive. Now that I’ve added that option, though, the updater version of Scrivener ends up smaller rather than larger, because the zip utility compresses some image files to be smaller, even unzipped, than they are in the original, and so far I’ve had no luck preventing this even with the -n option. Who knew using the zip utility would be so complicated…?
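
For anyone following along, here is a minimal sketch of what that option changes (the archive names are just illustrative):

# Without -y: symbolic links inside the bundle are followed, so their
# targets get stored a second time and the archive bloats.
zip -r -q before.zip Scrivener.app
# With -y: symbolic links are stored as links, so nothing is duplicated.
zip -r -q -y after.zip Scrivener.app
# Compare the two archive sizes.
ls -l before.zip after.zip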

Best,
Keith

FYI on my OSX 10.6.5 system, zip -n of a jpeg does the right thing. The invocation used was

zip -n .jpg zipfilename list-of-files-to-zip
or
zip -n .jpg:.png zipfilename list-of-files-to-zip

The ‘.’ before the extension seems to be optional, so
zip -n jpg zipfilename list-of-files-to-zip
also works. The space after the -n also seems to be optional in this case; in fact, I couldn’t get it to misbehave.

This is zip 3.0; adding -v to the command line will increase the verbosity and give a commentary on each file and the compression applied to it.

Hmm, maybe it makes a difference whether you are zipping a directory or a list of files - my guess is that the -n option isn’t being applied recursively and only affects files at the top level. Try this:

  1. In the Terminal, navigate to a folder with a copy of Scrivener in it.

  2. Type:

zip -rqyn .tif:.jpg test.zip Scrivener.app

Or:

zip -r -q -y -n .tif:.jpg test.zip Scrivener.app

You’ll end up with a zip file that is 38MB. Unzip it, and the unarchived version of Scrivener is 55.6MB, whereas the original is 64.8MB - and the discrepancy is caused by the extra compression of image files. I can’t seem to find any way to prevent the zip utility from compressing files inside the /Contents/Resources folder (even -0 for “Store only” results in a file that is 55.6MB…).
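
If anyone wants to reproduce the comparison, the whole round trip looks like this (assuming the Terminal is already in a folder containing a copy of Scrivener.app):

# Zip with symlinks preserved and .tif/.jpg nominally stored uncompressed.
zip -r -q -y -n .tif:.jpg test.zip Scrivener.app
# Unpack into a scratch folder and compare the on-disk sizes.
mkdir unpacked
unzip -q test.zip -d unpacked
du -sh Scrivener.app unpacked/Scrivener.app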

All the best,
Keith

Oh, I see.

Thank you! :slight_smile:

Keith,

tl;dr you’ve got two different file extensions for tiffs, .tif and .tiff

Your no-compress list only mentioned .tif, so the .tiff files were still being compressed. It should be:

zip -q -r -y -n .tif:.jpg:.tiff test.zip Scrivener.app

Then I get
74328 -rw-r--r--  1 epo  admin  38053161 15 Dec 15:44 test.zip
whereas before I got the smaller file
74288 -rw-r--r--  1 epo  admin  38033707 15 Dec 15:48 test.zip

I discovered this in two stages. First, I changed your incantation (removing -q and adding verbose output) to:

zip -v -r -y -n .tif:.jpg test.zip Scrivener.app

This produces loads of output, so I narrowed it down to lines mentioning .tif with

zip -v -r -y -n .tif:.jpg test.zip Scrivener.app | grep tif

This was still too verbose, so I excluded all lines showing non-compressed files (those containing the string '0%'):

zip -v -r -y -n .tif:.jpg test.zip Scrivener.app | grep tif | grep -v '0%'

and got loads of .tiffs listed. Adding ':.tiff' to the list eliminated all of those; you probably also want to add ':.jpeg:.png' for safety.
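
One way to make sure no extension slips through the -n list is to enumerate what is actually in the bundle first (standard tools only; the sed expression just strips everything up to the last dot):

# Count the files in the bundle by extension, most common first.
find Scrivener.app -type f -name '*.*' | sed 's|.*\.||' | sort | uniq -c | sort -rn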

Hmm, nope, for me that still results in an unzipped copy that is 55.6MB rather than 64.8MB, even with .tiff included (and I believe I’d already checked that some of the .tif files were still being compressed).

All the best,
Keith

Does this extra compression cause any issues/degradation of the graphics files? If it’s non-destructive, maybe part of your release work flow could include using the zip utility to compress and then uncompress the application. With that as your starting point, I would assume you could get the same (smaller) installed size no matter what distribution path you choose.
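
In case it helps, a rough sketch of what that round trip would look like (the folder names are illustrative, and whether the Sparkle signing step still fits in cleanly afterwards is an open question):

# Round-trip the bundle through zip once, then ship the result everywhere.
zip -r -q -y roundtrip.zip Scrivener.app
mkdir normalized
unzip -q roundtrip.zip -d normalized
# normalized/Scrivener.app would then be the copy that goes onto the DMG
# and into the Sparkle feed, so both channels ship identical bytes.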

How odd - the files are identical (they have the same contents) but occupy different amounts of disc space.

As a sanity check I packaged the app up with tar; on unpacking, the directory sizes were indeed identical.

It is strange - if you ctrl-click on Scrivener in the Finder and choose “Compress” to create the zip file, that archive unzips to the same file size as the original. So it is just down to the command-line zip utility doing extra compression.
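
For what it’s worth, Finder’s Compress is, as far as I know, equivalent to the ditto tool rather than to zip, so scripting the Finder behaviour would look something like this:

# Create a Finder-style archive: --sequesterRsrc keeps resource forks
# (in a __MACOSX folder) and --keepParent archives the bundle itself.
ditto -c -k --sequesterRsrc --keepParent Scrivener.app Scrivener-finder.zip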

Robert - I don’t know, to be honest. I imagine there is some difference, but I don’t know how noticeable it would be - I haven’t spotted any myself. Either way, I’d rather keep the automatic update version the same as the original rather than compress it further.

All the best,
Keith

Here is my theory, after doing some examination in Path Finder and Terminal. I took two TIFF files: one had ZIP compression already applied to it, and the other had no compression at all. I wanted to see if the zip tool was actually adding zip compression to TIFF files, as unlikely as that would seem. The result of that test is: it’s not. However, I did get the curious drop in file size - not very large, but a drop. Test file one had 217,088 bytes prior to being zipped, and after a zip/unzip round trip it had 212,992 bytes.

I loaded both of these TIFF files into Photoshop layers and enabled the Difference blending mode on the top layer. In this mode, each pixel in the layer is compared with the composite of the pixel data below it; if the pixel data is precisely identical, the resulting colour will be black, RGB(0,0,0), and if there is any deviation it will be non-black. The result: a perfectly black image, meaning the two files are identical at the bitmap level, despite the drop in file size.
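
The same check can be run from the Terminal if you happen to have ImageMagick installed (the file names here are placeholders):

# Prints the count of differing pixels to stderr; 0 means the bitmaps
# are identical even though the file sizes differ.
compare -metric AE before.tif after.tif null: 2>&1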

So I went to Path Finder and examined the two files in depth. The report file generated for each was pretty much the same, excluding of course things like where the file is actually stored on the physical hard drive. Then I came across these lines:

Pre-compression: Resource Fork Size: 286 bytes logical, 4,096 bytes physical
Post-compression: Resource Fork Size: 0 bytes logical, 0 bytes physical

There you have it; that most certainly accounts for the discrepancy. With the many hundreds of TIFF files in the Scrivener.app package, even a tiny resource fork still occupies a full 4KB block on disk, so at minimum each file carries 4KB of useless data which zip is stripping out - at 4,096 bytes apiece, a few hundred files already adds up to over a megabyte of pure padding. This amount will increase depending on your Photoshop settings. I have pretty conservative settings, but if you have it set to insert icons and other meta-data into the file, this number could go up pretty quickly.

My speculation is thus: the underlying filesystem link between a file’s resource fork and data fork is considered by the zip utility to be a type of link, so when the -y flag is used to not flatten links, the resource fork gets dropped. This probably, by the way, produces a zip file that is “nicer” for Windows users to look at: it might not have that __MACOSX folder at the top which stores filesystem meta-data. Not that that is relevant to the Scrivener.app bundle.
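
Incidentally, you don’t need Path Finder to see the fork: HFS+ exposes it as a named fork right in the Terminal (the file name is just an example):

# Logical size of the data fork.
ls -l SomeIcon.tif
# Logical size of the resource fork; it reports zero bytes once the
# fork has been stripped.
ls -l SomeIcon.tif/..namedfork/rsrc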

Which is where FS tuning comes in. Reduce the allocation block size of the FS and the amount of empty space will go down.

But who really wants to rebuild an FS for this?

I can’t see why a casual or even semi-pro user would need to bother with tuning. Now, if you are capturing and writing raw HD feeds to a RAID-0, you would definitely want to tune, but in that scenario a larger block size is more advantageous at the IO level. I bet Apple just banks on the larger block size to make sure multimedia playback is skip-free for the majority of users, even though it does mean a large amount of waste.

Given that DMG is the common packaging scheme for OSX, why are zip files used at all? (BTW, we know that tar seems to work.)

Keith, sorry for fixating on the tif-file-size red herring when it was the unpack size that was wrong.

EDIT: does the file compression discussion here help? xahlee.org/UnixResource_dir/macosx.html

Actually, FS tuning is pretty key to a really responsive system. The 4K block is used because most “end user” files are well over 4K, so the slight bloat is never noticed. The other advantage of a 4K block is read/write optimization and caching: a slow drive is very good at reading a larger contiguous block of data, which can then be shoved into RAM for program use. Various media files and larger “project” files (docx) get big advantages from this.

Move to a slightly more obscure use case, though, something like Scrivener, and a smaller block size would actually be an advantage. Small chunks of text may not total 4K (most of my chunks are less than 400 words until I start compiling), so I am losing tons o’ useable space if I have 200 scriv scenes. Take it a step further: virtual memory (typically raw, or a loopback device accessed as raw) really needs the lowest block size possible. A heavily loaded web server likewise wants a small block size, to allow the drive caching and reads to be optimized around its ability to flush buffers to clients.
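
If anyone wants to put a rough number on that slack for a given bundle, here is a quick back-of-the-envelope from the shell (BSD stat, so OS X only; 4096 is the assumed allocation block size, and this counts data forks only):

# Sum the logical sizes vs. 4K-block on-disk sizes of every file.
find Scrivener.app -type f -print0 | xargs -0 stat -f '%z' |
  awk '{ logical += $1; ondisk += int(($1 + 4095) / 4096) * 4096 }
    END { print "logical:", logical, "on-disk:", ondisk, "padding:", ondisk - logical }'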

Which is my long-winded way of saying normal folks have no idea what we are talking about and really shouldn’t care. Right?

Whoa!
I never imagined my curiosity would start this sort of discussion!

I’ve read everything (very interesting), thank you. :slight_smile: