Hype Compressor

Morning...sorry but I have to say it...I thought it was working, but when I try it in

https://h5validator.appspot.com/dcm/asset?result=4645636731437056

it doesn't work ;-(
In the zip there are 2 versions... the one with _2 uses your version of the .js

Have you converted since I did the fix yesterday or is the file from before? I’ll have a look today.

O okay...I will try it later today...

The one I used was from 18 hours ago..

16 European time (Netherlands)

@cartimundi please read the message I wrote for you two posts previously. You are trying to compress the runtime. That is not what the compressor is meant for currently. It is meant for the hype_generated_script and not viable for your ads because you are including the generated script in your HTML file. Hence, the compressor in its current form isn't for your type of file distribution. I'll let you know once the compressor supports runtime compression and/or compression of inlined generated script.

Okay thanks... please let me know when compressor 2.0 is ready :wink:

There are two versions of the compressor.
Some thoughts and hints:

:green_circle: The first one is useful in most cases: it reduces the overall size of symbols and strings by using a lookup table. This is a simple and trivial rearrangement of the data structure describing your Hype file. Putting such a symbol-compressed file in a ZIP container results in a smaller file than the unprocessed one (both the overall project ZIP size and mod_deflate benefit). So, this kind of optimization nearly always makes sense, but it relies on regular symbols or similar text strings being used across scenes to show results.
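To illustrate the idea, here is a minimal sketch of lookup-based deduplication. The names `compressStrings`/`decompressStrings` are illustrative assumptions, not the actual HypeCompressor internals: repeated strings are stored once in a table and referenced by index.

```javascript
// Store each distinct string once; replace occurrences with table indices.
function compressStrings(values) {
  var table = [];
  var refs = values.map(function (value) {
    var idx = table.indexOf(value);
    if (idx === -1) idx = table.push(value) - 1; // push returns the new length
    return idx;
  });
  return { table: table, refs: refs };
}

function decompressStrings(packed) {
  return packed.refs.map(function (idx) {
    return packed.table[idx];
  });
}

// A repeated SVG string across scenes is stored only once:
var packed = compressStrings(['<svg>…</svg>', 'title', '<svg>…</svg>']);
// packed.table → ['<svg>…</svg>', 'title'], packed.refs → [0, 1, 0]
```

The rearranged structure is also friendlier to a later ZIP pass, since the remaining redundancy is smaller and more regular.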

:orange_circle: The second compressor, currently called "ZIP-EDITION", needs some more evaluating, and that is what this section is mainly about: the ZIP-EDITION compressor can now also handle any JS library (in that case it doesn't add the decompressor (4.4 kb) and only returns the JS in a GZIP-compressed and Base64-encoded form). The logic thus far: if you need smaller files, the ZIP-EDITION compressor can produce them through additional compression.

The result is a binary file that can't be embedded into an ASCII file (as half of that ASCII syntax is reserved). Hence, we need to encode the full byte array into an ASCII-compatible realm. The most commonly used and MIME-compatible way of doing this is Base64. This adds 33% overhead, as we need more bytes to describe the binary data. The compressor also splits the encoded output into chunks of at most 500 characters (in the spirit of RFC 2045's line-length limit); this adds around 3% of newline and quote data. Then we need the decompressor at around 4.4 kb. All this adds up, but in most cases we are still smaller than the original file at face value.

Meaning: if you need a smaller file, this is for you, but zipping this file produces a bigger file than zipping the original. Hence, if you need smaller individual files to pass some test or client demand, if your goal is reduced storage, or if some added obfuscation is needed, this compression is for you. On the other hand… if the overall ZIP size of your entire project is the optimization goal, refrain from using ASCII-based file compression.
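The 33% figure follows directly from how Base64 works: every 3 bytes of binary input become 4 ASCII output characters. A quick sanity check in Node (any Base64 encoder behaves the same way):

```javascript
// 3 bytes in → 4 characters out, i.e. +33% overhead.
var binary = Buffer.alloc(3000, 0xff); // stand-in for a gzipped payload
var encoded = binary.toString('base64');

console.log(binary.length);  // 3000 bytes in
console.log(encoded.length); // 4000 characters out
```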

:red_circle: If your server uses mod_deflate (as mentioned by @jonathan), I would also only use the symbol compression version, as the on-the-fly zipping applied on top of the additional Base64 overhead most probably doesn't yield better results. The symbol compression is respected by mod_deflate, so you gain its benefits. Another approach, currently not implemented, is using a more efficient binary-to-ASCII encoding. There is ASCII85 (only adds 25% overhead) and yEnc (adds only 7–10% overhead). Neither of these is MIME-certified, but I will try to add them in the future anyway as an option if viable (they are used in USENET protocols and elsewhere). Hence, they are broadly used and would bring the compressor much closer to native ZIP values, but because of the missing certification I mentioned this under the red dot.
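Back-of-the-envelope arithmetic using the overhead figures from this post (the exact numbers depend on content and line splitting):

```javascript
// Encoded size of a payload at a given relative overhead.
function encodedSize(bytes, overhead) {
  return Math.round(bytes * (1 + overhead));
}

var payload = 100000; // e.g. 100 kb of gzipped data
console.log(encodedSize(payload, 0.33)); // Base64:              133000
console.log(encodedSize(payload, 0.25)); // ASCII85:             125000
console.log(encodedSize(payload, 0.10)); // yEnc (upper figure): 110000
```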


Eventually, I will consolidate the compressor versions and add the different optimizations as toggle switches (if time allows it).


This project can now be found on GitHub

Other great news! I just ran a simple test of loading a binary ZIP file using an XMLHttpRequest and decoding it. I reduced the decoder to single files only, so no folders are supported, but for the purpose of decoding hype_generated_script, the runtime, or arbitrary JavaScript, that is sufficient!


```javascript
var zipReq = new XMLHttpRequest();
zipReq.open("GET", "${resourcesFolderName}/someCompressedJS.zip", true);
zipReq.responseType = "arraybuffer";
zipReq.onload = function (oEvent) {
  var arrayBuffer = zipReq.response;
  if (arrayBuffer) {
    var byteArray = new Uint8Array(arrayBuffer);
    // decode JS to console for this binary file test
    console.log(HypeCompressor.decompressArray(byteArray));
  }
};
zipReq.send(null);
```

It works flawlessly… hence, native ZIP data directly from the binary source will now become an option. No ASCII overhead. HypeCompressor will be updated very soon with a new API, similar to…

v1.0.4 preview:

  • decompressArray
  • runBinary (files could end in .dat)

:smiley:


Whoa, that's an awesome technique!


It gets even better… one can ZIP the resource folder and read from it. For example, I use HypeResourceLoad to return images. I reinstated the recursive nature of Util.Unzip for that in HypeCompressor.js, but it only costs 1 kb of extra data on the library, so it's worth it:

```javascript
// `data` is assumed to hold the unzipped entries as [content, filename] pairs
function HypeResourceLoad(hypeDocument, element, event) {
	if (!data) return;
	var file = event.url.split('/').pop();
	var type = file.toLowerCase().split('.').pop();
	for (var i = 0; i < data.length; i++) {
		if (data[i][1].substr(0, 1) == '.') continue; // skip hidden files
		if (data[i][1] == file) {
			switch (type) {
				case 'jpg':
				case 'png':
					return 'data:image/' + type + ';base64,' + window.btoa(data[i][0]);
			}
		}
	}
}

if ("HYPE_eventListeners" in window === false) { window.HYPE_eventListeners = Array(); }
window.HYPE_eventListeners.push({ "type": "HypeResourceLoad", "callback": HypeResourceLoad });
```

Now, I ask myself what resources can be handled directly from a ZIP given that approach.

There is also another idea, but it requires a modern browser: filling the cache with the files from the ZIP. However, the Cache API isn't supported in IE11. I hate that browser so much… and it probably should/could be disregarded.
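A hedged sketch of what that could look like (not part of HypeCompressor; the `entries` shape and the cache name are assumptions): unzipped files are written into a named cache via the Cache API, so later requests for those URLs can be answered from it.

```javascript
// Prime a named cache with unzipped files so later fetches can hit it.
// Assumed entry shape: { name: 'image.png', bytes: Uint8Array, type: 'image/png' }
async function primeCacheFromZip(entries) {
  var cache = await caches.open('hype-zip-v1');
  for (var entry of entries) {
    var response = new Response(entry.bytes, {
      headers: { 'Content-Type': entry.type }
    });
    await cache.put('/resources/' + entry.name, response);
  }
}
```

Serving from that cache would then typically happen in a service worker's fetch handler; as noted above, none of this exists in IE11.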


Oh wow, you're right that could be very interesting. I would imagine that ultimately making an optimum export boils down to trying different strategies and determining the winner; it is nearly impossible to know ahead of time how items will compress beyond generalities. A video may rarely make sense to compress this way (they are large, can't take advantage of HTTP offset requests, etc.) but if it is a single small video in a sea of compressible SVGs then it may make sense to lump together...


I updated the JavaScript on GitHub to 1.0.4

Super!

is the browser link the same? ( I'm a zero in code ;-(
don't know what to do with the GitHub

https://playground.maxziebell.de/Hype/Compressor/

No, that is the symbol compressor. Recent updates on GitHub are for programmers and only relevant for zipping. Please read the first post in this thread and the README. The zero-fat ZIP loader requires a request, so it can't be used with file:// (CORS). I might do an export script for it (allowing direct exports and previews)… also, the ZIP-Edition drag'n'drop online version I have on my server doesn't support it yet, but everyone has a zip application on their computer anyway. Implementing full-ZIP loading can be done with code and is documented on GitHub.

Okay I’m gonna try

@cartmuni let me link this example of a file he sent me to play around with. I put it on my server: https://playground.maxziebell.de/Hype/Compressor/ziptest/

The ad folder only contains an index.html and runs directly from the zip-file:

He won't be using this as it requires some manual/technical steps, but I thought it was a fun demo to show that you can run entirely from a ZIP-file.


PS: This example uses HypeCompressor.min.js in an embedded way at only 5.2 kb (8 kb with HTML).
It could also be linked from a Content Delivery Network (CDN).

Hi Max, I tried the compressor approach on a document with two symbols. Google Lighthouse reported better performance: from 86 for the normal document to 99 for the compressed one. You can find it here:
Cannabis FR uncompressed
Cannabis FR Compressed

Let me know if you'd like to see the complete report. Duplicated symbols would probably profit even more from this technique, but I was surprised anyway. Good work!


Hello Olof,

that is great. Thanks for the feedback. Looking at the source, I saw that there seems to be a slight oversight in the string compression (not the symbol compression), as seen in the following screenshot. This is not a big issue, but once I fix the lookup routine, it should only contain the repeating SVG string once. My guess is that the lookup key comparison fails because of the escaped character. So, I need to make sure both ends of the lookup comparison are escaped or unescaped the same way.

The small glitch in the string lookup is now fixed. I made a classic mistake when determining the lookup index: if a string turned up at the first index (a 0 in an array context), I only checked against a simple falsy test, !$fid ($fid being my found id), and a 0 index was also considered false, so the lookup reported "not found". That resulted in the optimization creating repeated entries for strings at index 0. Now I am checking against $fid === false. Either way, no harm done, as those strings were simply repeated as-is, which was what we wanted to fix in the first place. Hence, older symbol optimizations run just fine… newer ones will get slightly better compression from now on.
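The mistake described above is easy to reproduce (illustrative names, not the actual HypeCompressor source): a lookup that returns index 0 fails a plain truthiness check.

```javascript
var table = ['<svg>…</svg>', 'title'];

// Buggy lookup: a found index of 0 is falsy, so !fid misfires.
function findBuggy(value) {
  var idx = table.indexOf(value);
  var fid = idx === -1 ? false : idx;
  if (!fid) return 'not found'; // WRONG for index 0
  return fid;
}

// Fixed lookup: compare strictly against false instead.
function findFixed(value) {
  var idx = table.indexOf(value);
  var fid = idx === -1 ? false : idx;
  if (fid === false) return 'not found';
  return fid;
}

console.log(findBuggy('<svg>…</svg>')); // 'not found' (the bug)
console.log(findFixed('<svg>…</svg>')); // 0
```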


Interesting topics you're working on… :thinking::grinning:


Maybe someone is wondering about the cannabis project and the untidy resource folder. We used Hype for a couple of social media posts on different topics as videos and didn't use it as HTML. Hype's video export helped a lot to produce these videos in a very straightforward way. Max Ziebell's Hype Data Magic on top makes the adaptation process into six other languages a finger snap.
