If you have ever needed to resize an image in a browser, you probably know it seems very simple at first. Every modern browser supports the special <canvas> element: you draw the image onto it at the target size, and five lines of code later you have the image you wanted:
    function resize(img, w, h) {
        var canvas = document.createElement('canvas');
        canvas.width = w;
        canvas.height = h;
        canvas.getContext('2d').drawImage(img, 0, 0, w, h);
        return canvas;
    }
Afterwards you can export the canvas as a JPEG and send it to the server. So what is the catch? It all comes down to image quality. If you place such a canvas next to a regular <img> containing the same picture (source, 4 MB), you will see the difference clearly.
For some reason, all modern browsers (desktop and mobile alike) use a cheap affine transformation for drawing onto a canvas. The essence of the method: for every pixel of the final image, four points of the original image are sampled. This means that shrinking an image by more than a factor of two leaves "holes" in the original image: pixels that are never taken into account in the result. It is because of these forgotten pixels that the image quality suffers.
Of course, such a picture is not something you would show to respectable people. Unsurprisingly, the question of canvas resize quality is popular on Stack Overflow. The most common advice is to shrink the picture in several steps: if a single aggressive reduction fails to take all the pixels into account, why not reduce the image gradually, over and over again, until we reach the desired size? Here is an example.
Undoubtedly, this method produces a far better result, since every point of the original image contributes to the final one. Another question is how exactly they contribute; that depends on the step size and on the original and final dimensions. For example, with a step factor of exactly 2, each reduction is equivalent to supersampling. The last step, however, is determined by chance. Every once in a while, if you are very lucky, the last step is also a factor of two. But if you are unlucky, you may have to shrink the image by a single pixel at the last step and end up with a blurred result. Compare the one-pixel difference in the pictures below (source, 4 MB):
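The step-by-step approach can be sketched as follows. This is a hypothetical helper (the name stepSizes and its signature are ours, not from the example above): it halves the dimensions while a full halving still fits, then jumps straight to the target size, which is exactly where the unlucky "almost ×2" last step comes from.

```javascript
// Compute the chain of intermediate sizes for step-by-step reduction:
// halve while the next halving does not overshoot the target, then
// finish with one final (possibly tiny, blur-inducing) step.
function stepSizes(srcW, srcH, dstW, dstH) {
  var sizes = [];
  var w = srcW, h = srcH;
  while (w / 2 >= dstW && h / 2 >= dstH) {
    w = Math.round(w / 2);
    h = Math.round(h / 2);
    sizes.push([w, h]);
  }
  // The "unlucky" last step, if the halvings did not land exactly on target.
  if (w !== dstW || h !== dstH) sizes.push([dstW, dstH]);
  return sizes;
}
```

For instance, stepSizes(4000, 3000, 500, 375) is three clean halvings, while stepSizes(1000, 1000, 499, 499) ends with a 500-to-499 step, precisely the one-pixel reduction that blurs the result.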
Maybe we should take a completely different approach? We have a canvas we can read pixels from, and modern JavaScript engines are fast enough to do the resizing themselves. That means we can implement any resize method without relying on the browser, for example supersampling or convolution.
All we need now is to get the full-size picture onto a canvas. Here is how the ideal case might look; the implementation of resizePixels is left behind the scenes for now.
    function resizeImage(image, width, height) {
        var cIn = document.createElement('canvas');
        cIn.width = image.width;
        cIn.height = image.height;
        var ctxIn = cIn.getContext('2d');
        ctxIn.drawImage(image, 0, 0);
        var dataIn = ctxIn.getImageData(0, 0, image.width, image.height);
        var dataOut = ctxIn.createImageData(width, height);
        resizePixels(dataIn, dataOut);
        var cOut = document.createElement('canvas');
        cOut.width = width;
        cOut.height = height;
        cOut.getContext('2d').putImageData(dataOut, 0, 0);
        return cOut;
    }
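For completeness, here is what a minimal resizePixels might look like: a box-filter (supersampling) sketch of our own, assuming the source dimensions are exact integer multiples of the target dimensions; a real implementation would also handle fractional boxes. src and dst are ImageData-like objects ({ width, height, data }) with RGBA bytes.

```javascript
// Box-filter supersampling: each destination pixel is the average of the
// fx × fy block of source pixels that maps onto it.
// Assumes src.width / dst.width and src.height / dst.height are integers.
function resizePixels(src, dst) {
  var fx = src.width / dst.width;   // horizontal shrink factor
  var fy = src.height / dst.height; // vertical shrink factor
  var area = fx * fy;
  for (var y = 0; y < dst.height; y++) {
    for (var x = 0; x < dst.width; x++) {
      var r = 0, g = 0, b = 0, a = 0;
      // Sum every source pixel that falls into this destination pixel's box.
      for (var sy = y * fy; sy < (y + 1) * fy; sy++) {
        for (var sx = x * fx; sx < (x + 1) * fx; sx++) {
          var i = (sy * src.width + sx) * 4;
          r += src.data[i];
          g += src.data[i + 1];
          b += src.data[i + 2];
          a += src.data[i + 3];
        }
      }
      var j = (y * dst.width + x) * 4;
      dst.data[j] = Math.round(r / area);
      dst.data[j + 1] = Math.round(g / area);
      dst.data[j + 2] = Math.round(b / area);
      dst.data[j + 3] = Math.round(a / area);
    }
  }
}
```

Unlike the browser's affine sampling, this averages every source pixel exactly once, which is why supersampling leaves no "holes".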
At first glance it all looks commonplace and mundane. But browser makers never let us get bored. Such code does work in some cases; the trap lies in an unexpected place.
First of all, let's discuss why resizing is needed in the first place. Say, you want to reduce photos before sending them to a server, to save the user's traffic. This matters most on mobile devices with slow connections and metered data. And which photos are uploaded from those devices most often? The ones taken with their own cameras. An iPhone camera, for instance, is 8 megapixels, but it can capture a 25-megapixel panorama (even larger on the iPhone 6), and camera resolutions on Android and Windows Phone devices can be higher still. This is where you learn about mobile limitations: unfortunately, on iOS it is impossible to create a canvas larger than 5 megapixels.
Apple's reasons are pretty obvious: they have to make sure their devices work properly with limited resources. And indeed, with the function above the whole image fills memory three times! Once in the buffer attached to the Image object where the picture is decoded, a second time as canvas pixels, and a third time as the typed array inside ImageData. For an 8-megapixel image that is 8 × 3 × 4 = 96 megabytes of memory; for a 25-megapixel one, 300.
However, a series of tests showed that the problems are not confined to iOS. With some probability, Chrome on Mac would start drawing several small copies of the image instead of one big one, while on Windows it would produce a blank canvas.
But if we cannot get all the pixels at once, perhaps we can get them in parts? We can draw the picture onto the canvas in several strips, each as wide as the original image but far shorter: first the first five megapixels, then the rest. Or use two-megapixel strips, which reduces memory usage even further. Fortunately, unlike a two-pass convolution resize, supersampling works in a single pass, so we can not only read the image in portions but also process it one portion at a time. Memory is then needed only for the Image element, one small canvas (say, 2 megapixels) and the typed array, i.e. for an 8-megapixel image (8 + 2 + 2) × 4 = 48 megabytes, half as much.
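The strip geometry can be computed up front. A hypothetical helper of our own (the name stripRects and the maxPixels parameter are illustrative): given a memory budget in pixels, it returns the list of horizontal strips to draw and process one at a time.

```javascript
// Split a large image into horizontal strips, each staying under
// maxPixels (e.g. 2e6 for a two-megapixel budget), so that each strip
// can be drawn to a small canvas, read and resized independently.
function stripRects(width, height, maxPixels) {
  var stripH = Math.max(1, Math.floor(maxPixels / width));
  var rects = [];
  for (var y = 0; y < height; y += stripH) {
    rects.push({
      x: 0,
      y: y,
      width: width,
      height: Math.min(stripH, height - y) // last strip may be shorter
    });
  }
  return rects;
}
```

For the 10800×2332 panorama from the test below, a two-megapixel budget yields 13 strips of 185 rows each (the last one shorter), and only one of them lives on a canvas at any moment.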
While implementing this method, we decided to measure how long each stage takes; you can run the test yourself here. Here is what we got for a 10800×2332-pixel picture (a panorama from an iPhone).
All times are in milliseconds.

| Operation | Safari 8 | Chrome 40 | Firefox 35 | IE 11 |
| --- | --- | --- | --- | --- |
| Image load | 24 | 27 | 28 | 76 |
| Draw to canvas | 1 | 348 | 278 | 387 |
| Get image data | 304 | 299 | 165 | 320 |
| JS resize | 233 | 135 | 138 | 414 |
| Put data back | 1 | 1 | 3 | 5 |
| Get image blob | 10 | 16 | 21 | 19 |
| Total | 576 | 833 | 641 | 1243 |
Let's take a closer look at this rather curious table. The great news is that resizing in JavaScript holds up well. Yes, it is 1.7 times slower in Safari than in Chrome and Firefox, and 3 times slower in IE, but compared to the time the browser spends loading the image and extracting pixel data, it is not much.
Another remarkable detail: in no browser is the picture actually decoded by the time image.onload fires. Decoding is postponed until it is really needed, e.g. for displaying on screen or drawing onto a canvas. Safari does not even decode the image when it is drawn onto the canvas, since the canvas itself is never shown on screen; decoding happens only when pixels are extracted from the canvas.
The table shows the total time for drawing and data extraction, although in fact these operations are performed two megapixels at a time, and the script above reports each iteration separately. Looking at those per-iteration numbers, you can clearly see that although the total extraction time is almost identical in Safari, Chrome and IE, in Safari nearly all of it is spent on the first call, where the image is actually decoded, while in Chrome and IE every call takes roughly the same time, indicating the general sluggishness of moving the data around. The same applies to Firefox, to a lesser degree.
So far this approach looks promising. Let's test it on mobile devices. The testing team had an HTC 8X (W), iPhone 4s (i4s), iPhone 5 (i5) and Meizu MX4 Pro (A) on hand.
All times are in milliseconds; the two Chrome A columns are two separate runs on the same device.

| Operation | Safari i4s | Safari i5 | Chrome i4s | Chrome A | Chrome A | Firefox A | IE W |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Image load | 517 | 137 | 650 | 267 | 220 | 81 | 437 |
| Draw to canvas | 2706 | 959 | 2725 | 1108 | 6954 | 1007 | 1019 |
| Get image data | 678 | 250 | 734 | 373 | 543 | 406 | 1783 |
| JS resize | 2939 | 1110 | 96320 | 491 | 458 | 418 | 2299 |
| Put data back | 9 | 5 | 315 | 6 | 4 | 14 | 24 |
| Get image blob | 98 | 46 | 187 | 37 | 41 | 80 | 33 |
| Total | 6985 | 2524 | 101002 | 2314 | 8242 | 2041 | 5700 |
What strikes you first is the "outstanding" performance of Chrome on iOS. Until recently, all third-party browsers on iOS had to run with a JavaScript engine that lacked JIT compilation. iOS 8 finally made JIT available, but Chrome has not managed to adapt to it yet.
Another peculiarity: the two Chrome-on-Android runs differ radically in drawing time while being almost identical in every other respect. There is no error in the table; Chrome really can perform that differently. As mentioned above, browsers decode images lazily, so nothing stops a browser from releasing the memory occupied by a decoded image if it decides the picture is no longer needed. And when the picture is needed again for the next round of drawing onto the canvas, it has to be decoded again. In this case the picture was decoded 7 times, which is clearly visible in the per-strip drawing times (remember, the table shows only totals!). Under such conditions, decoding time becomes impossible to predict.
Unfortunately, the problems do not end there. We must admit the situation with Internet Explorer is a little confusing: it limits each canvas side to 4096 pixels, and the parts of the picture beyond that limit simply turn into transparent black pixels. The maximum-area limit is fairly easy to work around by cutting the picture into horizontal strips, which also saves memory, but to bypass the width limit you either have to rework the resize function considerably or merge neighboring pieces into wider strips, which only takes more memory.
At this point we decided to leave that thorny problem alone. There was also a completely crazy idea: decode the JPEG on the client ourselves, not just resize it. Cons: JPEG only, and the already poor performance of Chrome on iOS would get even worse. Pros: predictability in Chrome on Android, no size restrictions, and less memory (no need to keep copying to a canvas and back). Still, we did not choose that route, although a JPEG decoder in pure JavaScript does exist.
Part 2. Let’s return to the beginning
You may remember that at the very beginning, gradual halving gave us a very good result in the best case and a blurry one in the worst. What if we try to eliminate the worst case without changing the approach too much? Recall that the blur appears when the last step has to shrink the image by just a tiny amount. So what if we make the last step the first: reduce the image by some odd, yet-to-be-determined ratio up front, and then strictly by a factor of two from there on? At the same time, the first step must respect the 5-megapixel area limit and the 4096-pixel width limit. The code for this version turns out far simpler than for a fully manual resize.
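The "odd step first" plan might be sketched like this. This is our own hypothetical helper (the name planSteps is not from the project), and it assumes the source and target share the same aspect ratio: count how many clean ×2 steps fit, scale the target back up by that power of two to get the first intermediate size, and shrink the plan if that first canvas would break the 5-megapixel or 4096-pixel limits.

```javascript
// Plan the resize steps: one odd-ratio step first, then exact halvings.
// Returns the list of intermediate sizes, ending at [dstW, dstH].
function planSteps(srcW, srcH, dstW, dstH) {
  // Count how many clean ×2 steps fit between target and source.
  var halvings = 0;
  while (dstW << (halvings + 1) <= srcW && dstH << (halvings + 1) <= srcH) {
    halvings++;
  }
  var w = dstW << halvings, h = dstH << halvings;
  // Respect platform limits (5 MP area, 4096 px per side) for the first canvas.
  while (halvings > 0 && (w * h > 5e6 || w > 4096 || h > 4096)) {
    halvings--;
    w >>= 1;
    h >>= 1;
  }
  var sizes = [[w, h]]; // the odd-ratio first step
  while (halvings-- > 0) {
    w >>= 1;
    h >>= 1;
    sizes.push([w, h]);
  }
  return sizes;
}
```

With this ordering, planSteps(1000, 1000, 499, 499) does the awkward 1000-to-998 reduction first and then one clean halving to 499, instead of blurring the picture with a one-pixel step at the end.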
On the left is an image reduced in 4 steps, on the right one reduced in 5 steps, and it is hard to see any difference. Half the battle is won. Unfortunately, the difference between two and three steps (not to mention between one and two) is still clearly visible:
Although there is now less blur than at the very beginning, we would even say that the image on the right (produced in 3 steps) looks nicer than the left one, which is too sharp.
One could certainly tune the resize further, reducing the number of steps while bringing the average step factor closer to two, but it is important not to overdo it: browser restrictions leave no room for anything radically better. Let's move on to the next topic.
Part 3. Series of photos in a row
Resizing is a pretty time-consuming operation. If you resize pictures one right after another in a tight loop, the browser freezes and stops responding to the user. It helps to call setTimeout after each resize step. But then another problem arises: if all the images are resized simultaneously, they all occupy memory simultaneously too. This can be prevented with a queue, for example by starting the resize of the next image only once the previous one has finished. But there is a more general solution, which we preferred: the queue lives inside the resize function, not outside it. This guarantees that two images are never resized at the same time, even if resizes are requested from different places simultaneously.
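The "queue inside the resize function" idea can be sketched with a shared promise chain (our own illustration, not the project's actual code): each queued task starts only after the previous one settles, so two images are never resized, and never held in memory, at the same time.

```javascript
// Shared chain: module-level state, hidden inside the resize machinery.
var resizeQueue = Promise.resolve();

// task is a function returning a promise (e.g. one image's resize).
// Callers get a promise for their own task's result, while the chain
// guarantees strict one-at-a-time execution.
function enqueueResize(task) {
  var result = resizeQueue.then(task);
  // Keep the chain alive even if one resize fails.
  resizeQueue = result.catch(function () {});
  return result;
}
```

Usage, with someResize standing in for whatever resize routine you use: enqueueResize(function () { return someResize(img, 800, 600); }). Even if two parts of the page call enqueueResize at the same moment, the second resize begins only after the first has completed.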
Here is a full example: everything from the second part, plus the queue and timeouts before the long operations. We added a loading spinner to the page, and now it is clear that when the browser does hang, it hangs only briefly. Time to test it on mobile devices!
A small digression about mobile Safari 8 (we have no data for other versions). The file input works painfully slowly and hangs for a few seconds, because Safari creates a copy of each photo with the EXIF stripped, or generates the small preview shown inside the input. For a single photo this is tolerable and almost unnoticeable; for multiple selection it can become hell, depending on the number of photos. And all that time the page has no idea whether photos have been chosen, or even whether the file selection dialog is open at all.
Prepared for the worst, we opened the page on an iPhone and chose 20 photos. After a short pause, Safari cheerfully reported: "A problem occurred with this webpage so it was reloaded." Second attempt: same result. We envy you, dear readers: for you this paragraph is just a bit of text that will swiftly fade from memory, while for us it will forever be associated with a night of pain and suffering.
So, Safari was failing, and fixing it with the developer tools was impossible: they show nothing useful about memory usage. Full of hope, we opened the page in the iOS Simulator, and it worked fine. We watched Activity Monitor: memory grows with each picture and is never released. At least that was something, so we started experimenting. To appreciate what experimenting in the simulator means: a memory leak from one picture is invisible, and even 4-5 images are extremely hard to spot, so the only option is to pick 20 every time. You cannot drag them or shift-select; you tap 20 times. Then you stare at the task manager and wonder whether that 50-megabyte drop is a random fluctuation or the result of your latest change.
In short, after a long series of trial and error we came to a simple but very important conclusion: free everything you leave behind, as early as possible, by any means available, and allocate resources as late as possible. Relying on garbage collection is out of the question. If you create a canvas, you must eventually neutralize it (shrink it to 1×1 pixels). If you open an image, you must eventually unload it by assigning src="about:blank"; simply removing it from the DOM is not enough. If you open a file via URL.createObjectURL, you must close it with URL.revokeObjectURL.
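These rules fit into a pair of hypothetical cleanup helpers (our own names, not from the project's code), one per resource kind:

```javascript
// Shrink the canvas to a single pixel so the browser can reclaim
// its pixel buffer immediately instead of waiting for the GC.
function disposeCanvas(canvas) {
  canvas.width = 1;
  canvas.height = 1;
}

// Unload an image: replacing src forces the browser to drop the
// decoded pixels; removing the element from the DOM alone is not enough.
// If the image was opened via URL.createObjectURL, also revoke that URL.
function disposeImage(img, objectUrl) {
  img.src = 'about:blank';
  if (objectUrl) URL.revokeObjectURL(objectUrl);
}
```

Calling these as soon as each strip or photo is processed, rather than at the end of the batch, is what keeps peak memory bounded.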
After an intensive rework of the memory handling (http://jsbin.com/pajamo/9/watch?js,output), an old iPhone with 512 MB of memory started processing 50 photos and more. Chrome and Opera on Android now perform even better (an unprecedented case): they managed to process 160 20-megapixel photos, slowly, but without crashing. Desktop browsers benefited too: IE, Chrome and Safari now steadily consume no more than 200 megabytes per tab. Unfortunately, this did not help Firefox, which still spends almost a gigabyte on the 25 test images. We cannot say anything about mobile Firefox or Dolphin on Android: they do not allow selecting several files at all.
Part 4. Conclusion
As you can see, resizing pictures in a browser is difficult, painful and fascinating all at once. The result is a sort of Frankenstein's monster: a crude direct resize applied over and over to achieve at least some resemblance to the original, while tiptoeing around the subtle, unwritten restrictions of each platform. And even then, many combinations of source and target size still yield a picture that is too blurry or too sharp.
Browsers devour resources like crazy. Nothing frees itself automatically; there is no magic. In this sense it is even worse than working with compiled languages, where at least it is clear that you must free resources yourself: in JS, firstly, it is not obvious what should be freed, and secondly, it is not always even possible. Nevertheless, pacifying the appetites of most browsers is quite doable.
We deliberately left out the part about EXIF. Almost all smartphones and cameras capture photos from the sensor in the same orientation and record the real orientation in EXIF, which is why it is important to send this information to the server along with the reduced version of the picture. Fortunately, the JPEG format is simple enough that in our project we just transplant the EXIF section from the source file into the final one, without even parsing it.
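A simplified sketch of that transplant (our own illustration, not the project's actual code): scan the original JPEG bytes for the first APP1 segment (marker FF E1, where EXIF lives) and splice it into the resized JPEG right after its SOI marker. Real files may need more care, e.g. multiple APP1 segments or checking for the "Exif" signature.

```javascript
// Copy the first APP1 (EXIF) segment from srcBytes into dstBytes.
// Both arguments are Uint8Arrays of well-formed JPEG streams.
function transplantExif(srcBytes, dstBytes) {
  var i = 2; // skip the SOI marker (FF D8)
  // Walk marker segments until start-of-scan (FF DA) or end of data.
  while (i + 4 < srcBytes.length && srcBytes[i] === 0xFF && srcBytes[i + 1] !== 0xDA) {
    var len = (srcBytes[i + 2] << 8) + srcBytes[i + 3]; // length excludes the marker itself
    if (srcBytes[i + 1] === 0xE1) {
      var app1 = srcBytes.slice(i, i + 2 + len);
      var out = new Uint8Array(dstBytes.length + app1.length);
      out.set(dstBytes.subarray(0, 2));               // SOI of the resized file
      out.set(app1, 2);                               // transplanted EXIF block
      out.set(dstBytes.subarray(2), 2 + app1.length); // the rest of the resized file
      return out;
    }
    i += 2 + len; // skip to the next segment
  }
  return dstBytes; // no EXIF found: return the resized JPEG untouched
}
```

Because the segment is copied verbatim, the server (or any viewer) sees the original orientation tag without the client ever decoding the EXIF structure.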
Everything we learned and tested in the process went into the client-side resize that runs before file upload in the Uploadcare widget. The code cited in the article follows the narrative rather than the final version; a lot of error handling and browser-support work is left out. So, if you want to try it yourself, you can find the source files here.
By the way, some additional statistics: using this technique, uploading 80 photos reduced to 800×600 over a 3G network from an iPhone 5 took less than 2 minutes. Uploading the same photos at original size would have taken 26 minutes. So, obviously, it was worth it.
So, this case study has clearly shown how much the choice of resize method influences image quality.
Source: http://digitalservicescompany.com/