In LWUIT 1.4 we finally introduced EncodedImage, which to us was a complete epiphany in how images "should" work on mobile devices. We now also have an even newer approach to SVG, Multi-Image & Timeline. And I won't even go into LWUIT4IO, which has the FileEncodedImage and FileEncodedImageAsync options...
I will now try to explain the pros/cons and logic behind every image type and how it's created (a short code sketch creating several of these types follows the list):
- Loaded Image - this is the basic image you get when loading an image from the jar or network using Image.createImage(String)/Image.createImage(InputStream)/Image.createImage(byte[], int, int).
In MIDP, calling getGraphics() on an image like this will throw an exception (it's immutable in MIDP terms); this is true for almost all other image types as well. This is strictly a MIDP restriction and might not apply to all platforms.
The image is encoded based on device logic and should be reasonably efficient.
- RGB Image (internal) - a close cousin of the loaded image. This image is created using the method Image.createImage(int[], int, int) and receives ARGB data forming the image. It is usually (although not always) a high color image. It's more efficient than the LWUIT RGBImage but can't be modified, at least not on the pixel level.
- RGBImage (LWUIT) - constructed via the RGBImage class constructors, this image is effectively an ARGB array that can be drawn by LWUIT. On many platforms this is quite inefficient, but for some pixel level manipulations there is just no other way.
- IndexedImage/StaticAnimation - Both are deprecated and replaced by the more efficient and effective EncodedImage/Timeline. IndexedImages allow storing images using a palette array and a byte per pixel (indexed images must not contain more than 256 colors). Static animations add to that by animating frames using a line difference algorithm.
- EncodedImage - created via the encoded image static methods, the encoded image is effectively a loaded image that is "hidden". When creating an encoded image, only the PNG (or JPEG etc.) is loaded into an array in RAM. Normally such images are relatively small, so they can be kept in memory without much effect. When image information is needed (e.g. pixels, dimensions etc.) the image is decoded into RAM and kept in a weak/soft reference.
This allows the image to be cached for performance and allows the garbage collector to reclaim it when memory becomes scarce.
EncodedImage is not final and can be derived to produce complex image fetching strategies such as lazily loading an image from the filesystem.
- SVG - SVGs can be loaded directly via Image.createSVG() if Image.isSVGSupported() returns true. When adding SVGs via the resource editor, fallback images are produced for devices that do not support SVG.
- Multi-Image - This is seamless to developers, who receive a multi-image as an EncodedImage. In the resource editor one can add several images based on the DPI of the device (one of several predefined family ranges). When loading the resource file, irrelevant images are skipped, thus saving the additional memory.
Multi-images are ideal for icons or small artifacts that are hard to scale properly. They are not meant to replace things such as 9-piece image borders etc., since adapting them to every resolution or to device rotation isn't practical.
- Timeline - Timelines allow rudimentary animation and enable GIF importing using the resource editor. Effectively a timeline is a set of images that can be moved, rotated, scaled & blended to provide interesting animation effects. It can be created manually using the Timeline class.
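To make the differences above concrete, here is a minimal sketch creating several of the image types. The resource paths are hypothetical, and the SVG line is only a placeholder since the exact createSVG() arguments depend on your LWUIT version:

import java.io.IOException;
import com.sun.lwuit.EncodedImage;
import com.sun.lwuit.Image;
import com.sun.lwuit.RGBImage;

public class ImageTypesSketch {
    public static void createImages() throws IOException {
        // Loaded image - decoded immediately, fast to draw
        Image loaded = Image.createImage("/icon.png");

        // RGB image (internal) - created from an ARGB array, can't be modified afterwards
        int[] argb = new int[100 * 100];
        Image rgbInternal = Image.createImage(argb, 100, 100);

        // RGBImage (LWUIT) - keeps the ARGB array accessible for pixel level manipulation
        RGBImage pixelEditable = new RGBImage(argb, 100, 100);

        // EncodedImage - only the PNG/JPEG bytes are held in RAM until the image is needed
        EncodedImage encoded = EncodedImage.create(
                ImageTypesSketch.class.getResourceAsStream("/background.png"));

        // SVG - only where the device supports it
        if (Image.isSVGSupported()) {
            // Image svg = Image.createSVG(...); see the SVG entry in the list above
        }
    }
}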
All image types are mostly seamless to use and will just work with drawImage and the various image-related APIs, for the most part with caveats on performance etc. For animated images the code must invoke the image's animate() method (this is done automatically by LWUIT when placing the image as a background or as an icon! You only need to do it for drawImage code).
All images might also be animated in theory, e.g. my GIF implementation returned animated GIFs from the standard loaded image methods and this worked pretty seamlessly (since icons and backgrounds just work). To find out whether an image is animated you need to use the isAnimation() method. Currently SVG images are animated in MIDP, but most of our ports don't support GIF animations by default (although it should be easy to add to some of them).
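For drawImage code, here is a rough sketch of driving a possibly animated image yourself from a custom component. The wiring through registerAnimated()/deregisterAnimated() follows the standard LWUIT Animation contract, but treat it as an illustration rather than the only way to do this:

import com.sun.lwuit.Component;
import com.sun.lwuit.Graphics;
import com.sun.lwuit.Image;
import com.sun.lwuit.geom.Dimension;

// Draws an image manually; if the image is animated we register for
// animation callbacks ourselves since we aren't using it as an icon/background.
class AnimatedImageView extends Component {
    private final Image image;

    AnimatedImageView(Image image) {
        this.image = image;
    }

    protected void initComponent() {
        if (image.isAnimation()) {
            getComponentForm().registerAnimated(this);
        }
    }

    protected void deinitialize() {
        if (image.isAnimation()) {
            getComponentForm().deregisterAnimated(this);
        }
    }

    public boolean animate() {
        // advancing the image's animation; returning true triggers a repaint
        return image.isAnimation() && image.animate();
    }

    protected Dimension calcPreferredSize() {
        return new Dimension(image.getWidth(), image.getHeight());
    }

    public void paint(Graphics g) {
        g.drawImage(image, getX(), getY());
    }
}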
Performance- and memory-wise, you should read the above carefully and be aware of the image types you use. The resource editor tries to conserve memory and be "clever" by using only encoded images; while these are great for low memory, they are not as efficient as loaded images in terms of speed. Also, when scaled, these images have a much larger overhead since they need to be converted to RGB, scaled, and then a new image is created. Keeping all these things in mind when optimizing a specific UI is very important.
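One simple way to avoid that scaling overhead is to scale once and reuse the result instead of scaling inside paint(). This is a hypothetical helper, not part of LWUIT, using the standard Image.scaled() method:

import com.sun.lwuit.Image;

// Scale once up front and reuse the result; scaling an EncodedImage on every
// paint would force it to be decoded to RGB, scaled and re-created each time.
final class ScaledImageCache {
    private Image scaled;

    Image getScaled(Image source, int width, int height) {
        if (scaled == null || scaled.getWidth() != width || scaled.getHeight() != height) {
            scaled = source.scaled(width, height);
        }
        return scaled;
    }
}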
I did not try it yet, but would it be possible to design the user interface elements in SVG, then render them to different sizes using the Batik library and finally import them into the resource editor using an Ant script and the Ant task functionality? But importing an image as a multi-image? The goal would be to design the user interface elements in SVG and use the multi-image on devices without SVG support, with the whole process from SVG to resource file scripted. Would that make sense or do you see a problem? (Forgive my ignorance, but I prefer to code and automate theming rather than mess with GIMP, Photoshop, Inkscape, ... and then assemble the pieces on each change.)
The resource editor uses Batik for the fallback image generation in SVG, so effectively creating an SVG creates a multi-image as a fallback if one is defined.
Unfortunately I haven't added support for all of this to the Ant task, mostly due to lack of time.
The SVG for UI approach is slightly problematic on several fronts:
1. Most platforms don't support SVG (e.g. Android, RIM prior to 5.0).
2. Those that do often support it badly (very slow rendering, only accepting one SVG file for rendering at any given time etc.).
3. That sort of approach implies scaling.
Using a 9-piece border provides the same functionality at a cheap cost. The problem is that automating a 9-piece cut is very difficult.
I'm trying to make the resource editor into a tool both hackers and designers will be happy with (a delicate balance), e.g. the image border wizard would ideally not require Photoshop or other tools, and with ready-made templates you should have a pretty good starting point for a theme.
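For reference, applying a 9-piece image border from code might look roughly like the sketch below. The resource file and image names are hypothetical, and the argument order of the nine-image createImageBorder() variant should be checked against your LWUIT version's javadoc:

import java.io.IOException;
import com.sun.lwuit.Label;
import com.sun.lwuit.plaf.Border;
import com.sun.lwuit.util.Resources;

public class NinePieceBorderSketch {
    public static void apply(Label title) throws IOException {
        Resources res = Resources.open("/theme.res"); // hypothetical resource file
        // assumed order: top, bottom, left, right, topLeft, topRight, bottomLeft, bottomRight, center
        Border border = Border.createImageBorder(
                res.getImage("top"), res.getImage("bottom"),
                res.getImage("left"), res.getImage("right"),
                res.getImage("topLeft"), res.getImage("topRight"),
                res.getImage("bottomLeft"), res.getImage("bottomRight"),
                res.getImage("center"));
        title.getStyle().setBorder(border);
    }
}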
Hey Shai!
Thank you so much for the great framework!
I have a problem: I want to rotate an image like an animation, but not a static one; the rotation depends on the user input.
My problem is that Image.rotate() is slow. Do you have any advice on how to make this faster?
Thank you in advance!
Hi Shai Almog,
I want to get the byte[] value of a LWUIT Image object. How can I achieve that?
To get an image from the res file just call res.getImage(String) on the resource file object and you will have the Image.
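If the underlying bytes are what you're after, a rough sketch is below. It assumes the image in the resource file comes back as an EncodedImage (which the resource editor normally produces) and that getImageData() is available in your LWUIT version; a plain loaded image does not expose its original bytes:

import java.io.IOException;
import com.sun.lwuit.EncodedImage;
import com.sun.lwuit.Image;
import com.sun.lwuit.util.Resources;

public class ImageBytesSketch {
    public static byte[] getBytes(String imageName) throws IOException {
        Resources res = Resources.open("/resources.res"); // hypothetical file name
        Image img = res.getImage(imageName);
        if (img instanceof EncodedImage) {
            // the original PNG/JPEG data kept by the encoded image
            return ((EncodedImage) img).getImageData();
        }
        return null; // not an EncodedImage - no original byte[] to return
    }
}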