I'm developing a Symbian app that uses the phone's camera to capture an image and then uses image processing to decide whether the image quality is good enough to match certain criteria. The aim is to do this without making the user press a "capture image" key every time.
To keep the photographed object sharp, only a small rectangle in the center of the view is processed; the rest of the image data (at least 50% of it) can be ignored.
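For reference, the cropping I have in mind looks roughly like this (a plain C++ sketch; `Rect` and `CenterCrop` are just my own illustrative names, not anything from the SDK):

```cpp
#include <cassert>
#include <algorithm>

// Minimal rectangle type for the sketch (on the device this would be a TRect).
struct Rect { int x, y, w, h; };

// Compute a crop rectangle of size cropW x cropH centered in an imgW x imgH image,
// clamped so it never exceeds the image bounds.
Rect CenterCrop(int imgW, int imgH, int cropW, int cropH)
    {
    cropW = std::min(cropW, imgW);
    cropH = std::min(cropH, imgH);
    return Rect{ (imgW - cropW) / 2, (imgH - cropH) / 2, cropW, cropH };
    }
```

The pixels outside that rectangle would simply never be touched by the quality check.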
As a starting point I took the Camera Example v3.0; my test device is a Nokia N79 (S60 3rd Edition, FP2).
So much for the project, now the problem:
The unsolved problem is the resolution of the image that is to be processed. I tried two different approaches and both failed:
- Using the viewfinder data
I read in the forum that the viewfinder only produces images at, at most, the display's resolution:
It seems to me that when a bigger viewfinder resolution is requested, for example

Code:
iViewFinderSize = TSize(1280,960);
//...
TRAPD(err2, iCameraWrapper->StartViewFinderL(iViewFinderSize));

the viewfinder image is just interpolated up to the desired size, e.g. from 240x320. As the image processing in my app needs a much bigger image than 240x320 pixels, the viewfinder can't be used for this...
Am I wrong? Is there a way to get viewfinder images in higher resolutions that are not interpolated?
- Capturing images
To get bigger images I tried capturing an image at my desired resolution. In order not to get too much data, my first thought was to use the format CCamera::EFormatMonochrome (8-bit grayscale). But CCameraEngine::PrepareL() leaves with error -5 (KErrNotSupported), meaning this mode is not supported by the phone.
Anybody have any ideas why the mode is not supported?
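What I'd try next is querying TCameraInfo::iImageFormatsSupported (filled in by CCamera::CameraInfo()) before calling PrepareL(). The check itself is just a bitmask test, roughly like this portable sketch (the enum values below are made up for illustration; the real flag values come from the ECam headers):

```cpp
#include <cassert>

// Illustrative stand-ins for CCamera::TFormat bit flags -- the actual
// numeric values on the device are defined by the SDK, not these.
enum TFormat
    {
    EFormatMonochrome       = 0x0001,
    EFormatJpeg             = 0x0010,
    EFormatFbsBitmapColor4K = 0x0080
    };

// True if aFormat is advertised in the supported-formats bitmask,
// as TCameraInfo::iImageFormatsSupported would report on the device.
bool FormatSupported(unsigned aSupportedMask, TFormat aFormat)
    {
    return (aSupportedMask & aFormat) != 0;
    }
```

That way the app could fall back to a bitmap format gracefully instead of trapping the leave from PrepareL().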
So I had no choice and used one of the CFbsBitmap modes, CCamera::EFormatFbsBitmapColor4K.
However, now no matter how small the capture size is set, for example

Code:
iCaptureSize = TSize(640,480);
//...
TRAPD(err, iCameraWrapper->PrepareL(iCaptureSize, CCamera::EFormatFbsBitmapColor4K));

it always takes about 5 s until the callback method MceoCapturedBitmapReady() is called.
Is there a way to find out how the camera sensor works? Does the camera API grab the full 5-megapixel image as a JPEG, decode it, and scale it down to the desired resolution? Could you explain why the bitmap capturing process takes so long?
Is there a way to configure the camera sensor to take smaller-resolution pictures, similar to a 10-megapixel digital camera that can be set to take "email-size" pictures at 640x480?
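If the sensor can't be reconfigured, my fallback would be to convert the captured Color4K bitmap to 8-bit grey myself before processing. Per pixel that would be something like the sketch below (assuming EColor4K is laid out as 0x0RGB with 4 bits per channel in a 16-bit word; the weights are standard Rec.601-style luma coefficients, and `Color4KToGrey` is my own name):

```cpp
#include <cassert>

// Convert one EColor4K pixel (assumed 0x0RGB, 4 bits per channel in a
// 16-bit word) to an 8-bit grey value using integer luma weights.
unsigned char Color4KToGrey(unsigned short aPixel)
    {
    unsigned r = (aPixel >> 8) & 0xF;
    unsigned g = (aPixel >> 4) & 0xF;
    unsigned b = aPixel & 0xF;
    // Expand 4-bit channels to 8 bits (0xF -> 0xFF).
    r = (r << 4) | r;
    g = (g << 4) | g;
    b = (b << 4) | b;
    // Integer luma: weights sum to 256, so the shift normalizes exactly.
    return static_cast<unsigned char>((77 * r + 150 * g + 29 * b) >> 8);
    }
```

That at least keeps the per-pixel data small for the quality check, even if it doesn't fix the capture latency.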
As you can see both approaches failed:
- viewfinder: image resolution is too small
- capturing: time to process capturing takes too long
Sorry for writing a small novel to describe the problem, but it seems kind of complex to me and I don't have a clue how to proceed.
Any help is greatly appreciated. Hope someone can help me
Thanks in advance!