I've run into a seemingly minor problem that, when I tried to solve it in a generic way, turned out to be more serious than it had first seemed.
I have a single-threaded, stand-alone Python application that handles both the UI and the engine. Both the UI functions and the engine functions can modify the state of a few shared resources (camera, timer, etc.), which is obviously problematic if they do it at the same time.
In my timer callback (i.e. engine) function I use the camera to take a photo. If you've ever looked at the source code of the camera module, you can see how it works: the callback-oriented Symbian API is made synchronous by using a CActiveSchedulerWait object, which provides the following useful features:
- It makes the current call synchronous, i.e. blocks it and does not let it proceed until AsyncStop() is called on the wait object.
- While it blocks the current call, it still lets other Active Objects run.
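As a rough plain-Python analogy (my own sketch, not the actual camera-module or Symbian code; all names here are illustrative), the pattern is a nested event loop that keeps dispatching other queued work until the completion callback flags it to stop:

```python
import collections

# Stand-in for the Active Scheduler's event queue.
event_queue = collections.deque()

def scheduler_wait(done):
    # The CActiveSchedulerWait::Start() analogue: keep dispatching other
    # events (other "Active Objects" stay alive) until AsyncStop() fires.
    while not done["stopped"] and event_queue:
        event_queue.popleft()()

def take_photo_sync(log):
    done = {"stopped": False}
    def on_capture_complete():      # the asynchronous completion callback
        log.append("photo")
        done["stopped"] = True      # the AsyncStop() analogue
    # Pretend the driver queues some unrelated UI work, then completion:
    event_queue.append(lambda: log.append("ui event ran during capture"))
    event_queue.append(on_capture_complete)
    scheduler_wait(done)            # blocks the caller, runs the queue
    return log
```

Calling `take_photo_sync([])` returns `["ui event ran during capture", "photo"]`: the UI event ran *while* the "synchronous" call was still blocked, which is exactly the behaviour described above.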
And it's exactly the second point that gives me a headache: since the UI stays responsive, any of my UI functions can be called (remember: the camera is taking a photo at this very moment), letting them access shared resources (e.g. deleting the camera object that is still being used, or deleting the timer object that is just about to be restarted, etc.).
It took me a while to realize what the problem was. Then I concluded, surprisingly (it was surprising to me, since I didn't anticipate such a problem in a single-threaded application), that I would need to do some synchronization.
The first thing I could think of was to use a built-in Python lock from either the thread or the threading module. The problem with these classes is that they are designed for inter-thread synchronization, which is not my case (my app is single-threaded): blocking on one of them would freeze my only thread for good.
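To make that concrete, here is a small illustration (plain CPython, written by me for this post, not PyS60 code) of why a standard lock cannot help in a single thread: once the only thread holds the lock, a second blocking acquire could never succeed, because no other thread exists to release it.

```python
import threading

lock = threading.Lock()
lock.acquire()                 # first acquire succeeds
# A blocking second acquire() here would deadlock the only thread,
# so probe it non-blockingly instead:
second = lock.acquire(False)   # returns False: the lock is already held
lock.release()
```

The non-blocking probe returns False, confirming that a blocking attempt would simply hang the whole application.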
Then I thought I could write the lock mechanism on my own. For example:
```python
import e32

class FifoLock:
    def __init__(self):
        self.count = -1    # -1: free; >= 0: held
        self.lockQ = []    # FIFO queue of waiting Ao_lock objects

    def acquire(self):
        # Passes if it is the first request, blocks (FIFO) if not
        self.count = self.count + 1
        if self.count > 0:
            lock = e32.Ao_lock()
            self.lockQ.append(lock)
            lock.wait()    # blocks, but other Active Objects keep running

    def release(self):
        # Wakes up the longest-waiting caller, if any
        self.count = self.count - 1
        if self.count > -1:
            self.lockQ.pop(0).signal()   # release in FIFO order
```

You can see that the first caller of acquire() does not get blocked, as opposed to all the rest. They get blocked while still letting other Active Objects work. And lockQ is a FIFO queue, so the blocks would be released in the same order the callers acquired the lock.
This solution failed to work. The problem is that e32.Ao_lock uses CActiveSchedulerWait itself: although it allows nesting (i.e. creating and waiting on new Ao_lock objects while one is already waiting), an outer lock cannot be released until ALL the locks nested inside it have been released first. In other words, with this concept I cannot implement a FIFO lock mechanism, only a LIFO one. And that is naturally exactly the opposite of what I want.
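The LIFO behaviour is easy to picture with an ordinary call stack (again a plain-Python analogy of my own, not PyS60): each nested wait is a deeper stack frame, and the outer frames can only finish after the inner ones have returned.

```python
# Each nested Ao_lock.wait() behaves like a deeper frame on the call
# stack: the outer wait cannot return until every inner wait has.
order = []

def nested_wait(depth, max_depth):
    if depth < max_depth:
        nested_wait(depth + 1, max_depth)   # an inner "wait" starts
    order.append(depth)                      # this "wait" finishes

nested_wait(0, 3)
print(order)   # [3, 2, 1, 0] -- the innermost finishes first: LIFO
```

Waiter 0 acquired first, yet it is released last, which is why the FIFO queue above cannot work on top of nested scheduler waits.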
Have you ever faced such a problem? How would you solve it? Do you know of a lock mechanism that could be used in this scenario (i.e. in a single-threaded context)?
Another option I'm now considering is to move the engine code into a separate thread so that the standard locks can be used between the threads. Though it might seem obvious (hey, I'm from the Symbian world, where using more than one thread is very rare), I'm still trying to avoid it.
Thanks for reading this long, long post. Do you still have some energy left to suggest something?