 +====== Environment listener framework for PeerHood ======
  
 +This page describes the purpose and structure of the environment listener addition for PeerHood.
 +
 +===== Listener - purpose and definition =====
 +
 +
 +A listener in PeerHood can be defined as a dynamically loadable internal component that connects itself to a certain event source in the execution environment and receives or requests status changes of a certain device in the environment, or of the environment itself. Based on these status changes the listener then alters the state of the corresponding PeerHood component.
 +
 +The purpose of using different listeners is to obtain more specific state information about the underlying system; this state information can then be used to modify PeerHood functionality at run time. This way PeerHood can notice when a certain networking device goes offline or is removed, or when the underlying system is about to shut down. The main reason for using different system-level event listeners is to increase the error tolerance of PeerHood and to make its operation more dynamic and context aware. Error tolerance is increased by reacting appropriately to state changes of the environment, i.e. a component can be made idle if there is no device present for that particular component. Context awareness and the dynamics of PeerHood are increased by delivering more detailed information about system-level events through the event listeners. The approach is similar to the Observer pattern, but on the system level: the event sources of the environment act as subjects and the listener component in PeerHood acts as the observer. The observer gets notifications of status changes of the subject and reacts to them accordingly.
 +
 +
 +
 +
 +===== Changes to current PeerHood implementation =====
 +
 +
 +Using listeners also required some changes to the components that use them. The daemon and the networking plugins are the main users of listeners (in the future there could be more). A new interface was added so that the owners of the listeners can be used in a uniform fashion (from the listener's point of view). The interface provides the means to change the active state of the owner and to trigger its shutdown mechanisms; the component that implements this interface must also react correctly to the changes made by the listener.
 +
 +When the state of the daemon is set to passive it stops all networking plugins; the passive state of the daemon should be set only when the device running PeerHood is switched to offline mode and no networking is enabled. When a plugin is stopped it should only stop its activities; its listeners should stay active. After the device is set back to online mode the state of the daemon should be changed back to active, and when set back to active the daemon only starts the plugins again, nothing is reloaded. In case of a shutdown signal from the device the daemon should first stop all plugin activities and destroy the plugins (each plugin deals with the listeners it requested), and after that it removes the system listeners it requested itself. Currently there is no method to inform applications that use PeerHood about daemon state changes; this should still be implemented.
 +
 +When a listener detects that the monitored device has gone down, the owner needs to stop all of its activities and wait for the device to come back up. Currently this is implemented so that the threads "reset" themselves by leaving the running loop and calling the function that is executed in that thread again. Each function loops over a sleep call until the device comes up again, after which the operation of that thread continues as before. With this approach the overhead of creating new threads is eliminated because the threads are reused. In the long run the approach might generate some overhead (this needs further research) if devices are added and removed constantly, since the executed function is called multiple times. It also allows the plugin threads to maintain themselves without an external maintainer (the daemon), since the listeners of a plugin stay active when the device itself goes offline.
 +
 +===== Listener framework =====
 +
 +The current listener framework is implemented so that every listener is a dynamically loadable module located in the same folder as the other plugins; listener modules are loaded at the same time as other PeerHood plugins. Only the creator class of a listener is instantiated when the module is loaded, to avoid unnecessary memory consumption if the listener is never needed. Listener objects are created via a factory class (ListenerFactory, singleton protected) that can be accessed from any class: listeners are requested from the factory with a type string, and a listener is created if the given type matches its defined type. Every listener should automatically register itself (during construction) to the component that requested it. Any class that is going to use listeners has to implement a specific interface (which will be expanded to meet all requirements) that is used only by the actual listener to register itself to the owner object and to modify the owner's state. One class can have multiple listeners; the factory creates all listeners that match the given criteria.
 +
 +==== General structure ====
 +
 +The general structure of the listener framework addition for PeerHood is shown as a class diagram in the following picture, without the actual listener implementations. New components are shown in orange.
 +
 +The concrete implementations of the daemon and the plugins own the listeners that they request through the ListenerFactory; the concrete listeners only know about their owner. Through this reference a listener can alter the state of its owner by using the MAbstractStateConverter interface. In the other direction, the owners call their listeners to check their sources via the MAbstractListener interface; currently two main functions can be called in addition to the connect and disconnect functions. These interfaces and functions are explained in more detail in the next chapter.
 +\\
 +\\
 +{{:peerhood:peerhood_addition_-_listener_framework.png|}}
 +\\
 +
 +==== Components and interfaces ====
 +
 +=== ListenerFactory ===
 +
 +ListenerFactory is a storage for listener creator objects; it is protected as a singleton instance (//in the end, there can be only one//). It is used for creating the actual listener objects based on a type name (or a prototype name, if preferred). It creates ALL listeners that can be created for the given type. It does nothing else but call the listener creators to create objects, i.e. it acts as an errand boy and does not take part in registering the created listeners.
 +
 +== Public interface ==
 +
 +''static ListenerFactory* GetInstance()''
 +  * Get the listener factory instance; the listener factory can be used only via this.
 +\\
 +
 +''void Register(MAbstractListenerCreator* aCreator)''
 +  * Register a new listener creator object into the factory.
 +  * Used only by listener creator objects (the most convenient way is to call this via ListenerFactory::GetInstance() when the listener creator object is created).
 +\\
 +
 +''int CreateListeners(const std::string& aName, MAbstractStateConverter* aConverter)''
 +  * Create listeners matching ''aName'' for the object ''aConverter''.
 +  * Used by objects that implement the ''MAbstractStateConverter'' interface. ''aConverter'' CANNOT BE NULL!
 +  * Returns the number of listeners that were created.
 +\\
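 +
 +Put together, the public interface above corresponds roughly to the following declaration (a minimal sketch; the container member and the hidden constructor are illustrative assumptions, not the actual PeerHood source):
 +
 +<code cpp>
 +// Sketch of the ListenerFactory declaration, based on the public
 +// interface described above.
 +#include <string>
 +#include <list>
 +
 +class MAbstractListenerCreator;
 +class MAbstractStateConverter;
 +
 +class ListenerFactory
 +{
 +public:
 +    // Singleton access point; the factory is used only through this.
 +    static ListenerFactory* GetInstance();
 +
 +    // Called by listener creator objects to make themselves available.
 +    void Register(MAbstractListenerCreator* aCreator);
 +
 +    // Asks every registered creator to create a listener of type aName
 +    // for the owner aConverter; returns the number of listeners created.
 +    int CreateListeners(const std::string& aName,
 +                        MAbstractStateConverter* aConverter);
 +
 +private:
 +    ListenerFactory() {}                            // hidden constructor (singleton)
 +    std::list<MAbstractListenerCreator*> iCreators; // registered creators (assumed container)
 +};
 +</code>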
 +
 +=== MAbstractStateConverter interface ===
 +
 +Interface for objects that are going to use listeners. An implementing object is passed to ''ListenerFactory'' when requesting listeners of a certain type, and the reference is passed on to the created listeners so that each listener can register itself to its owner. The owner object's state is also changed through the functions of this interface. Currently it is implemented by the daemon and the plugins (the plugin interface inherits this interface). The interface should be expanded based on required functionality, e.g. a power saving mode.
 +
 +== Functions ==
 +''void RegisterListener(MAbstractListener* aListener)''
 +  * Used for registering a listener to the object that implements this interface.
 +  * Used only by listeners that were created at the request of the object implementing this interface.
 +\\
 +
 +''void SetState(bool aActive)''
 +  * Change the state of the object implementing this interface.
 +  * At the moment the options are binary, i.e. active and passive. Active = owner working normally, passive = owner sleeping or passively monitoring.
 +\\
 +
 +''void TriggerShutdown()''
 +  * Notify the owner that a shutdown is required (the device is going to shut down).
 +\\
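 +
 +Collected into one declaration, the interface could look roughly like this (a sketch based on the function list above; the pure virtual C++ form is an assumption about the actual header):
 +
 +<code cpp>
 +// Sketch of the MAbstractStateConverter interface as described above.
 +class MAbstractListener;
 +
 +class MAbstractStateConverter
 +{
 +public:
 +    virtual ~MAbstractStateConverter() {}
 +
 +    // Called by a newly created listener to attach itself to this owner.
 +    virtual void RegisterListener(MAbstractListener* aListener) = 0;
 +
 +    // Switch the owner between active (working normally) and passive
 +    // (sleeping / passively monitoring).
 +    virtual void SetState(bool aActive) = 0;
 +
 +    // The device is going to shut down; the owner should start its
 +    // shutdown procedures.
 +    virtual void TriggerShutdown() = 0;
 +};
 +</code>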
 +
 +=== MAbstractListenerCreator interface ===
 +
 +Interface for listener creator objects. Every implementing listener creator should return a reference to the created listener when called through this interface.
 +
 +== Functions ==
 +''MAbstractListener* CreateListener(const std::string& aName, MAbstractStateConverter* aConverter)''
 +  * A listener connected to this listener creator object is created via this function. The object is created only if the given prototype name ''aName'' corresponds to the __hardcoded__ type name.
 +  * ALWAYS creates a new listener.
 +    * It might be good to protect listeners with a singleton and just register the one instance to multiple owners.
 +    * On the other hand this might cause concurrent usage issues!
 +  * Should return a pointer to the created listener, or NULL if the prototype name was wrong.
 +\\
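 +
 +As a sketch, the creator interface and one possible concrete creator could look like the following (the ''CBluezBTListenerCreator'' name, the ''bt'' type string and the self-registration in the constructor are illustrative assumptions based on the descriptions on this page):
 +
 +<code cpp>
 +#include <cstddef>
 +#include <string>
 +#include "ListenerFactory.h"     // assumed header names
 +#include "CBluezBTListener.h"
 +
 +class MAbstractListener;
 +class MAbstractStateConverter;
 +
 +class MAbstractListenerCreator
 +{
 +public:
 +    virtual ~MAbstractListenerCreator() {}
 +
 +    // Returns a new listener if aName matches the hardcoded type name,
 +    // otherwise NULL. Always creates a new instance on a match.
 +    virtual MAbstractListener* CreateListener(const std::string& aName,
 +                                              MAbstractStateConverter* aConverter) = 0;
 +};
 +
 +// Hypothetical concrete creator for the Bluetooth listener.
 +class CBluezBTListenerCreator : public MAbstractListenerCreator
 +{
 +public:
 +    CBluezBTListenerCreator()
 +    {
 +        // Make this creator available to the factory.
 +        ListenerFactory::GetInstance()->Register(this);
 +    }
 +
 +    MAbstractListener* CreateListener(const std::string& aName,
 +                                      MAbstractStateConverter* aConverter)
 +    {
 +        if (aName == "bt")                            // hardcoded type name (assumed)
 +            return new CBluezBTListener(aConverter);  // registers itself to aConverter
 +        return NULL;
 +    }
 +};
 +</code>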
 +
 +=== MAbstractListener interface ===
 +
 +Interface for device / environment listeners to implement. At the moment the listener for a certain device is used only by its owner through this interface. The constructor of a listener is always called only by its listener creator! The ''MAbstractStateConverter'' reference must be passed to the created listener so that the listener can register itself to its owner by calling the ''RegisterListener(MAbstractListener*)'' method. A good place to do this is the constructor of the listener.
 +
 +== Functions ==
 +''bool Connect()''
 +  * Connects the listener to its source.
 +  * Should be called by the owner after the listener is created!
 +  * Should return true on success.
 +\\
 +
 +''void Disconnect()''
 +  * Disconnects the listener from its source.
 +  * Call before destroying the listener / when the owner is being destroyed.
 +\\
 +
 +''void CheckInitialState()''
 +  * Check the initial state of the monitored device / environment.
 +  * Polls the current state from the source.
 +  * Should change the state of the owner accordingly (i.e. when the device is not present, set the owner to the inactive state).
 +\\
 +
 +''void CheckState()''
 +  * Check the current state of the monitored device / environment.
 +  * Checks the message queue for new messages, or polls the source.
 +  * Should change the state of the owner accordingly (i.e. when the device is not present, set the owner to the inactive state).
 +\\
 +
 +''const std::string& GetName()''
 +  * Retrieve the name of the listener.
 +\\
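 +
 +Collected into one declaration, the listener interface looks roughly like this (a sketch based on the functions above):
 +
 +<code cpp>
 +// Sketch of the MAbstractListener interface as described above.
 +#include <string>
 +
 +class MAbstractListener
 +{
 +public:
 +    virtual ~MAbstractListener() {}
 +
 +    // Connect the listener to its event source; called by the owner
 +    // right after creation. Returns true on success.
 +    virtual bool Connect() = 0;
 +
 +    // Disconnect from the source; called before the listener (or its
 +    // owner) is destroyed.
 +    virtual void Disconnect() = 0;
 +
 +    // Poll the current state of the monitored device / environment once
 +    // and set the owner active or passive accordingly.
 +    virtual void CheckInitialState() = 0;
 +
 +    // Check the message queue (or poll the source) for state changes
 +    // and update the owner accordingly; called periodically by the owner.
 +    virtual void CheckState() = 0;
 +
 +    // Name of the listener.
 +    virtual const std::string& GetName() = 0;
 +};
 +</code>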
 +
 +==== Initialization of listeners in PeerHood ====
 +
 +The following image shows the initialization order and procedure of the listeners. The creators of the listeners are owned by the ListenerFactory and the actual listeners are owned by the component that requested them. Currently each request with a certain type creates new listener objects for the requesting component.
 +
 +{{:peerhood:peerhood_addition_-_listeners_-_initialization.png?direct|}}
 +
 +
 +=== Initialization order ===
 +
 +1: PeerHood internal initializations and procedures when starting.
 +
 +2: The daemon starts the initialization procedure of the plugins and listeners.
 +
 +3: The daemon loads the plugins from the specified plugin directory; all plugins ending with ''plugin.so'' are loaded as dynamic libraries.
 +
 +3.1: The daemon loads the WLAN networking plugin as a dynamic library.
 +
 +3.2: The daemon loads the listener as a dynamic library; only the creator part of a certain listener is loaded into memory in order to reduce unnecessary memory usage.
 +
 +3.3: The creator of a listener registers itself into the ListenerFactory in order to enable the creation of that listener.
 +
 +4: The daemon orders the loaded plugins to perform the necessary setup procedures (load listeners).
 +
 +4.1: The daemon calls the LoadListeners() function of every plugin.
 +
 +4.1.1: The plugin requests listeners from the ListenerFactory with a given type (aName), passing a reference to itself that will be forwarded to any created listener object.
 +
 +4.1.1.1: The ListenerFactory asks every registered listener creator to produce a listener of the given type. A listener creator returns either a reference to the created listener object or NULL if the type name does not match its listener type name. Every created listener has to register itself to its owner!
 +
 +4.1.1.2: If the given type name matches, the creator creates a new listener object and passes it the reference to the owner (the object that called the ListenerFactory).
 +
 +4.1.1.3: When created, the listener calls the RegisterListener() function of the MAbstractStateConverter interface implemented by the owner class; this adds the newly created listener to the object that was passed as a reference to the listener.
 +
 +4.1.2: Before the listeners can be used the owner must ask them to connect to their sources. This is done via the Connect() function of the MAbstractListener interface.
 +
 +4.1.3: To get the most recent information about the used networking device (or the platform, in the case of the daemon), the initial state of the device should be checked via the listener.
 +
 +4.1.3.1: The listener checks the state of the used device and changes the activity state of the owner object accordingly.
 +
 +5: The procedure is similar to 4.1.1, only a different type name is given (and a reference to the owner).
 +
 +6: The procedure is similar to the procedure presented in steps 4.1.2 to 4.1.3.1.
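 +
 +Steps 4.1 to 4.1.3.1 could look roughly like this inside a networking plugin (a sketch only; the ''CWLANPlugin'' name, the ''wlan'' type string and the ''iListeners'' member filled by RegisterListener() are assumptions):
 +
 +<code cpp>
 +#include <list>
 +#include "ListenerFactory.h"       // assumed header names
 +#include "MAbstractListener.h"
 +
 +// Hypothetical LoadListeners() of a WLAN plugin that implements
 +// MAbstractStateConverter (steps 4.1.1 - 4.1.3.1 above).
 +void CWLANPlugin::LoadListeners()
 +{
 +    // 4.1.1: request listeners of type "wlan"; the factory passes this
 +    // plugin on to every created listener, and each listener registers
 +    // itself back via RegisterListener() (4.1.1.3), filling iListeners.
 +    int created = ListenerFactory::GetInstance()->CreateListeners("wlan", this);
 +    (void)created; // the count could be used for logging or error handling
 +
 +    // 4.1.2 and 4.1.3: connect every registered listener to its source
 +    // and query the initial device state.
 +    for (std::list<MAbstractListener*>::iterator it = iListeners.begin();
 +         it != iListeners.end(); ++it)
 +    {
 +        if ((*it)->Connect())
 +            (*it)->CheckInitialState();   // 4.1.3.1: may call SetState() on this plugin
 +    }
 +}
 +</code>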
 +
 +
 +==== Current use of listeners ====
 +
 +The following image briefly shows the current usage of the listeners. The whole process of checking the messages received by the listeners is controlled by the daemon. First the daemon calls its own system listeners, which monitor the environment, to check for messages, and reacts to the changes made by the listeners. After this the daemon goes through all networking plugins and requests them to update their states, which means that the networking plugins call all of their listeners in the same fashion as the daemon did for its own listeners. These operations are conducted inside the main loop of the daemon; if there are multiple plugins and multiple listeners for each plugin (and for the daemon too) the performance of the daemon might degrade, and applications running on top of PeerHood might suffer from this. A minimal sketch of this loop is given after the steps below.
 +
 +{{:peerhood:peerhood_addition_-_listener_framework_-_listener_usage.png|}}
 +
 +1: The daemon executes the main thread in its run() function.
 +
 +1.1: On every loop of the running thread the daemon calls its system listeners to check for state updates via the MAbstractListener interface. The procedure should always call the CheckState() function of every registered listener.
 +
 +1.1.1: The listener that is being called checks the state from the source it has been connected to (by polling the source or checking the message queue, depending on the implementation). The listener changes the state of the daemon accordingly via the MAbstractStateConverter interface functions (SetState(bool) or TriggerShutdown()).
 +
 +1.2: After checking the state of the system the daemon asks the networking plugins to check for updates in their states. The daemon uses the UpdateState() function of the MAbstractPlugin interface. The procedure flow is similar to that of the daemon.
 +
 +1.2.1: The networking plugin calls the CheckState() function of the MAbstractListener interface of every registered listener.
 +
 +1.2.2: The listener checks the state of the networking adapter by checking for messages in the queue or by polling the source (implementation dependent) and modifies the functionality of the owner via the MAbstractStateConverter interface.
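 +
 +A minimal sketch of this checking loop inside the daemon's run() function (the ''CDaemon'' member names and containers are assumptions; only CheckState() and UpdateState() come from the description above):
 +
 +<code cpp>
 +#include <list>
 +#include "MAbstractListener.h"   // assumed header names
 +#include "MAbstractPlugin.h"
 +
 +// Hypothetical main loop body of the daemon (steps 1.1 - 1.2.2 above).
 +void CDaemon::Run()
 +{
 +    while (iRunning)
 +    {
 +        // 1.1: let the system listeners inspect their message queues;
 +        // they may call SetState() or TriggerShutdown() on the daemon.
 +        for (std::list<MAbstractListener*>::iterator it = iListeners.begin();
 +             it != iListeners.end(); ++it)
 +        {
 +            (*it)->CheckState();
 +        }
 +
 +        // 1.2: ask every networking plugin to run the same check for
 +        // its own listeners.
 +        for (std::list<MAbstractPlugin*>::iterator it = iPlugins.begin();
 +             it != iPlugins.end(); ++it)
 +        {
 +            (*it)->UpdateState();
 +        }
 +
 +        // ... normal daemon work (serving PeerHood requests) continues here ...
 +    }
 +}
 +</code>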
 +
 +===== Implemented listeners =====
 +
 +Currently four different listeners are implemented. These listeners use D-Bus via private connections, i.e. each of them registers to listen to certain source(s). This solution allows quite robust listeners to be built, since they only have to react when there is a message in the queue; nothing is done if the queue is empty. Currently each listener checks only the first message in the queue, i.e. one message per call, in order to consume as little execution time as possible. Since D-Bus is built on the principle “what you want is what you get”, i.e. you only get messages from the interfaces you register to, there is no additional overhead caused by generic messages that are not connected to the actual actions of that listener.
 +
 +{{:peerhood:peerhood_addition_-_listeners_framework_-_current_implementation.png|}}
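 +
 +The private connection approach can be sketched with the low-level libdbus calls roughly as follows; the interface and signal names are placeholders, not any of the actual sources used by the listeners:
 +
 +<code cpp>
 +#include <dbus/dbus.h>
 +#include <cstdio>
 +
 +// Sketch: open a private connection, subscribe to one interface and
 +// consume at most one queued message per status check call.
 +static DBusConnection* gConnection = NULL;
 +
 +bool ExampleConnect()
 +{
 +    DBusError error;
 +    dbus_error_init(&error);
 +
 +    // Private connection: this listener gets a message queue of its own.
 +    gConnection = dbus_bus_get_private(DBUS_BUS_SYSTEM, &error);
 +    if (!gConnection) { dbus_error_free(&error); return false; }
 +
 +    // Only signals from the registered interface end up in the queue.
 +    dbus_bus_add_match(gConnection,
 +                       "type='signal',interface='org.example.Interface'", &error);
 +    if (dbus_error_is_set(&error)) { dbus_error_free(&error); return false; }
 +    return true;
 +}
 +
 +void ExampleCheckState()
 +{
 +    // Non-blocking read of the socket, then take the first queued message.
 +    dbus_connection_read_write(gConnection, 0);
 +    DBusMessage* msg = dbus_connection_pop_message(gConnection);
 +    if (!msg)
 +        return;                      // queue empty: nothing to do
 +
 +    if (dbus_message_is_signal(msg, "org.example.Interface", "SomethingChanged"))
 +    {
 +        // ... here the owner state would be changed via MAbstractStateConverter ...
 +        std::printf("state change signal received\n");
 +    }
 +    dbus_message_unref(msg);         // one message per call, as described above
 +}
 +</code>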
 +
 +==== CBluezBTListener ====
 +
 +This listener connects to the BlueZ hcid daemon via the Desktop Bus (D-Bus). The purpose of this listener is to check and monitor the state of the default Bluetooth adapter. The listener responds to the ''bt'' type name and should be used by the Bluetooth plugin only. It depends on the Linux Bluetooth protocol stack (the ''bluetooth'' library) and requires that the Bluetooth development headers (''libbluetooth-dev'') and the D-Bus development headers for GLib (''dbus-glib-1-dev'') are installed. There is no development header for using the BlueZ hcid via D-Bus, i.e. when the API specification changes this listener might need changes too.
 +
 +When started, this listener connects itself to D-Bus using a private connection that is used for sending method calls and receiving signals. The listener registers itself to listen for signals emitted by the ''org.bluez.Manager'' interface.
 +
 +When the initial check is requested (CheckInitialState), the listener first requests (by method call) the default adapter from hcid via D-Bus and uses the received interface to check the mode of the adapter in use. The state of the Bluetooth plugin is changed to inactive (passive) when the default adapter is disabled (adapter present but set to off); other modes result in the active state of the Bluetooth plugin. If there is no Bluetooth adapter present, the Bluetooth plugin is set to inactive mode.
 +
 +After calling the status check function, the following signals are reacted to if received via D-Bus (as in the sketch below):
 +  * Adapter added -> set owner to active
 +  * Default adapter changed -> no action, only a debug message at the moment.
 +  * Adapter removed -> set owner to inactive
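 +
 +A hedged sketch of this dispatch inside CheckState(): the signal names follow the ''org.bluez.Manager'' interface of the hcid-era BlueZ D-Bus API and should be verified against the installed BlueZ version, and ''iConnection'' / ''iOwner'' are assumed member names of the listener.
 +
 +<code cpp>
 +#include <dbus/dbus.h>
 +
 +// Sketch of the reaction logic; iConnection is the private D-Bus
 +// connection and iOwner the MAbstractStateConverter of the BT plugin.
 +void CBluezBTListener::CheckState()
 +{
 +    dbus_connection_read_write(iConnection, 0);
 +    DBusMessage* msg = dbus_connection_pop_message(iConnection);
 +    if (!msg)
 +        return;
 +
 +    if (dbus_message_is_signal(msg, "org.bluez.Manager", "AdapterAdded"))
 +        iOwner->SetState(true);             // adapter added -> owner active
 +    else if (dbus_message_is_signal(msg, "org.bluez.Manager", "AdapterRemoved"))
 +        iOwner->SetState(false);            // adapter removed -> owner inactive
 +    else if (dbus_message_is_signal(msg, "org.bluez.Manager", "DefaultAdapterChanged"))
 +    {
 +        // no action, only a debug message at the moment
 +    }
 +
 +    dbus_message_unref(msg);
 +}
 +</code>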
 +
 +==== CMaemoBTListener ====
 +
 +This listener uses the Bluetooth connectivity D-Bus API of the Maemo environment, i.e. the btcond daemon, and should be used by the Bluetooth plugin only. The purpose of this listener is to monitor the state of the Bluetooth adapter in the Maemo environment. A different source is used than in ''CBluezBTListener'', since in Maemo the Bluetooth adapter cannot be removed, it can only be disabled. This listener depends on the Maemo environment and on the Linux Bluetooth protocol stack that is installed by default in Maemo; the mentioned Bluetooth connectivity D-Bus API is also required. Since a development header is used, this listener should not require changes when the contents of the API change.
 +
 +When started, this listener connects to btcond via a private D-Bus connection. The listener registers itself to listen for signals from ''com.nokia.btcond.signal''.
 +
 +The initial state of the Bluetooth adapter cannot be checked with this listener, since btcond provides no direct method for requesting the current state of the adapter. It is best to check the initial state of the adapter with ''CBluezBTListener''.
 +
 +When the status check is called, the following signals are reacted to if the interface of the device is correct (e.g. ''hci0'' in the case of the Nokia N810 Internet Tablet):
 +  * Device up -> set owner to active
 +  * Device down -> set owner to passive
 +
 +==== CMaemoSystemListener ====
 +
 +This listener uses the "Mode Control Entity" (mce) and "Battery Monitor Entity" (bme) services via the D-Bus API of the Maemo environment to listen for changes in the device. The listener is designed for the daemon only (it responds to the "daemon" type) and can only be used in the Maemo environment, since it depends heavily on services located only in Maemo. Development headers are provided and used for mce, but no headers exist for bme. Mce provides information about the general status of the device (online, offline, shutdown and overheating) and bme provides information about the battery state (battery low, charging etc.).
 +
 +This listener connects to D-Bus via a private connection for receiving signals and sending method calls. The listener is registered to listen for signals from the ''com.nokia.mce.signal'' (mce) and ''com.nokia.bme.signal'' (bme) interfaces.
 +
 +When the initial check is requested, a method call is sent to the mce request interface (''com.nokia.mce.request'') to get the current state of the device. Currently only two states are recognized: if the device is in ''ONLINE'' mode the owner is set to active, otherwise to passive (the ''OFFLINE'' mode is unsupported at the moment, as stated in the mce header). The battery status is not requested during the initial check (this could be added later).
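 +
 +A sketch of the initial check as a blocking method call: the service, path, interface and method names are recalled from the mce development headers (''com.nokia.mce'', ''/com/nokia/mce/request'', ''com.nokia.mce.request'', ''get_device_mode'') and should be verified against ''mce/dbus-names.h''; the ''normal'' mode string stands for the online mode mentioned above, and ''iConnection'' / ''iOwner'' are assumed member names.
 +
 +<code cpp>
 +#include <dbus/dbus.h>
 +#include <string>
 +
 +// Sketch: ask mce for the current device mode and set the daemon state.
 +void CMaemoSystemListener::CheckInitialState()
 +{
 +    DBusMessage* call = dbus_message_new_method_call(
 +        "com.nokia.mce",                 // service   (MCE_SERVICE, assumed)
 +        "/com/nokia/mce/request",        // object    (MCE_REQUEST_PATH, assumed)
 +        "com.nokia.mce.request",         // interface (MCE_REQUEST_IF)
 +        "get_device_mode");              // method    (MCE_DEVICE_MODE_GET, assumed)
 +    if (!call)
 +        return;
 +
 +    DBusError error;
 +    dbus_error_init(&error);
 +    DBusMessage* reply = dbus_connection_send_with_reply_and_block(
 +        iConnection, call, -1 /* default timeout */, &error);
 +    dbus_message_unref(call);
 +    if (!reply) { dbus_error_free(&error); return; }
 +
 +    const char* mode = NULL;
 +    if (dbus_message_get_args(reply, &error,
 +                              DBUS_TYPE_STRING, &mode,
 +                              DBUS_TYPE_INVALID) && mode)
 +    {
 +        // Only the online ("normal") mode makes the daemon active,
 +        // every other mode results in the passive state.
 +        iOwner->SetState(std::string(mode) == "normal");
 +    }
 +    dbus_message_unref(reply);
 +    dbus_error_free(&error);
 +}
 +</code>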
 +
 +When the status check is called, the listener checks the message queue and reacts to the following signals from mce:
 +  * Device mode change:
 +    * Normal mode -> set daemon to active
 +    * Flight mode (networking offline) -> set daemon to inactive
 +  * Device is shutting down -> set daemon to shut down
 +  * Device has overheated, shutting down -> set daemon to shut down
 +And to the following signals from bme:
 +  * Battery low -> set daemon to passive
 +  * Charger connected -> internal state change, wait for the battery charging signal before changing the daemon state
 +  * Battery charging -> set daemon to active (charger connected is received before this)
 +
 +
 +==== CMaemoWLANListener ====
 +
 +The WLAN listener for Maemo uses the Internet Connectivity daemon (icd) via its D-Bus API. This listener is meant to be used by the WLAN plugin only, and only in the Maemo environment, since it depends on a service located only in Maemo. A development header for icd is used to get the signal and interface definitions. Icd can provide information about the connectivity of the WLAN adapter.
 +
 +This listener connects to D-Bus and uses a private message bus for connections. The listener is registered to the ''com.nokia.icd'' interface.
 +
 +When the initial state check is requested, the listener executes a method call requesting the current status of the adapter. The method call returns an integer with value 1 if the device is on and 0 if it is offline; the state of the owner (WLAN plugin) is changed accordingly.
 +
 +When the status check is requested, the listener reacts to status change messages. If a message is related to scans the WLAN adapter has made (icd sends information about these via D-Bus), the message is discarded; the following signals are reacted to (carried as string parameters, see the sketch below):
 +  * ''CONNECTED'' -> set WLAN plugin to active
 +  * ''DISCONNECTING'' -> set WLAN plugin to inactive
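 +
 +A hedged sketch of this reaction: the ''status_changed'' signal name and the argument layout are assumptions about the icd D-Bus API (only the ''CONNECTED'' / ''DISCONNECTING'' strings come from the description above), and ''iConnection'' / ''iOwner'' are assumed member names.
 +
 +<code cpp>
 +#include <dbus/dbus.h>
 +#include <cstring>
 +
 +// Sketch: check one queued icd message and update the WLAN plugin state.
 +void CMaemoWLANListener::CheckState()
 +{
 +    dbus_connection_read_write(iConnection, 0);
 +    DBusMessage* msg = dbus_connection_pop_message(iConnection);
 +    if (!msg)
 +        return;
 +
 +    if (dbus_message_is_signal(msg, "com.nokia.icd", "status_changed"))
 +    {
 +        // Walk the string arguments and look for the connection state;
 +        // scan related messages simply contain no matching state string.
 +        DBusMessageIter iter;
 +        for (dbus_message_iter_init(msg, &iter);
 +             dbus_message_iter_get_arg_type(&iter) != DBUS_TYPE_INVALID;
 +             dbus_message_iter_next(&iter))
 +        {
 +            if (dbus_message_iter_get_arg_type(&iter) != DBUS_TYPE_STRING)
 +                continue;
 +            const char* value = NULL;
 +            dbus_message_iter_get_basic(&iter, &value);
 +            if (value && std::strcmp(value, "CONNECTED") == 0)
 +                iOwner->SetState(true);      // WLAN plugin active
 +            else if (value && std::strcmp(value, "DISCONNECTING") == 0)
 +                iOwner->SetState(false);     // WLAN plugin inactive
 +        }
 +    }
 +    dbus_message_unref(msg);
 +}
 +</code>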
 +
 +===== Issues =====
 +
 +==== Private vs. shared D-Bus connection ====
 +
 +As previously mentioned, the current listeners use D-Bus via private connections. This approach makes it possible to consume messages in a more robust way: since only certain messages arrive from certain sources to a certain listener, the first message can always be taken off the queue. The approach creates multiple connections to D-Bus and also requires multiple message queues, one for each connection. On the other hand there is no delay in message delivery and consumption, because a listener does not have to wait for other components to consume the messages in the queue. This is the approach currently used by the PeerHood listeners.
 +
 +If one shared D-Bus connection (session bus) were used among all listeners, the type and source of the first message in the queue would always have to be checked before taking it off the queue. This would result in longer delays before a message related to a certain listener reaches its actual owner, since the queue might be filled with messages belonging to other components. The worst situation would be that one component registers to an interface that repeatedly sends messages through D-Bus; if a component consumes only one message per call, other components might get their information too late (e.g. the device has been offline for a while and the owner component tried to use that device, which resulted in errors).
 +
 +Another way to solve this problem would be for every listener to take the first message from the external queue into an internal message list. The advantage would be that since the external queue provides only the topmost message, the internal message list could be iterated through at any time by any listener. This would reduce possible delays and ensure that messages are delivered to all components that need them. On the other hand this would require a lot of indirect interaction between listeners: every listener would be required to mark a message as read after processing it, and if some listener does not do this the list would grow continuously, eventually resulting in very long processing times and vast space requirements. In the ideal situation the maximum size of the list would be exactly the number of listeners present, if every listener takes one message from the external queue into the internal list during its status check and marks every message it has processed as read. This approach would require more memory than the current solution: every message would need some metadata about the listeners that have processed it, and some kind of database of the current listeners would be needed in order to check whether every available listener has processed a message so that it could be removed from the list. There could be an external process controlling this list and the message markings, but that might be a waste of available resources; the listeners could manage the list themselves.
 +
 +The current solution has its advantages over these two other scenarios: every listener is responsible for its own message queue, the queues are kept short at all times and the delay depends only on the daemon operations. Since the PeerHood daemon doesn't contain any procedures that require intensive processing, the delays should stay fairly low. The delays could be lowered with a shared connection that uses an internal list for messages; this would reduce the number of connections used and the load on the underlying system, but it would rely on every listener working correctly with the internal message list. This problem could be solved with connection specific stubs for listeners: almost everything related to the underlying connection technique used by the listeners could be defined in one abstract stub class, and the actual listeners would then only implement the methods related to processing the signals or executing the method calls needed to follow the system state changes. With this kind of approach all listeners would act in a similar fashion and it could be trusted that every listener marks the processed messages as read.
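 +
 +A rough sketch of such a stub: the D-Bus plumbing lives in an abstract base class and a concrete listener only implements the match rule and the message handling. All names here are illustrative; this is not part of the current implementation.
 +
 +<code cpp>
 +#include <dbus/dbus.h>
 +#include "MAbstractListener.h"   // the interface declared earlier on this page (assumed header name)
 +
 +// Hypothetical stub that hides the D-Bus plumbing from concrete listeners.
 +class CDBusListenerStub : public MAbstractListener
 +{
 +public:
 +    virtual ~CDBusListenerStub() { Disconnect(); }
 +
 +    bool Connect()
 +    {
 +        DBusError error;
 +        dbus_error_init(&error);
 +        iConnection = dbus_bus_get_private(DBUS_BUS_SYSTEM, &error);
 +        if (!iConnection) { dbus_error_free(&error); return false; }
 +        dbus_bus_add_match(iConnection, MatchRule(), &error);
 +        if (dbus_error_is_set(&error)) { dbus_error_free(&error); return false; }
 +        return true;
 +    }
 +
 +    void Disconnect()
 +    {
 +        if (iConnection)
 +        {
 +            dbus_connection_close(iConnection);   // private connections must be closed
 +            dbus_connection_unref(iConnection);
 +            iConnection = NULL;
 +        }
 +    }
 +
 +    void CheckState()
 +    {
 +        if (!iConnection)
 +            return;
 +        dbus_connection_read_write(iConnection, 0);
 +        if (DBusMessage* msg = dbus_connection_pop_message(iConnection))
 +        {
 +            HandleMessage(msg);                   // only this differs between listeners
 +            dbus_message_unref(msg);
 +        }
 +    }
 +
 +protected:
 +    CDBusListenerStub() : iConnection(NULL) {}
 +
 +    // Concrete listeners provide the match rule and the message handling.
 +    virtual const char* MatchRule() const = 0;
 +    virtual void HandleMessage(DBusMessage* aMessage) = 0;
 +
 +    DBusConnection* iConnection;
 +};
 +</code>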
 +
 +==== Register to source vs. poll source vs. callbacks ====
 +
 +Registering to a source is a procedure where the listener connects to some event source and adds itself to the source for receiving event messages (signals). In the case of the PeerHood listeners, D-Bus is used as the medium to deliver messages and to provide registration to the source. The component that registered to listen for event messages has a queue for incoming messages and is required to check this message queue periodically. The queue has messages only when some event has happened. This is a fairly robust method but might cause some delay between event dispatch and event handling, for example when events happen rapidly and the message queue grows. One solution would be for the component to handle all messages in the queue at once, but this might take execution time away from other components if there are lots of events. There could be a limit on the number of messages to handle, based on the event activity and the processing time the component is allowed to consume.
 +
 +Polling a source means that the current state is repeatedly requested from the event source; when D-Bus is used as the medium, the polling is executed with method calls over the D-Bus connection. With this kind of approach it is necessary to limit the number of polls within a certain time window, since it is highly unlikely that device or environment related events happen multiple times within e.g. one second. Active polling might reduce the performance of the application or even of the whole system, since every unnecessary call creates unnecessary overhead and takes execution time at both ends, the polling component and the event source. The polling component must send a request to the source, the source must process the request, and meanwhile the polling component just waits for the reply and cannot do anything else; only after receiving the response can the component continue its execution. In the case of PeerHood, active polling would require additional processing time for the listener, and this time is taken away from the daemon, which might reduce performance. On the other hand D-Bus also supports method calls that are not blocking, i.e. there is no need to wait for a response; the response is added to the message queue of the component and can be processed later when it arrives, as in the sketch below.
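 +
 +For example, a non-blocking poll could be issued over libdbus roughly as follows (service, object path, interface and method names are placeholders); the reply later appears in the same message queue that CheckState() already reads:
 +
 +<code cpp>
 +#include <dbus/dbus.h>
 +
 +// Sketch: fire a method call without blocking on the reply. The reply is
 +// delivered to the connection's message queue and can be picked up on a
 +// later CheckState() pass instead of waiting for it here.
 +bool SendNonBlockingPoll(DBusConnection* aConnection)
 +{
 +    DBusMessage* call = dbus_message_new_method_call(
 +        "org.example.Service", "/org/example/Object",
 +        "org.example.Interface", "GetStatus");
 +    if (!call)
 +        return false;
 +
 +    // dbus_connection_send() only queues the outgoing message and returns
 +    // immediately; there is no waiting as with the blocking call variant.
 +    dbus_bool_t ok = dbus_connection_send(aConnection, call, NULL);
 +    dbus_connection_flush(aConnection);
 +    dbus_message_unref(call);
 +    return ok != 0;
 +}
 +</code>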
 +
 +The callback approach is somewhat similar to source registration, since the component is required to register to the event source, but instead of using a message queue for receiving event changes, a method of the component is registered to be executed when an event happens. With D-Bus this would require that the GLib main loop is used as the main loop of the running thread, since the execution has to be transferred to the event handler function when the event happens. This would be quite an ideal solution and it has been proven to be very effective (reference?), but on the other hand, when the execution is transferred away from the PeerHood daemon it might cripple performance if multiple events happen at once, or if a long row of events carries large data structures that have to be processed by the event handler function. Another disadvantage is that the current implementation of PeerHood would need changes to its structure, and at this point that is not applicable.
 +
 +The polling method in general is not very effective; it consumes a lot of resources (reference?) compared to the other possible approaches. The presented method that sends a non-blocking call could suit many purposes better, but there is a downside to it too, because every method call has some kind of return value that has to be processed. The callback approach has its advantages: events are processed right after they happen, so there would be no delays and the application, in this case PeerHood, would have the most recent information about its environment. On the other hand the first approach, signal registration, is quite efficient and provides a good balance between execution time requirements and event processing delays. Events are not necessarily processed right after they happen, but since there are not many different components in PeerHood the delay might not be a problem, as there are not many different networking technologies embedded in one device. As shown in the picture in section [[environment_listeners#current_use_of_listeners|current use of listeners]], the processing time of the daemon is also consumed by the listener checking and the message processing of the listeners. In addition to these operations the daemon must also be able to handle requests coming from applications built on top of it. Most important is that the listeners do not take too much processing time and that the delay of message processing (between an event and the processing of its message) stays low. For the current implementation of PeerHood the first presented method, registering to sources, fits best, since no unnecessary messages are requested or sent and the execution of the daemon is not disturbed in any way. In addition, no unnecessary method calls are sent; PeerHood only acts as the receiving end in this scenario and consumes the messages (via its listeners) that are sent to it based on the registration information in the D-Bus daemon.
 +
 +==== Listener creation and listener reactions ====
 +
 +As said, the framework applies no method to prevent the creation of multiple instances of one listener; this kind of issue shouldn't depend on the framework. The functionality could be added to certain listeners if needed: use the Singleton pattern and only register the existing listener to the new component that requests it. This would also require that the listener is able to handle multiple owner components.
 +
 +This introduces a problem: does the listener update the states of all owners at once when one owner asks it to check its sources, or does it change only the status of the owner that is calling?
 +
 +The first scenario would be better from the owner components' point of view and it would also make the operation more efficient: if three components own one listener (i.e. one listener is registered to multiple owners) and each of them calls the listener periodically to check the state of the device, the messages are consumed three times faster (if there are state changes) when a solution based on a message queue is used (register to a certain source, wait for event messages and store them in a queue). This approach would make the states of the owners correspond more closely to the actual state of the device. On the other hand, if the listener uses polling to get the most recent state of the device, this approach would result in more unnecessary calls and thus unnecessary overhead. In that case timers could be used to limit the number of actual polls: the listener itself would store the result of the most recent poll and react to the owner's request based on that result instead of making a new poll.
 +
 +The second scenario would depend on storing the most recent result of a consumed message or a poll of the source in the listener and changing the state of an owner when it requests a status check. To reduce the number of unnecessary calls and message checks, the recent results should be saved in queues in order to deliver correct information to the owner components. This would add a lot of overhead and complexity to the operation of a listener: in addition to checking the source (poll or message queue check) on every status update request (CheckState()), the listener would have to check the queue of previous changes that have not yet been applied for the calling component. This might reduce the calls made by the listener but would add other tasks for it. For example, with a message queue based solution (D-Bus), if the external queue provided by the source gets a new message between every call (or more often, in which case the external queue always has a message), the internal queue used for saving messages has size **n** when processed and **n-1** after processing, where **n** is the number of owners. It is expected that every listener takes one message from the external queue and puts it into the internal message list. The list size would not be a problem, since it is the listener's responsibility to mark messages as read in order to remove consumed messages from the internal list; the listener should know how many components share its ownership.