In the early days of the Internet, routing used the number of hops as the main metric for path selection. This was a wise decision at the time, as most of the Internet was still homogeneous in terms of its links, router capacity and traffic. Soon afterwards, weights were associated with links, giving autonomous systems a new criterion for deciding on the best routes and a means to engineer and balance their traffic. The introduction of labels by MPLS provided a similar traffic engineering mechanism, capable of controlling and improving routing and service delivery through path selection.
In the new context of 4G networks, routers must deal with varying and dynamic link stability levels, security and QoS levels, and network handoff. It is for such reasons that 4G networks will certainly need to consider new routing metrics and change them according to their environment or context. In some scenarios reachability may matter more than performance, whereas QoS may become the metric of choice in other circumstances. The choice may also be service and application driven: electronic mail transfer is a store-and-forward application that mainly requires information integrity, whereas video conferencing treats low delay and bandwidth as primordial network resources.
Future 4G will certainly embrace disruptive connections, delay-tolerant networks and highly mobile users, resulting in a challenging mix of different routing metaphors and techniques thriving within a single unifying 4G architecture. A simple "one size fits all" approach to routing cannot be the way forward. Therefore 4G needs to consider multi-metric optimization following different innovative routing approaches instead of merely reusing traditional strategies. New routing and resource management insights borrowed from areas as diverse as biology, social phenomena, and random and probabilistic diffusion models are expected to lead the way ahead.
But this is nonetheless not a complete break from routing as we know it. In fact, one expects to continue making use of useful traditional concepts such as clustering and hierarchical structures to simplify, organize and improve 4G routing. Following a dynamic approach, social algorithms exchange messages to find popular nodes and establish similarity among them in order to create clusters and hierarchical structures. As in traditional routing algorithms for fixed networks, messages can be forwarded from any social node to a popular one judged to be in a better position to disseminate the information, increasing the probability of a message reaching its destination. This offers a way to increase the delivery rate but, unlike flooding, the social strategy reduces the number of message replications, as messages are only forwarded among a restricted set of nodes. Moreover, approaches such as SOLAR and (Leguay, Friedman & Conan, 2006) work by extracting location information to identify mobility patterns in order to improve routing efficiency; they rely, therefore, on an understanding of users' behaviors in terms of mobility patterns.

4G networks are expected to collaborate with each other independently of their underlying technologies. For example, a user with Bluetooth and GPRS devices can choose one technology or the other to disseminate a given type of information according to application-level criteria such as urgency and destination distance. One could use an epidemic algorithm to send a simple message to a friend through a Bluetooth interface while selecting a GPRS interface to transfer credit (possibly future money) to a distant family member.
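As a minimal sketch of the social strategy described above, the following popularity-based forwarding rule hands a message to a neighbor only when that neighbor is more "popular" (seen by more nodes) than the current carrier. The function name and the popularity score are illustrative, not taken from any specific proposal.

```python
def forward_message(current, neighbors, popularity, dest):
    """Popularity-based (social) forwarding: pass the message to the most
    popular neighbor in range, which is assumed to have a better chance
    of eventually meeting the destination."""
    if dest in neighbors:
        return dest  # direct delivery when the destination is in range
    # pick the most popular neighbor, but hand over only if it beats
    # the current carrier's own popularity
    best = max(neighbors, key=lambda n: popularity[n], default=None)
    if best is not None and popularity[best] > popularity[current]:
        return best
    return current  # otherwise keep carrying the message
```

Unlike flooding, each message follows a single custody chain towards ever more popular nodes, which bounds the number of replicas in the network.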

Advanced Scheduling Schemes and Resource Allocation

Although the fourth generation is homogeneous under the IP umbrella, the participating 4G devices may have multiple interfaces and radios. A 4G device could bind each of its interfaces to a distinct network, or use multiple interfaces to access a single network. With these features it will be possible to increase traffic bandwidth through link aggregation, execute handoff without latency or instability, and maximize the processing time available for functions such as data error correction. Hence future routing protocols cannot assume static links between device interfaces and networks; they must deal with this heterogeneity under the IP layer, together with dynamic binding. A new class of challenges emerges, mainly in terms of QoS guarantees. How, then, can the user achieve acceptable performance when using several interfaces subjected to the varying working conditions seen across several networks?
Mobile wireless devices often need to maintain data or voice communication across different access points and radio base stations. This process is known as handoff. Current cellular systems implement handoff over a single interface and only for phone calls. However, the next-generation wireless system (4G) supports seamless handoff for data traffic and should be able to manage radio resources efficiently. VoIP continuity is another requirement in LTE, especially when using the 3GPP IP Multimedia Subsystem (IMS). In such a multi-radio environment, there is room to optimize radio bandwidth usage and signal quality, and to reduce information loss.
An important role is foreseen for nontraditional routing approaches in improving future 4G communication systems. The wasp model has been shown to schedule tasks and reallocate resources following a hierarchical, threshold-based approach. Each wasp is stimulated to execute its task when its stimulus variable falls below a given threshold. This model has been applied to the dynamic routing of vehicles, a prominent component of future 4G networks. Here, each vehicle is seen as a wasp with a threshold that it waits for before finding new optimized paths. When a node receives a request from two or more wasps with the same threshold, it reserves the necessary resources and improves the routing path for the wasp with the highest hierarchy.
The wasp model may be used to solve another important problem: interface selection. The individual force variation (F) is used for decision making and to determine the best interface to use. This same wasp characteristic has also been used to model signal strength, stability, efficiency and power consumption.
Since the binding between a network and an interface can be seen as a task, wasp routing (Song, Hu, Tian & Xu, 2005) could also be a good approach to improving routing performance. Moreover, this scheme could be used to manage and improve robustness by allocating messages to different networks when some paths become unreachable.
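As an illustration of the threshold mechanism underlying the wasp model, the sketch below uses the classical response-threshold rule s²/(s² + θ²): an individual with a low threshold responds strongly to a given stimulus. Its application to interface selection here is a sketch under our own assumptions, not the cited scheme.

```python
def response_probability(stimulus, threshold):
    """Wasp response-threshold rule: the tendency to take on a task grows
    with the stimulus and falls with the individual's own threshold
    (the classical s^2 / (s^2 + theta^2) form)."""
    return stimulus**2 / (stimulus**2 + threshold**2)

def pick_interface(interfaces):
    """Choose the interface whose 'wasp' responds most strongly.
    `interfaces` maps a name to a (stimulus, threshold) pair; the stimulus
    could aggregate signal strength, stability and power consumption."""
    return max(interfaces, key=lambda i: response_probability(*interfaces[i]))
```

For example, an interface with stimulus 3.0 and threshold 1.0 responds with probability 0.9 and would be preferred over one with stimulus 2.0 and threshold 4.0 (probability 0.2).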


The 4G routing algorithms presented in this chapter are evaluated in terms of their delivery success rate, message overhead and the delay they incur. One or more scenarios are defined according to parameters chosen mostly as fixed values or as combinations of factors (varying values). The selected scenarios depict a number of situations such future networks may operate in. Not only will these metrics tell the reader more about the relationship between some existing and future routing approaches for 4G, but they are also expected to give helpful, explicit insights into routing issues and solutions in the new 4G context. As a result, the simulations span a number of configurations in an attempt to identify and separate those states that offer better performance while reducing overhead and resource requirements. For instance, a specific factor value may be set to induce chemotaxy, stigmergy, diffusion or percolation, as used in biological, social and physics modeling.
One objective is to show how a percolation stimulus may be discovered for a given 4G scenario. The next step is then to determine the routing improvement a network engineer may obtain once they understand and know how to handle the stimulus. Given that this is a case study, public-domain software and solutions are used whenever possible to allow the reader to repeat parts of the study. The algorithms and simulation tools may be freely obtained from the Web, as well as information on the traffic and topology models used in the scenarios. With this in mind, the OMNeT++ simulator, the PRoPHET algorithm and its data set (mobility and traffic models) were selected to compose the network scenario. This scenario contains 50 nodes with random mobility, and its simulation takes around 3995 seconds to run. The initial topology is shown in Figure 1.

Figure 1: Initial scenario
To evaluate percolation in the PRoPHET scenario, both the buffer size and the number of message replications were chosen as the percolation variables that should establish a percolation threshold in the model. Next, it was necessary to check whether the selected stimulus was able to percolate in this scenario. In other words, this work checked whether there is a buffer size limit and a message replication level that determine a stable, successful delivery rate in the network. In the analogy with the liquid percolation model, which establishes when a fluid starts running through a given surface, messages were associated with the fluid, and both the buffer and message replication with the surface.
Firstly, the buffer size was varied between 2 and 100, with increments ranging from 2 to 10 in steps of 2, while all other variables were kept fixed. Message replication was set to 1. Figure 2 shows the results of this first evaluation, with a stable state reached for a buffer size of 12. Hence this is the limit value of the percolation model, but the best configuration, the one with the highest delivery rate, is obtained with an even smaller buffer size of 6.
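The detection of such a stabilization point can be sketched with a small helper that scans (parameter, delivery rate) pairs and reports the first parameter value from which the rate no longer changes. The helper and the delivery-rate figures below are illustrative, not the chapter's simulation output.

```python
def find_stable_point(results, tol=0.01):
    """Given (parameter, delivery_rate) pairs sorted by parameter, return
    the first parameter value from which the delivery rate stays within
    `tol` of its final (stable) value, i.e. the percolation point."""
    if not results:
        return None
    final_rate = results[-1][1]
    for param, rate in results:
        if abs(rate - final_rate) <= tol:
            return param
    return results[-1][0]

# made-up delivery rates vs. buffer size: rising, peaking, then flattening
sweep = [(2, 0.10), (4, 0.35), (6, 0.55), (8, 0.48),
         (10, 0.47), (12, 0.45), (20, 0.45), (100, 0.45)]
# find_stable_point(sweep) returns 12: from buffer size 12 onwards the
# rate no longer moves, while the peak rate occurred earlier, at size 6
```

Note how the stabilization point and the best-performing configuration need not coincide, exactly as observed in the buffer-size experiment.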

Figure 2: Percolation of one node according to the buffer size set for all nodes
Next, the number of message replications was varied in the simulations from 2 to 100, in steps of 2; in turn, these increments were also set to values from 2 to 10 with a step of 2. The other scenario variables were kept constant, including the buffer size, set to 100. Figure 3 illustrates the results, with a stable state when the number of message copies is 24 or any larger value. Hence 24 is taken as the percolating number of copies, but the best delivery rate occurs when the number of copies is 2.

Figure 3: Percolation of one node according to the message replication set for all nodes
The variation of the buffer size and message replication of a single node was also evaluated. Firstly, the buffer size of only one node was varied from 2 to 100, with increments between 2 and 10, while all other variables were maintained. Message replication was kept at a single copy, and the buffer size of all other nodes at 100. Figure 4 presents the results of this evaluation, with a stable state reached when the buffer size is 10. This value therefore represents the limit value of the percolation model.

Figure 4: Percolation of one node according to the buffer size of that single node
Next, the effect of message replication is studied. In these simulations, a single node was selected and subjected to changes in its level of message replication. In this case, node 24 had its message replication mechanism changed to use values between 2 and 100, with increments between 2 and 10. The buffer size was set to 100 for this special node, while all other variables were left unchanged. For the other nodes, message replication was fixed at 1, i.e. a single copy was used, while their buffer size was 100. One can see from the results that when node 24 (the differentiated node in this scenario) sends 2 replications, it already obtains almost a 50% delivery rate. Percolation is achieved when the number of copies reaches 4 (Figure 5).

Figure 5: Single node percolation according to its own message replication
In a different evaluation, both the number of message replications and the number of nodes allowed to replicate messages were changed. The number of replicating nodes was incremented until it reached 50, the total number of nodes in the network scenario. The other nodes kept their default behavior, injecting only a single copy of any given message into the network. So, initially, a single node made 20 copies of the messages it was sending; in the next round of simulation two nodes could make 20 copies of a message and send them, then 3, 4 nodes, until all 50 nodes were doing this. Strikingly, the results show that even with different numbers of nodes performing the same level of replication (a heterogeneous network), when the number of copies was 50, 100 or 150, the delivery rate of node 24 exhibited the same behavior as in Figure 6 (an identical curve). There is a stable behavior when the number of copies is 50, 100 or 150. For example, when there are 3 nodes with 50, 100 or 150 copies, the delivery rate is 27%. When the number of copies for 7 nodes is 20, 50, 100 or 150, the delivery rate is 43%. This shows that extensive simulations are necessary, for any given scenario, in order to determine the minimum number of copies as well as the minimum number of replicating nodes that offer a delivery level ensuring the required quality of service, for node 24 in this case. It is clear that one needs to establish the percolation pair (replication level, number of replicating nodes) in order to ensure that:
  • All the nodes of a network are able to achieve their minimum QoS requirements in terms of message delivery in this scenario. Other QoS parameters, such as bandwidth, delay and jitter, may also be observed as reference levels to achieve;
  • The network is not clogged with unnecessary message copies, which could lead to congestion in parts of the network. One needs to consider traffic overhead and network occupation as additional metrics for choosing the right percolation pairs among those achieving similar levels of message delivery or other QoS requirements.
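The search for such a percolation pair can be sketched as a scan that prefers configurations injecting the fewest total copies. The simulator call is a placeholder (a toy rate function stands in for a full simulation run), so the numbers are purely illustrative.

```python
def find_percolation_pair(delivery_rate, target, copies_options, node_counts):
    """Scan (copies per message, number of replicating nodes) pairs in
    increasing order of total injected copies and return the first pair
    whose simulated delivery rate meets the QoS target.
    `delivery_rate(copies, nodes)` stands in for a full simulation run."""
    pairs = sorted(((c, n) for c in copies_options for n in node_counts),
                   key=lambda p: p[0] * p[1])  # fewest total copies first
    for copies, nodes in pairs:
        if delivery_rate(copies, nodes) >= target:
            return copies, nodes
    return None  # no configuration meets the target

# toy stand-in for the simulator: rate saturates with the total copy count
toy_rate = lambda c, n: min(1.0, 0.01 * c * n)
```

With `toy_rate` and a 50% delivery target, the scan settles on the cheapest pair whose total copy count reaches the saturation point, which is exactly the overhead-versus-delivery trade-off described in the bullets above.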
Figure 6: Delivery rate for 20, 50, 100 and 150 replicas
In fact, observing only node 24's delivery rate, the simulations show that a configuration where 8 nodes make 20 message replicas each gives the best delivery rate to this node.
New studies and simulations are needed to gain more insight into how to build rules of thumb for the optimized routing configuration of this type of algorithm. Although each topology may need a different setup, one hopes to find some general rules that apply across the spectrum of 4G topologies and provide adequate routing performance.

ANALYSES OF UNTRADITIONAL ROUTING

The restrictions imposed by traditional network technologies have been presented, and we have shown how new ways of thinking about routing have emerged to overcome them. These include insights and parallels drawn from observing a number of biological, social and epidemic behaviors. A number of proposals associated with these metaphors make use of mobility patterns, pheromone levels, user habits and profiles, relationships and other types of stimulus to offer self-organization, load balancing, adaptability and advanced technology-dependent routing. This section performs some concrete evaluations to determine the impact of some important network and other parameters and to examine their configuration. To achieve this, the reader is invited to review some optimization and evaluation techniques that are very relevant to the context of routing in future networks.

Percolation

Percolation theory is inspired by the observation that there is a limit value at which a physical material makes a transition between two states, known as a "critical phenomenon". For instance, water (a fluid) has two states, liquid and gas: a bottle of water transitions from liquid to gas when subjected to a sufficiently high temperature, namely 100°C at sea level. Another example is a filter: a stone contains a certain number of pores, and when the number of pores reaches a threshold, water passes through to the other side of the stone. These probabilistic changes of state are defined according to a percolation model that uses a threshold to determine such transitions. Hence, such a strategy helps determine which routing parameter values would cause percolation, i.e. successful knowledge sharing, in the context of future 4G networks.
Some works set up a static percolation coefficient value in order to improve routing. Spatial gossip is an example of a routing algorithm that used this to select the forwarding node. Other works chose to evaluate the environment to discover when such an algorithm percolates. For instance, one could seek the relation between buffer size and the delivery success rate; alternatively, one could check whether there is a limiting buffer size that determines success or failure of message delivery. The analogy in this example associates messages with a fluid in a percolation scenario, and nodes with the surface. Consequently, when all the messages leaving their sources reach their destinations, one says that routing has percolated.
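The critical-phenomenon character of percolation can be seen in a minimal Monte Carlo illustration, here on a square lattice of open/blocked sites rather than a network (purely illustrative):

```python
import random

def percolates(grid):
    """True if open sites connect the top row to the bottom row of a
    square grid (4-neighbour flood fill), i.e. the 'fluid' gets through."""
    n = len(grid)
    stack = [(0, c) for c in range(n) if grid[0][c]]
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

def percolation_probability(n, p, trials=100, seed=1):
    """Fraction of random n x n grids (each site open with probability p)
    that percolate; sweeping p reveals the sharp critical transition."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials
```

Sweeping `p` from 0 to 1 shows the probability of percolation jumping sharply near a critical value, the same kind of threshold behavior sought for buffer size and message replication in the routing scenario.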

Diffusion and Chemotaxy

Adolf Fick was among the pioneering researchers who studied the diffusion process extensively. He observed that salt movement in liquids occurs from high to low concentration, and defined an equation expressing the proportionality between the flow and the spatial gradient of concentration. Other researchers also studied diffusion, observing spontaneous particle movement from low to high concentration. There is, however, a common concept among these equations: they express the movement of a cell or substance towards equilibrium, considering, in general, position as a variable, or both time and position.
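In one dimension, the proportionality Fick observed is captured by his first law, and the time evolution towards equilibrium by his second:

```latex
% Fick's first law: the flux J is proportional to the concentration gradient
J = -D \, \frac{\partial \varphi}{\partial x}
% Fick's second law: how the concentration \varphi evolves over time
\frac{\partial \varphi}{\partial t} = D \, \frac{\partial^{2} \varphi}{\partial x^{2}}
```

Here $\varphi$ is the concentration, $D$ the diffusion coefficient, and the minus sign encodes the movement from high to low concentration.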
Similarly, chemotaxy is a movement behavior following a concentration gradient, but it is not a spontaneous event. Chemotaxy represents attraction or repulsion among cells due to some substance. It is commonly used in biology to analyze the behavior of human cells, viruses or bacteria. However, such behavior has been analyzed and shown to also benefit the routing environment: routing policies can be seen as the substance that modifies spontaneous movement.
Given that some message forwarding is based on a probabilistic mechanism set according to the encounter frequency of nodes (e.g. PRoPHET), one could evaluate diffusion by modifying node movement in order to verify whether node mobility can act as a stimulus influencing this behavior. In other words, one could check whether or not node mobility increases the message delivery rate.
Considering that PRoPHET can also be executed in sensor networks, policies that move a node by several spaces in order to increase its encounter frequency may be used to improve the delivery ratio. Conversely, if fake information is injected, altering the encounter frequency, message delivery decreases, because messages are removed from a buffer before actually finding their destination.
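PRoPHET's encounter-based mechanism can be sketched through its delivery-predictability update (from Lindgren, Doria and Schelén's protocol; the constants below are the typical defaults, and the helper names are ours):

```python
P_INIT, GAMMA = 0.75, 0.98  # typical PRoPHET parameter values

def on_encounter(p, a, b):
    """When a meets b, increase a's delivery predictability for b:
    P(a,b) = P_old + (1 - P_old) * P_INIT."""
    old = p.get((a, b), 0.0)
    p[(a, b)] = old + (1.0 - old) * P_INIT

def age(p, k):
    """Decay all predictabilities after k time units without contact:
    P = P_old * GAMMA^k."""
    for key in p:
        p[key] *= GAMMA ** k
```

Repeated encounters push the predictability towards 1, so frequently met nodes become preferred next hops; this is precisely why a mobility policy that raises encounter frequency, or faked encounter information, directly changes the delivery rate.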

Stigmergy

Pierre-Paul Grassé introduced the stigmergy concept after studying nest building. He observed that social individuals use indirect communication to coordinate their efforts towards some objective. For example, ants lay down more pheromone when they find food, enabling other ants to detect and react to this stimulus. In summary, they indirectly interact and cooperate to feed (or to find a path, in the routing analogy). Although the behavior is well understood, there is a lack of mathematical models or equations describing stigmergy. Since the stimulus is typically not captured by a well-established equation, one may take a given variable as the stimulus for stigmergic behavior and verify whether a single node with a faked variable can modify the stigmergy of the whole group and, consequently, the environment.
Given that the decision mechanism of PRoPHET routing evaluates the number of encounters with neighbors, one could set the encounter frequency of a single node to fake values and then observe the success ratio. The encounter frequencies act as the stigmergic stimulus through which the nodes collaborate to route information.
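A pheromone-style sketch makes the stigmergic state concrete (an ant-colony-style update used as an illustration, not taken from PRoPHET):

```python
RHO = 0.1  # evaporation rate per round

def reinforce(pheromone, link, amount=1.0):
    """A successful delivery over `link` deposits pheromone there,
    indirectly steering later traffic towards it (stigmergy)."""
    pheromone[link] = pheromone.get(link, 0.0) + amount

def evaporate(pheromone):
    """Periodic evaporation lets stale routes fade away."""
    for link in pheromone:
        pheromone[link] *= (1.0 - RHO)

def next_hop(pheromone, neighbors):
    """Prefer the neighbor on the most strongly marked link."""
    return max(neighbors, key=lambda n: pheromone.get(n, 0.0))
```

Note that a single node depositing inflated pheromone skews every other node's choices without any direct message exchange, which mirrors the fake encounter-frequency experiment described above.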