Complex systems are all around us. We are part of many of them, and many of them are part of us. Our neural networks, our social networks, the structure of galaxies, and the stock market are just a few examples of such rich, interconnected systems. They consist of many parts and the interactions between them, which, taken as a whole, form patterns and structures that are more than the sum of their parts. We call this emergent, self-organizing complex behavior, and the field of complex systems seeks to uncover and explain its common laws. Complex network theory has proven to be a very useful tool for characterizing and studying the emergent behavior seen in complex systems.
Every complex system can be represented as a network, in which nodes represent the units of the system and links represent the interactions between them. These networks are neither regular in structure, as metal crystal lattices are, nor random, but lie somewhere in between. In an effort to characterize and describe the structure of various complex networks, scientists working in complex network theory have developed a large set of measures, which are often not independent of one another and whose mutual dependence is usually unknown. This is precisely why the degree of randomness in real complex networks has long remained a mystery.
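To make two of the measures discussed below concrete, here is a minimal, self-contained sketch (the toy graph and function names are illustrative assumptions, not taken from the paper) that computes a network's degree sequence and each node's clustering coefficient from a plain edge list:

```python
from collections import defaultdict

# Hypothetical toy graph (undirected): a triangle (0-1-2) with a pendant node 3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Degree sequence: the sorted list of each node's number of first neighbors.
degree_sequence = sorted(len(adj[n]) for n in adj)  # → [1, 2, 2, 3]

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0  # clustering is undefined for degree < 2; use 0 by convention
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Node 2 joins the triangle to the pendant node, so only one of its
# three neighbor pairs is connected: C = 1/3.
print(degree_sequence)
print([clustering(n) for n in sorted(adj)])
```

Averaging `clustering(n)` over all nodes of a given degree gives the degree-dependent clustering mentioned later in the article.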
What is the smallest set of measures sufficient to describe the topological structure of real networks? A recent work published in Nature Communications answers this question. Dr. Marija Mitrović Dankulov of the Scientific Computing Laboratory (SCL), National Center of Excellence for Complex Systems, Institute of Physics Belgrade, together with her colleagues from the USA, Finland, and Spain, has shown that this set is relatively small. They used a method that systematically increases the number of considered measures and applied it to six networks representing different biological, social, and technological systems.

Starting from the original network, at each step they generated an ensemble of random networks that shared some of the characteristics of the original network while the rest of the structure was randomized, and compared these with the original. The set of features held fixed at each step is a superset of the features fixed in the previous step, which ensures systematic convergence toward the original network. The number of steps needed to re-create the original network determines the smallest set of features sufficient to describe its structure. They showed that random networks with the same degree sequence (the sequence of the numbers of first neighbors), the same joint degree matrix (which describes how nodes of given degrees connect to each other), and the same dependence of the clustering coefficient on degree (which measures, on average, the connectivity patterns among the neighbors of nodes of a given degree) as the original network also reproduce the values of the other topological features. In fact, they showed that this holds for all self-organized networks, regardless of their nature. This work shows that, even though the distance between random and real-world networks is relatively small, it is not insignificant.
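The first rung of such a ladder of randomizations preserves only the degree sequence. A minimal, self-contained sketch of that step (not the authors' code; the function names and toy graph are illustrative assumptions) randomizes a graph with degree-preserving "double edge swaps" and leaves every other structural feature free to change:

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Degree-preserving randomization of an undirected edge list.

    Repeatedly picks two edges (a, b) and (c, d) and rewires them to
    (a, d) and (c, b); each node keeps its degree, so the degree
    sequence of the result equals that of the input.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    done = attempts = 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # swap would create a duplicate edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degrees(edges):
    """Map each node to its number of incident edges."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

# Hypothetical toy graph: two triangles joined by a bridge.
original = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
rewired = double_edge_swap(original, n_swaps=10)
print(degrees(rewired) == degrees(original))  # degree sequence is preserved
```

Comparing a measure such as clustering between `original` and many such `rewired` samples shows which features the degree sequence alone does or does not fix; the paper's method then repeats this comparison while additionally constraining the joint degree matrix and the degree-dependent clustering.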