Mathematical Appendices



**Mathematical Appendix for Topic 7**

In order to build the simulations of Topic 7 and explore the implications of its hypothesis, we had to express the underlying ideas in mathematical terms. In the end, we implemented the following framework.

Every agent possesses a knowledge vector of **n** components. Each component is a
positive real number representing a particular type of knowledge; its magnitude
represents how knowledgeable the agent is in that area. For any two agents, we
need a measure of similarity (or, equivalently, disparity) between their
knowledge vectors. Euclidean distance is a natural disparity measure: for
instance, it equals zero if and only if the two vectors are identical.

Unfortunately, Euclidean distance scales with vector magnitude: intuitively, two vectors do not become more similar to each other when both magnitudes shrink by the same factor. We therefore exploit the fact that all component values must be positive and normalize our disparity measure by the maximum possible distance between two vectors of the given magnitudes. See the following Figure:
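As a sketch of this normalization: for nonnegative vectors of fixed magnitudes, the Euclidean distance is largest when the vectors are orthogonal (e.g. supported on disjoint components), giving a maximum of sqrt(|a|² + |b|²). Treating that as the normalizing constant is our assumption; the function name `disparity` is ours as well.

```python
import math

def disparity(a, b):
    """Normalized disparity p between two knowledge vectors.

    Assumes all components are nonnegative, so the largest possible
    Euclidean distance for the given magnitudes is sqrt(|a|^2 + |b|^2),
    attained when the vectors are orthogonal (disjoint supports).
    Returns a value in [0, 1]; 0 means the vectors are identical.
    """
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    max_dist = math.sqrt(sum(x * x for x in a) + sum(y * y for y in b))
    return dist / max_dist if max_dist > 0 else 0.0
```

With this scaling, p = 0 for identical vectors and p = 1 for vectors with no knowledge types in common, regardless of overall magnitude.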

Subsequently, we calculate the amount of knowledge shared as proportional to two factors:

1. The value of communication, proportional to **p**, the disparity in agent
perspectives;

2. The ease of communication, proportional to **-log p**, which grows as the
perspectives become more similar.

Finally, we ensure that information travels strictly from more knowledgeable to less knowledgeable agents, so that no agent can ever come to possess more of a particular knowledge type than the agent who initially possessed the most of it.
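A minimal sketch of such a transfer rule, under our assumption that each knowledge component flows independently from whichever agent has more of it to the one with less, with the receiver capped at the leader's level (the function name and the `rate` parameter are ours):

```python
def transfer(agent_x, agent_y, rate):
    """One-way knowledge transfer along each component.

    For each knowledge type, the less knowledgeable agent moves a
    fraction `rate` of the gap toward the more knowledgeable one,
    and is capped at that agent's level, so no agent can exceed the
    component's current maximum. Returns the two updated vectors.
    """
    new_x, new_y = [], []
    for x, y in zip(agent_x, agent_y):
        hi, lo = max(x, y), min(x, y)
        gained = lo + rate * (hi - lo)   # move part of the way up
        gained = min(gained, hi)         # never exceed the leader
        if x >= y:
            new_x.append(x)
            new_y.append(gained)
        else:
            new_x.append(gained)
            new_y.append(y)
    return new_x, new_y
```

In a full simulation, `rate` would presumably be driven by the sharing amount computed from the disparity p, but the capping logic is independent of that choice.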
