How reliable are the results in Columinity for small and large teams - theliberators/columinity.docs GitHub Wiki

There is no hard rule for the optimal team size in Columinity. Small or large teams can use it. However, team size can influence the results in different ways.

Columinity calculates its team results based on the participants per segment (team members, stakeholders, supporters). It is a statistical reality that results often become more accurate as more people participate. However, this does not mean the results are necessarily inaccurate for small teams. It all depends on the variation in scores.

Take the example below, which illustrates the impact of team size.

Assume you have 3 team members answering questions for the "Self-Management" factor. Two team members scored a 4 on this factor, which is about average, and one scored a 1, which is extremely low. The mean average would be 3. If the team included three more average-scoring participants, you'd get 4, 4, 4, 4, 4, and 1, with a mean average of 3.5.

This example shows that an extremely low score has a more significant impact on a small team than on a larger one. However, it could be argued that the score of 3 is still accurate for the small team, because it highlights that things aren't going well for at least one member. This is less visible in a larger team, where the low score is diluted by the higher scores of the other members. So the smaller team needs to talk about this, and that exact conversation is why we built Columinity in the first place. It might turn out that the low-scoring member interpreted the question differently, which would explain the difference. But it might also be a real issue.
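The arithmetic above can be sketched in a few lines of Python. The scores are the illustrative values from the example, not real Columinity data:

```python
from statistics import mean

# Illustrative scores from the example above (assuming a 1-5 scale)
small_team = [4, 4, 1]            # two average scores, one extreme outlier
larger_team = [4, 4, 4, 4, 4, 1]  # same outlier, diluted by more average scores

print(mean(small_team))   # 3.0 -> the outlier pulls the mean down a full point
print(mean(larger_team))  # 3.5 -> the same outlier has only half the impact
```

The single low score moves the small team's mean twice as far as the larger team's, which is the effect the example describes.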

It is important to note that Columinity uses more advanced algorithms to calculate team-level scores than those shown above; the example would become too complicated otherwise. Columinity uses all individual questions as data points and calculates the median, which is less susceptible to extreme scores than the mean. Finally, Columinity shows the range of scores in a team and highlights factors with a very high range, as this indicates disagreement or at least different perspectives.
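As a rough sketch of why the median and the range behave differently from the mean, using the same illustrative scores (the warning threshold here is an assumption for illustration, not Columinity's actual rule):

```python
from statistics import mean, median

# Illustrative scores from the earlier example (assuming a 1-5 scale)
scores = [4, 4, 1]

print(mean(scores))    # 3.0 -> pulled down by the single extreme score
print(median(scores))  # 4   -> unaffected by the single extreme score

# A wide range hints at disagreement or different perspectives.
# This threshold is purely illustrative, not Columinity's actual rule.
HIGH_RANGE_THRESHOLD = 2
score_range = max(scores) - min(scores)  # 3
if score_range >= HIGH_RANGE_THRESHOLD:
    print(f"High range ({score_range}): discuss whether this is signal or noise")
```

The median ignores the lone outlier that dragged the mean down, while the range of 3 still surfaces the disagreement so the team knows to talk about it.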

Aggregation is useful

Columinity is purposefully designed to aggregate the data from teams into larger groupings. Even though the data at the level of a small team may be more prone to noise, that noise often dissipates when teams are aggregated into larger samples. For example, small teams can be tagged in the Teams Dashboard to analyze them separately (as a grouping) and to identify patterns and impediments unique to such teams. When small teams are not included in Columinity, such opportunities are lost.

Recommendations

  • In your interpretation, pay attention to the range of scores for each factor. While small teams are more susceptible to wide ranges, a small team does not automatically have one.
  • For factors with a very high range (Columinity marks these with a warning), discuss with each other whether this is an actual signal or just noise.
  • Emphasize the value of conversations based on the data over deep data analysis.

Summary

The more data points you have for your team, the more reliable the results are likely to be. However, the inverse is not automatically true. Ultimately, it is important to talk about the results with your team to understand whether the signal reported by Columinity is real or whether it is noise.