‘White fronting’ is a strategy used by Black African digital entrepreneurs to improve their odds of attracting financial investment: they hire a white CEO, or partner with a white colleague who acts as the ‘face’ of the company. A common response from other researchers to the idea of white fronting as a strategy is the instinct to validate this information. Is it anecdotal? Should you believe the subject(s)? Uh-oh…have you “gone native”? “What’s your n?” I raise these responses to probe the assumptions underlying the term ‘big data’, and to ask whether how we think about data validity will lead to exclusion in the data-driven age.
As a mixed methodologist, I certainly do not wish to spark an internecine methodology war, but let me state at the outset that I stand with the qualitative researchers of social experience who agree that, depending on the question, n = 1 is as valid as n = 100, and that this is also an ethical stance. As a buzzword, ‘data-driven’ is used to convey robustness, legitimacy and objectivity. Even when ‘big data’ is not referenced, ‘data-driven’ often implies data of the kind that has high frequencies—the data of the majority—where n = 1, or n = 30, might not be viewed as objective or important. The tendency to trust machine-based data is rooted in the certainty of quantity. An algorithm can gather millions of data points, and some perspectives hold that this translates into an approximation of the truth. Depending on the inquiry, this is easily co-signed, but research by Safiya Noble, Ruha Benjamin and many others informs us that we should abandon this belief in the objectivity of technology, particularly when dealing with social worlds (even models that calculate epidemiological risk during a pandemic turn out to be up for debate).
During a webinar on Inclusivity & Equity in Data Economies on September 3rd, we were able to mine the lived experiences of experts in the digital economy drawn from across the continent: Nichole Yembra (The Chrysalis Capital), Ebrima Fatty (Afrikasokoni), Simunza Muyangana (BongoHive) and Agostine Ndungu (Endeavour Kenya). The participants generally acknowledged the inherent bias in the investment arena that might lead to white fronting but, like many other actors I have encountered, differed on what the bias represents and where the solution lies (e.g. new policies in Kenya around co-ownership; some argue that preference for one’s own is typical of social systems, and therefore not something worth legislating away).
The discussion reinforced that involvement in the digital economy arena almost requires a personal belief in the objectivity of markets—that markets are big data collection technologies in their own right. Failures are often characterised as markets being distorted, rather than as markets working as they were designed. More data is often seen as a means of undoing these distortions: an expectation, for instance, that if you can evidence the performance of Black-fronted firms, investment will follow the data.
When the claim is inherent bias in the investment market for digital enterprise, there is indeed a definite need for data to validate that claim. This is why I thanked one of the participants, Agostine Ndungu, for generating the quantitative data that can mollify the “What’s your n?” researcher whose scientific vantage point is rooted in the validity of aggregates and frequencies. But data, big or small, is not enough to undo ‘distortions’. Given the tendency to mistrust marginal, self-reported narratives, what exclusions, and what validation of exclusion, does a machine-based, data-driven future hold? Who should we doubt, and when? How can we create systems that allow ‘small data’ to provide the insight, balance and correction that might be needed?