
Engendering digital connectivity: Is women’s exclusion in design the missing piece to gender digital inclusion? (Part 1)

The gaping gender digital divide

Digital connectivity is no longer a luxury but a necessity for engagement with the wider world. However, access to the digital world and its sustained use are restricted and often gendered. The stark differences in digital access and participation reveal this: women in LMICs are 7% less likely than men to own a mobile phone and 19% less likely to use the Internet, which leaves 310 million fewer women than men online. This gendered digital exclusion has cost countries USD 1 trillion in GDP, deprived women of digital possibilities, and deepened socioeconomic and gendered inequalities.

Even when women own mobile phones, their usage differs from that of men, as they use a narrower range of mobile technologies. In Senegal, while 57% of women use mobile internet, only 17% perform three or more mobile internet use cases daily. In India, only 18% of women perform three mobile internet use cases, compared to 30% of men.

The scarcity of digital use cases for women

Women frequently encounter challenges when they attempt to access the same digital services as men. In India, women use only 4.3 mobile use cases on average, whereas men use 6.3. This disparity is due to women’s limited digital skills, the lack of relevant available functions, and insufficient use cases tailored to women’s specific needs.

The challenge of limited use cases is evident from the low phone usage that followed the distribution of free smartphones to women in Ghana, Kenya, Rwanda, and India. In these instances, the absence of digital literacy and the limited use cases for such phones meant that free ownership did not translate into actual use. The low usage also suggests that mobile phone ownership among very poor people can be precarious: the phones may be lost, broken, or stolen, and economic externalities can further hamper women’s ability to use these devices effectively.

Digital products and services are often tailored to suit the preferences and needs of the default users, who are most likely male, which inadvertently neglects the diverse experiences of women. In the health sector, for example, women often experience limited digital use cases due to the underrepresentation of their health issues in digital innovations. A well-known example occurred when Apple’s Health app launched in 2014: it was equipped with numerous features to help users track various aspects of their health and fitness, yet it notably lacked an option for women to track their menstrual cycles. Similarly, other health-related digital tools and platforms have been designed primarily from a male perspective and overlook women’s unique health concerns and needs.

Where do these gender disparities arise?

Gender biases in AI have roots in the design stage, the training datasets, or AI-driven decision-making. Existing sex-disaggregated data gaps aggravate these biases. Facial recognition software has shown higher accuracy on male and lighter-skinned faces but struggled significantly with darker-skinned women, with error rates as high as 34.7%. Similarly, AI tools, such as ChatGPT, reinforce gender biases and stereotypes. The tool offered advice on maintaining a work-life balance and spending quality time with family exclusively to men and neglected to extend this guidance to women. Likewise, voice recognition systems perform better with male voices and often struggle to recognize female voices accurately.

The Unicode Consortium designed emoji characters of police officers and athletes predominantly as male professionals, despite emoji having a 70% female user base. Similarly, women’s representation in video games is stereotyped, a consequence of the significant gender imbalance in creative development that directly influences content and leads to negative, stereotypical portrayals of female characters.

Men as the default users

A fundamental reason for missing use cases for women is the predominant role of men in designing digital spaces. Men often create digital platforms for male users based on their own experiences and perspectives and inadvertently exclude women’s needs and experiences. They do not consider women’s barriers in accessing digital products, including the type of devices and data packs, digital literacy, and social norms. Such products fail to address women’s content needs, interests, and perceptions.

Furthermore, design limitations of digital products and platforms neglect the female experience. Phones and software are not designed intuitively for low-literacy populations, which mainly comprise women, especially in least developed and developing countries. Additionally, women often share access to phones or use cheaper devices with fewer features, including limited RAM, which prevents them from using many digital applications.

A large gap persists in data on women in healthcare innovation. Healthcare-related AI systems are driven mainly by men and operate on statistics skewed toward men. This gender gap has led to a lack of women-specific innovations and AI systems. Such systems are 50% more likely to misdiagnose heart attacks in women due to underrepresented physiological indicators, which leads to fewer use cases. This is because fewer women design these AIs, not because fewer women have heart attacks.

The lack of range of data disaggregated by age, sex, income, location, and education on user behaviors leads to design oversights, especially for various women’s segments. This data gap leads to a lack of digital products tailored specifically for women.

MSC’s DEBIT framework helps us understand use cases better by assessing individuals’ choice of channels through five factors: Diffidence, Education, Bias, Investment, and Trust. Together, these factors shape a person’s perception of gain or loss, which guides their choice of transaction channel and determines whether they will use a digital channel. The DEBIT framework study revealed that the user experienced the least perceived loss when she transacted at phygital channels and the highest perceived loss at digital channels.
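To make the idea concrete, here is a minimal sketch that scores three channels against the five DEBIT factors and picks the channel with the lowest total perceived loss. The channel names, factor scores, and equal weighting are illustrative assumptions on our part, not MSC’s actual scoring methodology.

```python
# Illustrative DEBIT-style scoring sketch. The channels, scores, and equal
# weighting below are hypothetical assumptions, not MSC's actual methodology.

CHANNELS = ["cash", "phygital (assisted agent)", "fully digital"]

# Hypothetical perceived-loss scores (0 = no perceived loss, 5 = high) for each
# DEBIT factor: Diffidence, Education, Bias, Investment, Trust.
PERCEIVED_LOSS = {
    "cash":                      {"D": 1, "E": 0, "B": 1, "I": 2, "T": 1},
    "phygital (assisted agent)": {"D": 1, "E": 1, "B": 0, "I": 1, "T": 0},
    "fully digital":             {"D": 4, "E": 4, "B": 3, "I": 2, "T": 4},
}

def total_perceived_loss(channel: str) -> int:
    """Sum the five DEBIT factor scores for a channel."""
    return sum(PERCEIVED_LOSS[channel].values())

if __name__ == "__main__":
    # The channel with the lowest total perceived loss is the one the user is
    # most likely to choose, which mirrors the study's finding that phygital
    # channels carried the least perceived loss and digital channels the most.
    ranked = sorted(CHANNELS, key=total_perceived_loss)
    for channel in ranked:
        print(f"{channel}: perceived loss = {total_perceived_loss(channel)}")
    print(f"Most likely choice: {ranked[0]}")
```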

Another MSC study showed a significant gender gap in the usage of digital financial services among male and female garment workers due to limited use cases for women. While 91% of the men surveyed visit ATMs alone, less than half of the women do the same. This is because women prefer informal financial channels that fulfill their credit needs and see little use for digital financial channels. Women’s uptake of digital products and services remains low due to the lack of compelling use cases, lack of trust, and low confidence in the digital ecosystem.

Digital service providers do not consider gender norms and gender roles, or illiterate and innumerate people—the oral segments—when they design new digital products and services, which significantly diminishes use cases. In the context of financial services, many women do not use advanced use cases even with a regular inflow of cash because of product design challenges and limited communication of the value of these products. All this suggests that provisions or designs that enhance the use cases of a digital product or service for women will be valuable for its uptake.

Product designs that grant users more control and ensure privacy can also enhance use cases. Even the size of phones, as with most gender-neutral products, is often tailored to the average male hand, which poses usability challenges for women. This is evident in women’s discomfort when they hold and use larger smartphones with one hand, especially for tasks such as taking photos.

We wonder why the digital world is not gender intentional. We also deliberate on how we can make it gender intentional and discuss women’s roles in this endeavor. Please read the second part of this series, where we unpack these questions.

AgriStack – A DPI for farmers and the agriculture ecosystem

AgriStack is being developed as a Digital Public Infrastructure (DPI) that consists of registries, datasets, APIs, and IT systems. It is enabled by a common set of policies, standards, and guidelines that make agricultural data accessible to the public and private sectors for the creation of services and solutions. The initiative is designed with a clear vision: to simplify farmers’ access to affordable credit, high-quality farm inputs, personalized advisories, and convenient market linkages. It also aims to streamline government planning and implementation of farmer-centric benefit schemes.
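To illustrate what a DPI built from registries, datasets, and APIs could look like in practice, the sketch below models a consent-gated lookup against a toy farmer registry. The field names, registry contents, and consent rule are simplified assumptions of ours and do not reflect AgriStack’s actual schema or interfaces.

```python
# Hypothetical sketch of a DPI-style farmer registry lookup. Field names,
# registry contents, and the consent rule are illustrative assumptions; they
# do not reflect AgriStack's actual schema or APIs.

from dataclasses import dataclass
from typing import List

@dataclass
class FarmerRecord:
    farmer_id: str                  # unique ID in the farmer registry
    land_parcel_ids: List[str]      # links into a land-records registry
    crops: List[str]                # links into a crop-sown registry
    consent_given: bool = False     # farmer's consent to share data

# A toy registry standing in for the databases a DPI would expose via APIs.
REGISTRY = {
    "F-001": FarmerRecord("F-001", ["LP-42"], ["wheat", "maize"], consent_given=True),
    "F-002": FarmerRecord("F-002", ["LP-77"], ["paddy"]),
}

def fetch_for_service_provider(farmer_id: str) -> dict:
    """Return a farmer's data to a service provider only if consent exists."""
    record = REGISTRY.get(farmer_id)
    if record is None:
        return {"error": "farmer not found"}
    if not record.consent_given:
        return {"error": "consent not granted"}
    return {
        "farmer_id": record.farmer_id,
        "land_parcels": record.land_parcel_ids,
        "crops": record.crops,
    }

if __name__ == "__main__":
    print(fetch_for_service_provider("F-001"))  # data returned
    print(fetch_for_service_provider("F-002"))  # consent not granted
```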

In addition, there are state-level initiatives, such as the Digital Farmer Services (DFS) platform in Bihar. The DFS will serve as a one-stop solution that caters to the needs and aspirations of Bihar’s small-scale producers (SSPs) through a single digital platform. It will provide seamless access to services such as government programs, agriculture advisories, financial services, and market linkages.

It will be built upon the existing state databases and will also be integrated with private service providers for additional capabilities and specialized services.

CGAP – Strengthening climate resilience and adaptation through financial services

CGAP commissioned a study to understand the direct and indirect impacts of cyclones and their associated perils on the lives and livelihoods of the affected communities in southwest Bangladesh.

The adaptation strategies of these poor and vulnerable households and the role of financial services in those strategies were also investigated. The study explored pathways to enhance the role of financial services in adaptation strategies and strengthen the resilience of these communities against climate change.

India’s digital inclusion story: Lessons from the synergy of digital connectivity and DPIs

The report was first published on the India Mobile Congress website in October 2023.

Over the past decade, India has seen transformative moments that have propelled its digital revolution onto the global stage. The collaboration between the public and private sectors has been a driving force to foster innovation, improve service delivery, and prioritize user-centered experiences. These initiatives have advanced digital infrastructure and given rise to disruptive ICT innovations, adaptable regulatory frameworks, supportive policies, and an unwavering commitment to customer-centricity. The Department of Telecommunications (DoT), Ministry of Communications, Government of India, has played a central role in this remarkable journey, serving as a pivotal force in facilitating digital connectivity.

Against the backdrop of current and emerging development challenges in both developed and developing economies, India’s extensive digital connectivity and pioneering Digital Public Infrastructure (DPI) programs stand as beacons of progress in the nation’s digital economy. These DPIs are now well-positioned to serve as valuable benchmarks to craft resilient and inclusive digital service delivery models in other economies. Their significance is particularly evident in how they can advance financial and digital inclusion, with a special emphasis on individuals from economically disadvantaged backgrounds, and offer valuable lessons to address the unique socioeconomic issues faced by both developed and developing nations. This report highlights crucial Indian case studies across diverse areas, such as identity, financial services, healthcare, education, and agriculture. It underscores the potential for these cases to be replicated and adopted to benefit developed and developing nations alike.

In a first, Fair Price Shops onboarded on the Open Network for Digital Commerce (ONDC)

The press release was first published on the PIB website on 7th February 2024.

As a step towards Digital India, Shri Sanjeev Chopra, Secretary, Department of Food and Public Distribution, Government of India, launched a pilot to onboard Fair Price Shops (FPSs) in the Una and Hamirpur districts of Himachal Pradesh on the Open Network for Digital Commerce (ONDC). The pilot was launched virtually in 11 FPSs – five in Una district and six in Hamirpur district. This is the first time that Fair Price Shops have been onboarded on ONDC.

Speaking on the occasion, Shri Chopra said this landmark initiative adds to the Department’s continuous efforts to transform Fair Price Shops. The effort aims to provide additional avenues of income generation for FPS dealers and to enhance beneficiary satisfaction.

Furthermore, he underlined that this initiative provides numerous benefits for FPS dealers, including visibility in the digital marketplace, access to a larger customer base beyond NFSA beneficiaries, and the ability to compete on an equal footing with large retailers and e-commerce platforms. Additionally, beneficiaries who face difficulties in making online purchases can approach the FPS dealer to place online orders on their behalf.

He highlighted that the success of the pilot being implemented in Himachal Pradesh will serve as a model for statewide and nationwide adoption in the future. He also appreciated the support of MicroSave Consulting (MSC) in deploying this pilot program.

After the launch event, an in-person workshop was organised for the FPS dealers in Una and Hamirpur districts. The workshop explained how to catalogue products, service orders, and navigate the commission structure on ONDC.

Ms. Anita Karn, Joint Secretary (PD), Shri Ravi Shankar, Director (PD), Shri Mitul Thapliyal, Partner, MSC and Shri Saransh Agrawal, ONDC were also present during the launch event.

Can AI help with locally-led adaptation? The challenges.

A world split by the digital divide

Despite the dramatic spread of digital technology, much of the global south continues to fall behind in its adoption and use. Shockingly, only 13% of smallholder farmers in Sub-Saharan Africa have registered for any digital service, and only 5% actively use them. In 2017, MSC documented the reasons why poor people fail to access digital technologies. For women, social factors magnify these barriers.

As a result, many of the communities most vulnerable to climate change are excluded and unable to participate in the digital revolution. This deprives them of opportunities to access critical information, financial services, key inputs, and collaboration. We need to enlist, train, and deploy a range of community-focused players to help vulnerable communities use the growing array of valuable digital tools to optimize their locally-led adaptation (LLA) planning, implementation, and governance. These players could include the staff of community-based organizations, financial service providers with reach into remote rural areas, agricultural extension workers, agriculture input dealers, and cash-in and cash-out (CICO) agents. Indeed, this is probably the only way we can scale up LLA to the levels required by climate change’s rapidly emerging and increasingly debilitating impacts.

Can AI help?

It is immensely appealing to think that AI can play a role in the development, implementation, and oversight of LLA strategies. However, the development of these strategies necessarily requires the identification, analysis, summarization, and communication of a diverse array of information, datasets, and complex ideas. Effective LLA strategies must consider policy and regulation, climate science, ecology, geography, agriculture, health, financial services, and gender, among other factors. AI could potentially play an important role in distilling the key elements and critical success factors from this daunting range of variables.

Yet the desire to apply AI to complex problems that have historically remained elusive or irrelevant to most modern technology or digital developments has often made the situation worse. The UC Berkeley School of Information has already shown how artificial intelligence bias affects women and people of color. Much of this bias stems from feedback loops built on the most readily and abundantly available data used to train the algorithms.

The school notes, “AI is created using a feedback loop. Real-world experiences shape data, which is used to build algorithms. Those algorithms drive decisions affecting real-world experiences. This kind of circular reasoning means that bias can infiltrate the AI process in many ways.” These biases will be amplified further for people on the analog side of the digital divide.
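The toy simulation below illustrates this loop under assumed numbers: if each retraining round draws only on data generated by the people a system already serves well, the share of an underrepresented group in the training data shrinks round after round. The figures and the update rule are purely illustrative, not empirical estimates.

```python
# Toy simulation of the bias feedback loop described above.
# All numbers are illustrative assumptions, not empirical estimates.

def simulate_feedback_loop(rounds: int = 5,
                           initial_share: float = 0.30,
                           service_gap: float = 0.8) -> list:
    """Track an underrepresented group's share of the training data over time.

    Each round, the model is retrained on data generated by the people it
    already serves. The underrepresented group generates less usable data
    (service_gap < 1), so its share of the next training set shrinks, which
    in turn degrades the service it receives.
    """
    shares = [initial_share]
    share = initial_share
    for _ in range(rounds):
        group_data = share * service_gap   # data from the underrepresented group
        other_data = (1 - share) * 1.0     # data from everyone else
        share = group_data / (group_data + other_data)
        shares.append(share)
    return shares

if __name__ == "__main__":
    for i, s in enumerate(simulate_feedback_loop()):
        print(f"Round {i}: underrepresented group's share of training data = {s:.1%}")
```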

If we are to close the digital divide, we will need a highly cautious, context-specific strategy that considers the needs, interests, and capabilities of local participants. The data on which AI is trained is crucial, so if we want to deploy it to assist with LLA, and indeed many development challenges, we must:

  • Avoid the imposition of external or top-down solutions and balance the use of digital technologies with respect for and acknowledgment of local expertise, culture, and values;
  • Ensure local players, particularly those without the required infrastructure, expertise, or money, can access and use digital technology;
  • Resolve the ethical and legal concerns about data ownership, permission, and use, and ensure the reliability, security, and privacy of digital data and systems.

So, what are the implications for digitally-enabled, locally-led adaptation?

Chatbots and natural language processing (NLP) present valuable possibilities to improve access to information for LLA strategies. However, another key limitation amplifies the challenges outlined by the UC Berkeley School of Information: the datasets used to train NLP systems often lack comprehensive coverage of local dialects, native languages, and regional cultural knowledge. When people are stranded on the analog side of the digital divide, that exclusion is reinforced, as algorithms are built and trained on data from those already connected to the digital world and thus exclude the voices of those who are not. We see an instance of such exclusion in the fact that 99% of the world’s online content is limited to only 40 languages.

Limitations in AI technology highlight the digital divide, as MSC experienced in our recent projects. In India and Bangladesh, we used AI to analyze voice recordings. Despite being trained on the local languages, the NLP systems, which were developed with commonly available digital voice data, struggled with the dialects and accents of marginalized groups. Additionally, when we attempted to use AI to anticipate responses from rural women for survey follow-up questions, all the AI systems failed, as they did not understand these women’s unique challenges.

The guidance offered by large language model AI systems is likely to be either too general or simply not applicable to the local context of many climate-affected communities. Furthermore, the feedback mechanism in the supervised learning process becomes less effective, as it is challenging to measure and correct the extent of inaccuracies or irrelevance in such generalized or inappropriate solutions. These challenges are mutually reinforcing and could lead to lower adoption rates and trust issues regarding the information provided by AI interfaces.

A good example of this arose in MSC’s work with an AI-driven agri-advisory app, which we have been testing with farmers in Bihar. There, we found the following issues:

  1. Compatibility of the application: We found a wide range of mobile phone models and Android versions, which vary with the farmers’ ability to afford them. Lower-configuration handsets and older versions of Android affect the performance and functionalities that the farmer can access through the app. This served as an important lesson for us for other digital projects, including the Digital Farmer Services (DFS) platform that MSC has been implementing in Bihar.
  2. Local dialect: The sensitivity of the voice detection functionality to local dialects is an issue. The app struggled to identify keywords, which led to instances where the farmer needed to provide multiple inputs.
  3. Maturity of the apps: In the current state of the app, the quality of the prompts decides the quality of the output. If the prompts are not written properly, the farmer gets basic and generic advice, which is not helpful. The app’s responses may not be relevant in some instances, such as when the farmer does not know of a new pest or disease or if its name is in a local dialect that the app cannot understand. Such examples highlight the LLMs’ limitations.
  4. Appropriate learning data: We wanted to conduct a similar experiment in Bangladesh. Yet, despite the app being already available in West Bengal, which shares a common language with Bangladesh, the cost to retrain the app for Bangladeshi agricultural policies, climatic conditions, value chains, and markets was surprisingly high.

Moreover, significant computing and storage resources are clearly needed to train these models, considering the large volume of data produced in local contexts across a region or geographic area. Additionally, these models may need to be enhanced with more neural nodes to preserve the accuracy of the results. Consequently, the cost of these resources is a major concern—particularly given the remote and “low-value” nature of many vulnerable communities.

Finally, the privacy and security of data significantly increase the challenges. Institutions and governments are still struggling to develop rules, laws, and frameworks for the responsible and ethical use of AI. Given this, communities or local government officials involved in LLA strategies are unlikely to trust a digital platform with their personally identifiable information, especially when they are uncertain about the accuracy of its results. Additionally, while people are still vulnerable to traditional phishing and malware attacks, the emergence of AI-generated deepfakes further complicates and intensifies these security issues.

Conclusion

AI could play an important role in supporting development initiatives in general and LLA in particular. However, as in all other cases, any AI-based solution or intervention is only as good as the relevance and authenticity of the data it is trained on. We will need to make very conscious efforts to include the voices of vulnerable communities, typically on the analog side of the digital divide, if we are to realize the potential of AI. Failure to do so will widen and deepen the divide. This is a challenge on which MSC is working—stay tuned for updates!