Towards AI for Climate Services
On Thursday 22 February 2024, the Met Office organised a workshop in collaboration with Nesta and CNR to explore the impact that artificial intelligence (AI) can have on climate services. This is a very timely topic, given the enormous advances of AI in recent years, the growing interest in climate services, and the multi-faceted demands of climate change adaptation.
The workshop brought together 30 participants at the Met Office headquarters in Exeter, UK. The audience comprised experts ranging from climate science to machine learning and collective intelligence. This diversity, together with strong engagement from the participants, enriched the discussions and made for a very successful event.
The workshop started with presentations from the HACID consortium members, who set the stage by reviewing the AI technologies currently employed in climate services and the need for participatory AI practices, whereby end-users are involved from day one in the design and production of AI technologies. Three pillars for a successful integration with climate services were highlighted: the technology must be at once credible, salient and legitimate. Credibility refers to the scientific adequacy of the technical evidence and arguments. Salience pertains to the relevance of the technology to the needs of its end-users. Finally, legitimacy concerns the integration of stakeholders’ divergent values and beliefs, the absence of bias, and fairness in treating opposing views and interests. When a climate service (and the technology that backs it up) fulfils all three requirements, substantial value can be generated. Hence, discussions during the workshop hinged on how AI solutions for climate services can provide sufficient levels of credibility, salience and legitimacy.
Current developments in AI for climate services focus on automated question answering with large language models such as GPT-4, exploring how to provide accurate information based on authoritative sources (e.g., the IPCC reports for ChatClimate, or the UKCP18 publications for a prototype chatbot developed at the Met Office and the University of Exeter), as well as providing climate services based on climate projections for specific regions (ClimSight). In this landscape, HACID stands out as a distinctive approach that merges AI and human expertise to provide tailored solutions to specific service requests. Here, AI provides automated information extraction from large bodies of evidence, grounding solutions on authoritative sources and fostering credibility. Additionally, AI supports knowledge discovery and collective intelligence, generating collective solutions from the inputs provided by climate experts. To fully address the needs of the climate service community—and therefore ensure a sufficient level of salience and legitimacy—the HACID technology is undergoing participatory AI development, in which the proposed backend and frontend solutions are designed in collaboration with domain experts. For instance, the knowledge graph on which the collective solutions are built is designed with inputs from Met Office experts, and will undergo scrutiny by the community to audit the conceptual model and identify data gaps.
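Grounding answers on authoritative sources, as the systems above aim to do, essentially means retrieving the most relevant passage from a vetted corpus and citing it alongside the answer, so that every claim remains traceable. A minimal sketch of the retrieval step is given below; the sources, passages and function names are illustrative placeholders, not excerpts from any of the systems mentioned.

```python
import re

# Minimal sketch of retrieval-grounded question answering: each answer is
# paired with the authoritative source it was drawn from, so readers can
# trace and verify the claim. Corpus entries are hypothetical stand-ins.
CORPUS = [
    ("IPCC AR6", "Global surface temperature has risen faster since 1970 "
                 "than in any other 50-year period in at least 2000 years."),
    ("UKCP18", "UK projections indicate warmer, wetter winters and hotter, "
               "drier summers by the end of the century."),
]

def tokenize(text: str) -> set[str]:
    """Lowercase word set used for simple lexical-overlap scoring."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> tuple[str, str]:
    """Return the (source, passage) pair that best matches the question."""
    q = tokenize(question)
    return max(CORPUS, key=lambda entry: len(q & tokenize(entry[1])))

source, passage = retrieve("What do UK projections say about summers?")
print(f"[{source}] {passage}")
```

A production system would replace the lexical overlap with semantic (embedding-based) retrieval and pass the retrieved passage to a language model, but the principle is the same: the cited source, not the model alone, backs the answer.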
Starting from the examples of current developments and from the HACID experience, participants in the workshop were invited to reflect on the risks associated with the development of AI-based technologies for climate services, and on how these risks affect the credibility, salience and legitimacy of the proposed solutions. Several potential risks emerged from the discussion, some generally associated with AI technologies (e.g., accountability, biases, traceability), others more specific to climate services (e.g., understanding uncertainty, interpretability of the outputs within context, verifiability of the underlying data). In the second session, participants were invited to propose solutions or mitigation strategies for the identified risks. A breadth of possible approaches was proposed, such as the foundation of an independent authority to regulate, control and validate the use of AI for climate services, or the need for education about the potential and limits of the technology. Overall, the workshop produced a wealth of ideas and important material for future reflection. We envisage that it can form the basis for a roadmap for the integration of AI into climate services that can guide future development, within HACID and beyond.