When it comes to artificial intelligence and weather, think like the Wright brothers.

On May 13th the National Academies of Sciences, Engineering, and Medicine (NASEM) Board on Atmospheric Sciences and Climate held a day of sessions entitled Enabling US Leadership in Artificial Intelligence for Weather. A webcast made the experience available to a wider audience, including yours truly. Well worth the watch! ICYMI, here’s the YouTube link.

The day consisted of four segments:

  • A brief introduction and overview.
  • NASA, DoE, NSF, and NOAA perspectives on “opportunities for federal leadership.”
  • A two-part session on “developing trustworthy AI-informed weather forecasts for end users,” emphasizing, respectively, (1) “utilizing AI tools to enhance forecast capabilities” and (2) “trust in AI for weather forecasts.”
  • A final section focused on “facilitating novel partnerships across sectors.”

A few takeaways from the day, to give the flavor: meteorological journals are seeing a meteoric rise in AI-based paper submissions. Numerical weather prediction (NWP) has been resource-intensive and the province of a handful of big centers, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) and NOAA here in the United States. By contrast, AI-enhanced or entirely AI-based forecasts are inexpensive and computationally fast. Services based on these tools are proliferating in the United States and globally, especially in the private sector. Analysis of their performance has been lagging, but shows that in some circumstances AI-based forecast skill scores are comparable to, and even beginning to beat, those of NWP-based forecasts. And in contrast to the slow but steady progress in NWP, AI-based forecasts and services across the board seem to be improving by leaps and bounds. However, before weather service providers and their users can fully trust AI-based tools (integrate them into public forecasts and private use, rely on them in the event of rare extremes, and so on), it is necessary to build a better understanding of how they work, their abilities and limitations across a broad range of circumstances, their stability over time, and much more.

The talks also identified clouds on this horizon. It’s unclear whether the current pace of progress can be maintained. A key constraint is the workforce. Demand for AI professionals is urgent in every field of human endeavor and every economic sector. Experts are in short supply, able to command higher salaries in fields other than the geosciences, and much higher salaries still in the big tech firms. The latter are hoovering up talent from government agencies and, most critically, from the university faculty needed to educate and prepare a new generation of students to enter the field. The peer review vital to quality-controlling scientific publications is time-consuming and unappealing to experts competing to innovate. And the evaluation and analysis needed to build trust are lagging, falling further behind week by week. Hence the final panel discussion’s emphasis on the need for non-traditional collaborations, especially spanning the public and private sectors, in moving forward.

A closing observation. In watching the video, I saw analogies to the early days of powered human flight, a subject that captivated me when I was a kid. Around 1900 the emergence of the petroleum industry and the development of the internal combustion engine were focusing minds. The newer engines were lighter and more powerful, making them potentially suitable for airborne as well as surface transportation. The race to be first to fly was on. The foremost scientists and inventors, some backed by major institutions (Samuel Langley and the Smithsonian come to mind), were in the hunt. But the winners were Wilbur and Orville Wright, owners of a bicycle shop. They saw that the big players were obsessively focused on power and lift. To the Wright brothers, the real issues were balance and control. They drew on their experience with bicycles and their observations of birds. Virtually all their patents and success derived from these insights.

The development of AI and its application for human benefit present a similar challenge today: the need to gain, maintain, and exercise control. The challenge presents itself at a range of levels, each of which must be addressed. At the most immediate level, the problem consists in prompting, that is, asking AI the right questions (see also this link). It takes skill and education to capture the full utility of AI for any application. But there’s a meta-level as well, facing AI just as it faced aviation: it’s important not just to control AI but also to control the uses to which it is put. Wilbur Wright died young (in 1912), but Orville lived to see the full extent of World War II. He had this to say at war’s end:

We dared to hope we had invented something that would bring lasting peace to the earth. But we were wrong … No, I don’t have any regrets about my part in the invention of the airplane, though no one could deplore more than I do the destruction it has caused. I feel about the airplane much the same as I do in regard to fire. That is, I regret all the terrible damage caused by fire, but I think it is good for the human race that someone discovered how to start fires and that we have learned how to put fire to thousands of important uses.

As with aviation, so with AI. The literature, and indeed actual experience with AI in warfare, are rapidly growing. “Control” of AI in this regard may well prove to be an oxymoron. When it comes to nuclear weapons, the world’s governments and leaders have so far shown a fragile form of restraint: stockpiling but stopping short of use. These efforts have been helped by the capital-intensive nature of the technology; only a handful of nations have the technological wherewithal and financial heft to participate. The situation is analogous to that of NWP in weather forecasting. By contrast, AI capabilities are widely and inexpensively available to practically anyone and everyone. Barriers to entry are low (as we’re seeing in the application of AI to weather forecasting). Control of AI in general, and in war in particular (extending to its trustworthiness in weather applications), will be achieved only if all parties make such control a strategic priority and invest thought and resources in the task, more broadly across society and at far greater levels than we see today.

Success is by no means assured.
