Harry in Central Park
So here we are, right on the steps of Belvedere Castle in the center of Central Park, just off of West 81st Street. And I am looking for the Weather Observatory, which Google tells me is right near here, but I've never been here. So we're going to try and figure out exactly where it is, and then we're going to walk up there.
Yes, that's me in Central Park trying to find the Weather Observatory. You may not know this, I bet a lot of New Yorkers don't, but they've been recording the weather here at the Observatory for over a century. In fact, it's one of the longest active weather stations in the country. While I'd never been to the Weather Observatory in Central Park before, I'm a huge weather nerd. Huge. I've always been this way. Growing up, I was obsessed with knowing when it was going to snow because what's better than a snow day? But it quickly went beyond that. I joined online weather forums when I was in middle school. I went to Penn State weather camp when I was in high school. I even purposely chose a college in New Hampshire that averaged over 60 inches of snow per year. To me, forecasting the weather seemed like magic. So over the years, I've taught myself about weather models and probability of precipitation. Heck, I've done a lot of forecasting myself. Which brings us back to my quest to find the Weather Observatory in Central Park.
Harry in Central Park
This is kind of ridiculous, in the sense that it's just right here. It's no bigger than a small room, almost, if it were outside. And it's just all these weather instruments, as people walk by almost not even noticing it.
My trip to the observatory was cool, but like I said while I was there, it was so odd to see all these people walking by without even noticing it. And in many ways, I feel like that works as a kind of allegory for much of our relationship with weather forecasting. You see, over the years, as I've followed forecasts and messed with weather models myself, I've noticed that for something that affects literally every human being, every single day of their lives, people have a lot of misconceptions about weather forecasts or don't really even know how they work, which is funny because people love the weather. In fact, a 2019 survey from the Pew Research Center found that of all the topics covered on local news broadcasts, weather was the thing most people said was important to their daily lives, which makes sense, right? If a forecast says it's going to rain later today, you're probably going to take an umbrella with you. But that's also why folks love to rag on forecasters when they get it wrong. Heck, part of the reason why I wanted to do an episode about the weather was a forecast that called for the, quote unquote, storm of the century, only for the storm to miss New York City almost entirely. So come hell or high water, by the end of today's episode, I guarantee that you'll understand how forecasting works, why it's never been better, and how climate change may affect our ability to predict the weather. I'm calling for clear skies, a light breeze, and a whole bunch of information about the art of weather forecasting. I'm Harry Enten and this is Margins of Error.
So I think everyone has a weather story from when they were kids or when they first became interested in the weather. Most meteorologists that I know have one.
This is John Homenuk. He's a meteorologist, storm chaser and founder of the forecasting blog New York Metro Weather, which he started back in 2008.
I remember very vividly, when I was seven or eight years old, being rushed to the basement by my parents because there was a thunderstorm or a possible tornado. And the memory that I have in my head is feeling really scared and helpless. That was a defining moment for me where I said, I want to learn about this so I don't feel helpless, when it comes to forecasting it and figuring it out, because I did not like that feeling at all, and I still remember it like it was yesterday.
So here's John now, supplying New York City with daily weather forecasts.
We've kind of built this community where people can learn and talk about the weather and there's so many people that are interested in the weather. And that's become a place where I can interact with people and kind of develop a relationship with them. And what I love about it is communicating it to people and giving people the opportunity to learn and understand it.
So I think it's a natural segue to this question, which is explain to me how forecasting works.
Meteorology is extremely complex, as I'm sure you know, and when I describe to people what it is, it is very literally, our job is to predict the future. And so in order to do that, we need to start by understanding what's happening right now. And so we will take a large amount of time in the morning to look at radar, satellite, all kinds of things that are available to us. Then we want to try to predict what's going to happen in the near term, so the next couple of hours. And we look at how things are trending, current observation wise, short term weather models start to come into play. As you start to get a little further out into the next day and the day beyond that, things start to get more complicated because weather models are, you know, attempting to simulate a fluid process, which is the atmosphere. And so we have to be very careful as you get further out from just a couple of hours: which weather model are you using for guidance, and how are you weighing that in your forecast? And you're trying to put together a picture of what makes sense moving forward, not just a couple of hours, but several hours to the next day.
Like I said, complicated.
And so when I explain it this way to people, most people start to garner an appreciation for the fact that just even getting the forecast right the next day is, when you think about it, pretty crazy. I mean, we're able to give an immense amount of detail as to what's going to happen tomorrow based on what we're looking at today and the technology that we have. So then when you start to stretch it out into the medium range, which I consider days like three to six from today, you start to get into a much more complicated conversation of how you're using the weather models and understanding the weather patterns.
When listeners hear weather model, what does that mean exactly?
So a weather model is very literally a code that is written to try to simulate the atmospheric processes. So it takes the current conditions that are ongoing right now and then tries to roll it forward, simulating all of the processes that we know exist in the atmosphere.
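To make that idea concrete, here's a minimal sketch of what "take the current conditions and roll them forward" means in code. This is my own toy illustration, not any real forecast model: a one-dimensional temperature field transported by a constant wind, where real models work in three dimensions with dozens of coupled physical processes.

```python
# A drastically simplified "weather model": take the current state (here a
# 1-D temperature field) and step it forward in time using the physics we
# choose to encode (here, pure transport by a constant wind).
import numpy as np

nx = 100        # grid cells around a periodic ring of "latitude"
dx = 1.0        # grid spacing
wind = 1.0      # constant eastward wind speed
dt = 0.5        # time step (wind * dt / dx = 0.5 keeps the scheme stable)

# "Current conditions": a warm blob of air centered at grid cell 20
x = np.arange(nx) * dx
temperature = 15.0 + 10.0 * np.exp(-((x - 20.0) ** 2) / 50.0)

def step(field):
    """Advance the field one time step with an upwind finite-difference
    scheme on a wrap-around domain."""
    return field - wind * dt / dx * (field - np.roll(field, 1))

# Roll the model forward 40 steps: the warm blob drifts downwind.
forecast = temperature.copy()
for _ in range(40):
    forecast = step(forecast)

print("warmest cell now:     ", int(temperature.argmax()))
print("warmest cell forecast:", int(forecast.argmax()))
```

Even this toy version shows the core loop every model shares: observe the state, apply the equations, repeat.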
Talk about the models themselves and sort of how you're able to weight them in the forecast, and just sort of figure out where the heck things are going.
Yeah, so, weather models, there are a bunch of them. I mean, there's the main ones, which are the GFS, the Global Forecast System, that's kind of known as the American model. There's a European model, there's a Canadian model, there's a U.K. weather model. Those are what we consider global models, models that are forecasting the weather across the entire globe. And beyond those, there are models that are more specific. Inherently, they're going to be wrong. They're going to be incorrect. And as meteorologists, our job is to understand what story they're trying to tell us and how to use each of them individually to kind of put the pieces of that puzzle together.
And I can say this from experience. Taking what the models are telling you and using them to make your own forecasts, well, that's where forecasting becomes an art.
The problem, I think, for meteorologists, and the challenge, is understanding how these weather models work. So each of these models almost has a personality of its own. They have biases. They tend to handle different types of weather systems in certain ways. And, you know, for example, the GFS usually is way too progressive with coastal storms. It likes to slide them out to sea too fast. That's been a bias in the model for a decade now. As meteorologists, we have to factor all of this into our forecasts.
I think that makes sense. Let me ask you this question. Why and how have our forecasts gotten so good?
Well, that's a multifaceted answer. The best way I could answer that, I would say, is I think people have taken the time and invested a lot of time into understanding the atmosphere. And, you know, weather models are one thing, right? There are amazing minds working on producing these weather models, which are incredible, right? They've gotten so detailed. I keep mentioning we can predict individual thunderstorms, things like that. An incredible technology boom has helped us a ton. We have a lot of firepower behind these weather models. But I always go back to the fact that people have taken the time to understand how to use these models as guidance, properly. And so our forecasts have gotten good because we've gotten better at communicating it to people, communicating to the public, which has led to these forecasts just being more reliable overall.
If forecasting has improved so much, why are we still getting forecasts wrong sometimes?
Yeah, I mean, I think it's a very fair question to ask. These things are complex, and weather prediction has improved so much everywhere. But we're still learning. The big storms are still very complex and very difficult to figure out, where it's just, you know, the joke sometimes in the thunderstorm community is a farmer can sneeze in Oklahoma and change the whole setup. And it's kind of like when you're watching a storm coming up the coast, that could potentially be huge or could not be, it's just a tiny little thing makes a difference. And so the big thing in meteorology is we try to do meteorology, not modelology. We don't want to just look at the weather models and use them only, because that's how you get yourself into a bad place. And so with the big storms, you almost have to sit down. I remember during January 2016, I sat down and made a list of reasons why the storm would miss New York to the south from a meteorology perspective. And I had just as many reasons that it would hit as reasons that it wouldn't. And so it's those storms that are right on the periphery that I think, you know, it's going to be a while before we can confidently say, oh, this one's definitely going to hit New York. It's just too complex and too intricate for us to get to that place.
So I guess the obvious follow-up to that is, you know, as I look towards the future of weather forecasting, is it really possible that we're ever going to be 100% accurate all the time with our weather forecasts? And obviously, that differs from day to day. But take that question where you might want to take it.
Yeah, I go back and forth on this. You know, I don't think so. Just as recently as last month, we had a thunderstorm event in the Midwest where every weather model that we had, and these are some of the best, most powerful weather models, the highest resolution, they all said that no storms were going to form. And at 4:00 pm, lo and behold, a huge thunderstorm forms. And it's just a reminder that we still have a long way to go, because the weather models are trying to simulate a fluid process in the atmosphere. And it's extremely difficult to do. We've come a very long way, but I don't know that we'll ever see a point where we can be 100% confident in just letting the weather models roll and meteorologists taking the day off.
So maybe we'll never get to 100% accuracy with our weather forecasting, but that doesn't mean we can't improve it, right? And after the break, I'll talk with a trio of meteorologists about what they see as weather forecasting's big problem. Plus, I'll tell you about the thing that inspired me to make this episode: the 2001 storm of the century that wasn't. That's after the break.
Hey, folks, welcome back. So hopefully you now have a better understanding of just how complicated weather forecasting really is. Of course, not everyone does. And so when meteorologists get a forecast wrong, well, lots of folks tend to be snippy.
I've had people say it must be nice to work in a field where you can be wrong 50% of the time and still be paid, when, in fact, as you well know, we're right most of the time. I think there's a perception somehow that we're wrong because people tend to remember the occasional bad forecast that maybe wreaked havoc on their cookout or their son's soccer game.
This is Marshall Shepherd. Among many other titles and accolades, like more than a decade working at NASA, he's the director of the University of Georgia's Atmospheric Sciences Program and a former president of the American Meteorological Society.
One of the things that has continually amazed me in my career, Harry, is that you have people that get very angry about occasional poor weather forecasts or doubt climate predictions from experts. Yet they ask me with a serious face what I think of a groundhog's forecast for spring. I say it's a rodent. I mean, it has very little skill. But, I mean, there are people with a straight face that ask me about the groundhog or almanacs and those types of things, but then dismiss sort of occasional sort of science based modeling and so forth. So there is an improbability and an irrationality that I have found in how people consume or see the weather.
I've seen it too. Heck, just check Twitter any time it rains when a forecast has predicted a sunny day. But for Marshall, well, there's a way to fix this problem. And it's not about raising public awareness of how forecasting works.
So much of my career, whether it be at NASA or even now, as the research I do at the University of Georgia, has been developing capacity to better understand weather processes so that we can predict them better. But in that time, I've also developed some degree of expertise in communicating aspects of weather and climate from a risk perspective. And I know that's something that you've been quite interested in as well, because, you know, one of the things that I'm known for saying is we can have the best satellites and radars and computer models in the world, but if the end game forecast doesn't get to the person in a way that they can use it, or the company or the government agency, was it a good forecast at all?
I have to ask, how can we make forecasting and weather news more understandable and digestible for regular folk, do you think?
It's a good question. I think the next great revolution in weather forecasting is not the next great radar or satellite or model. It's in social sciences. You have so many more communication psychologists and sociologists now working at the intersection of weather and communication, because we know that people consume colors a certain way. We know that some people can't locate their home on a county map. They can't identify the county that they live in. And so if you're issuing a tornado watch or warning for a county by posting it on a website somewhere or on their phone, and they can't locate their county, is that a good metric or a warning? Even in terms of the types of language. If you look at the Storm Prediction Center, when they issue these storm outlooks, they use language like enhanced and moderate. And some people say that they're counterintuitive in terms of how people interpret those things. So I think this focus on sort of how people consume what is communicated will move us forward.
Now, this was something that came up during every single interview I did for this episode -- that people will think forecasting is better when meteorologists get better at communication. And different meteorologists have different ideas for how to do this. Like take this question I posed to John Homenuk, who runs the New York Metro Weather blog. Something I'm asked very frequently is essentially "okay, when a forecast says there's a 40% chance of rain or a 50% chance or a 60% chance, people go, what does that mean exactly?"
Yeah. So I want to start by saying that we've gone away from that so significantly in our products because of the different understandings of it. I personally have several different understandings of it. I think it can mean many different things. You know, I take it very literally, like if I'm putting a forecast out for New York City and I say there is a 70% chance of rain today, the way that I use that is there is a 70 out of 100 chance that you are going to see rain today in New York. That's kind of the way that I do it. Other meteorologists have come to me and said, oh, actually it means 70% of the area is going to see rain. And so for me, it's like, okay, this is all way too confusing, right? Why don't we use some words that people can understand a little better? It's like there's a chance of scattered showers this afternoon between 3 and 5 p.m. And people have taken to that and they've said, okay, that makes a lot more sense to me than 30% chance of rain for the entire day. And I can plan my day accordingly.
So one path forward is moving away from percentages and making forecasts in the kind of conversational language that people use in their day-to-day lives. Another option? Tapping into the fact that out of all the conversation topics in the whole wide world, people friggin love talking about the weather.
It's fascinating. It's something they can become involved in with very little effort.
I've been with the Washington Post Capital Weather Gang for about thirteen years now. My title there is Information Lead. As you know, there's just an unending amount of data in the weather world. So it's a fun title.
The Capital Weather Gang has their own approach for communicating and contextualizing uncertainty in forecasting, via something they call "boom bust." Basically, if a storm seems likely but there are a variety of possible outcomes, they'll give a boom scenario: this is how the storm could give us eight inches. And a bust scenario: if this happens, we'll see clouds and drizzle, but nothing else. This way, they can explain a few possible outcomes such that no matter what happens, people can better understand the forecast and the possibilities.
I think we still get a lot of people making fun of the Capital Weather Gang style because, you know, the forecast will be 4 to 8. But then if it busts, it's 0 to 3. And if it's a boom, it's 9 to 12. So people are like, "oh, so your forecast is 0 to 12 inches. Good job, guys." So there is, I think, still work to be done in that area. But around here, it's not that hard because the public is very sort of attuned and educated. So they're all sort of ready to nerd out with us. I'm always surprised how many weather geeks there are out there.
But for as much as people like geeking out about the weather and trust me, I get that, there are a lot of people that just want to hear exactly what the weather will be like every single day of their lives. And according to Marshall Shepherd at the University of Georgia, that's an attitude that needs to change.
One thing that I do want to take this opportunity to say is, we as a public have to be okay with preparing for an event that doesn't happen. What I mean by that is I see people get really angry if they are warned for a hurricane or warned for a tornado, and they make preparations and they come back and their house is still standing. That's a good thing. But because we took the time to prepare, there's this angst that, well, something should have happened. As I often tweet and say, "I'm okay with preparing for the worst and having the best outcome happen." I mean, that's why we have car insurance.
Which actually brings me to the thing that inspired this episode to begin with: John Bolaris and the supposed storm of the century. Meteorologists from Washington, D.C., all the way up to New York City, and Bolaris at the NBC station in Philadelphia, were calling for a huge, huge storm in early March 2001. We're talking 14 to 28 inches of snow, big. Bolaris was one of the first meteorologists to really pump up this storm. And the NBC station in Philly ran a crawl during sweeps week that said it could be one of the worst storms in the last decade. Well, the storm did hit, it did, just not where anyone was predicting. Northern New York and New England got 24 to 30 plus inches of snow, but of the major metropolitan areas in the mid-Atlantic, only New York City managed at least two inches. And Philly, the main focus of Bolaris' forecast? Well, from March 4th to the 6th, it got one whole inch of snow. The result was a major, and I consider really, really unfair, backlash against Bolaris. He got death threats, but somehow things went even beyond that. According to Philadelphia Magazine, Bolaris received pages torn from the Bible and a beer bottle stuffed with dead crabs. And that's not even the worst of it. Adding salt to the wound, Philly Magazine reported that a man urinated on Bolaris at a Philly bar saying, quote, "it doesn't look like snow." I've never seen a television weather personality treated the way Bolaris was. So the Bolaris example really has stuck with me all these years. It makes me realize how important it is to get the forecast right and how important it is to be humble, especially when we're encountering something unprecedented. Indeed, today, our entire climate system is seeing something unprecedented, which may be impacting our ability to forecast at large. We'll get to that after the break.
Hey, y'all, welcome back. So at the beginning of the episode, I shared a stat about how weather is the number one topic that people value on their local news coverage. But perhaps unsurprisingly, people do not have the same affinity for a related topic: climate change. According to a 2019 Washington Post poll, just 10% of Americans say they often talk about climate change with friends. However, according to a 2016 study from Pew, 71% of Americans say they've had a conversation about weather in the past week. Of course, this episode being all about weather, well, we'd be remiss to ignore climate change. And as it turns out, according to a recent study from Stanford, it may actually affect our ability to predict the weather. When I read about that, I had to learn more. So I called the lead researcher on the study.
My name is Aditi Sheshadri. I'm an assistant professor of earth system science here at Stanford. I have a couple of degrees in engineering, mechanical and aerospace engineering, and a Ph.D. in atmospheric science.
So basically, you're extremely smart. You took all of the courses that I was petrified of. And despite the fact that I had a huge interest in weather in middle school and thought I wanted to go and be a professional meteorologist, I saw all of the math, and I like math, but not this type of math. And you actually did it. Is that, I think that's a fair understanding of what you just said.
I have done a lot of math. Yes.
So why don't you just tell me some of the areas of research that you're most interested in?
So I'm very interested in sort of large scale problems of atmospheric circulation. So I think a lot about planetary scale things like the jet stream and the polar vortex, specifically. I wake up every morning and think about the polar vortex, really. And I've also become increasingly interested in climate modeling, because climate models are our best bet of seeing something about how climate will change in the future. And on sort of the more applied front, I'm also very interested in attempting to improve the accuracy of these models.
Unsurprisingly, climate models play a big part in Aditi's study, which looks specifically at the mid-latitudes. In the northern hemisphere, that covers places like North America, most of Europe, and almost all of China; in the southern hemisphere, it includes places like New Zealand, most of South Africa, and Argentina.
So there are two parts of the study. In the first part of the study, we just took a climate model. We brought it to different climate states. So we have an Earth, for instance, which is 16 degrees cooler than today's Earth. We have an Earth which is 16 degrees warmer than today's Earth, and a bunch of climates in between. We did this analysis of how quickly errors grew across all of these different climates. You just take your model, you run it forward, say, 20 times with slightly different initial conditions. You see when the errors stop growing, and you do that for each of these different climate states. So that's called error saturation. And that gives us some measure of how predictable weather will be in that climate. At the end of all of that, you realize that there's a pretty systematic relationship. If the Earth was very much cooler, then we would be able to say something accurate about the weather a little further out than we would today. And if the Earth were warmer, it's the other way around. And this window of accurate weather prediction narrows as you get warmer and warmer and warmer.
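The error-saturation procedure Aditi describes, run an ensemble from slightly different starting points and watch when the errors stop growing, can be sketched with a toy chaotic system. This is a minimal illustration using the classic Lorenz-63 equations as a stand-in for the atmosphere; it is emphatically not the study's climate model, and the step sizes and perturbation size here are my own choices.

```python
# Toy illustration of "error saturation": run a chaotic system many times
# from nearly identical initial conditions and track the ensemble spread,
# which grows until it levels off at the size of the system's attractor.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def ensemble_spread(n_members=20, n_steps=3000, noise=1e-6, seed=0):
    """Run an ensemble from perturbed initial conditions; return the
    standard deviation of x across members at every step."""
    rng = np.random.default_rng(seed)
    base = np.array([1.0, 1.0, 1.0])
    members = base + noise * rng.standard_normal((n_members, 3))
    spreads = []
    for _ in range(n_steps):
        members = np.array([lorenz_step(m) for m in members])
        spreads.append(members[:, 0].std())
    return np.array(spreads)

spread = ensemble_spread()
# Early on, the members track each other closely; much later, the spread
# has saturated -- no amount of modeling skill extends the forecast past it.
print(f"spread early in the run: {spread[99]:.2e}")
print(f"spread at the end:       {spread[-1]:.2e}")
```

The moment the spread stops growing is the predictability limit for that climate state; the study's finding is that this limit arrives sooner in warmer climates.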
When it comes to the overall results, is there sort of an explicit answer, would you say, about how much more quickly forecasts hit error saturation in warmer climates?
Sure. So I can give you a number, and then I will also say that it's dependent on the model, probably. So it appeared that for rainfall, it was about 0.3 days less in the window of accurate prediction for each degree Celsius rise in temperature. And a little bit less for things like wind, where it was 0.2 days per degree Celsius rise in temperature. So if you go up by three degrees Celsius, you get a day less in terms of rainfall.
Which may not seem like much, but according to data from the National Oceanic and Atmospheric Administration, Earth has warmed by 0.08 degrees Celsius every decade since 1880. By now, Earth's about one degree Celsius warmer than it was then. And the World Meteorological Organization warns that Earth's temperature will indeed continue to rise over the next few years. So even as our weather forecasting is getting better and better, well, we're also making things more difficult for ourselves.
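If you want to check those figures yourself, the arithmetic is short. A quick back-of-the-envelope sketch, where the end year of 2022 is my own assumption rather than anything stated in the episode:

```python
# Rough arithmetic on the figures quoted above: warming of about 0.08 C per
# decade since 1880, and roughly 0.3 days of rainfall-prediction window
# (0.2 days for wind) lost per degree Celsius of warming.
decades_since_1880 = (2022 - 1880) / 10   # end year 2022 is an assumption
warming_c = 0.08 * decades_since_1880
print(f"approximate warming since 1880: {warming_c:.2f} C")

for variable, days_per_degree in [("rainfall", 0.3), ("wind", 0.2)]:
    lost_days = days_per_degree * warming_c
    print(f"{variable}: roughly {lost_days:.1f} days of prediction window lost")
```

That lands on roughly a degree of warming, consistent with the "about one degree Celsius" figure above, and a fraction of a day already shaved off the prediction window.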
We are all very used to thinking about weather prediction. And if the weather forecast says it's going to rain tomorrow, I will probably take it pretty seriously. If it says it's going to rain ten days out, I'm going to take it kind of less seriously. If it says it's going to rain 14 days out, I'm not going to take it seriously at all. And so we all have this sort of instinctive understanding that there is a limit to the accuracy of the forecast. And what we showed was that the limit to the accuracy of the forecast changes based on the base temperature of the Earth. It's this really nice link between weather and climate. So in warmer climates, we expect a shorter window of accurate prediction. And in cooler climates, we expect a longer window of accurate prediction.
Is there any way to sort of future-proof our forecasts in light of these results? Or do we just need to embrace the fact that if our climate is perhaps, in fact, going to become warmer and obviously climate change can have different regional effects, I grant you that, but let's just say we're in a place where it becomes warmer, that our forecasts will become less accurate sooner. Is there anything we can really do about it?
I would think that there are going to be systematic improvements in our forecasting systems with time. But the point of the study is that this is an inherent limit. It's not something that we can do anything about necessarily. And I do hope that this is something that has opened up more questions than it's answered. So I hope that both in terms of my group and anyone else who is interested, there's going to be a lot more work on this front to try and make sense of the result as well as extend it.
I've been wanting to do a weather episode on this podcast since Margins of Error's conception. The reason is pretty simple. I love the weather and I know most of you do too. It's something that has a universal impact. Just now, my girlfriend told me that we needed to get going because her phone said rain was expected to start falling in 20 minutes. I, ever the stickler, had to check the radar and disagreed with her phone. Side note, I was right. That's one of the great and annoying things about weather forecasting. There's room for interpretation. It's partially art and partially science. And we haven't perfected how to exactly know what's going to happen. No one is Nostradamus when it comes to meteorology. The good news, and there is good news, is that our ability to forecast is getting better. Whether that something is as pedestrian as making sure a walk to get fro-yo is dry, or as serious as giving people enough time to take cover from a tornado. My hope is that this episode helped you understand why we should all care about weather forecasting and why it's so hard, and may only get harder. So next time your local weather person screws up the forecast, remember that usually and amazingly, they get it right. Coming up on our next episode, it's a bit of an understatement, but the world of travel is in a weird place right now, so we're going to take a look at some of the best ways to get out and about these days from cross-country train trips to the ultimate road trip. Plus, a look at what may be the biofuel of the future. That's coming up next time. Margins of Error is a production of CNN Audio and Western Sound. Our showrunner is Cameron Kell. Our producer is Savannah Wright. Production assistance and fact checking by Nicole McNulty. Mischa Stanton is our mix engineer. Additional support from Tameeka Ballance-Kolasny, Dan Dzula, Allison Park and Alex McCall. Our executive producers are Ben Adair and Megan Marcus. And me? Well, I'm Harry Enten.