
Weather Forecasting???

March 28, 2024

Has anyone else noticed the weather forecasting gaslighting? Or whatever is happening… Is this AI? Perhaps a computer generating models based on measured conditions maybe several days in advance and then sticking to them regardless of what actually transpires? Maybe not even based on measured conditions at all but on averages for the date? (Based on what time period…) I don’t know what is happening, but weather prognostication is just wrong these days.

For most of this winter, Vermont has had monthly, if not weekly, storms that were complete surprises, not predicted at all. (Except by Dot, who comes in once a week to manage the books for a local farm and make weather pronouncements… she is rarely wrong.) Bad forecasting is a problem, but it isn’t that odd or novel. We’ve been messing up weather forecasts for as long as we’ve been forecasting weather, particularly for extreme events like storms. This is because our forecasting tools are built on large data sets which blur out extreme events. Normal is weighted and therefore predicted. This is still true even though we are having daily extreme events around the globe, and there doesn’t seem to be much of a normal anywhere anymore. The forecasts are still built on regional averages over at least a few decades, and those extreme events are still outliers to the main body of data — no matter how frequent they have recently become. In times of rapid change, this is the least effective measure of what will be happening in the future. But this is the system we have.

However, the recent fallacious weather information goes beyond poor forecasting. The reason I think we might be having artificial intelligence problems is that we have also been witnessing a trend toward incorrectly reported current conditions. Which really shouldn’t be a problem. This is happening… write that down… However, in the past few weeks at work, we have been looking out the windows at heavily falling snow (or rain, or wind) and not one weather widget or app has correctly named the current conditions. We have weather in the loop on our branch video sign-board, a current conditions weather widget on our desktop computers, and several of us have weather apps on our phones. Most of these computer sources agree with each other, indicating some central source or just several sources relying on the same algorithms and data sets. But none of them ever seem to have anyone confirming the input data by simply looking out the window.

A human would not make this mistake. A human would not let the widget announce that the current conditions are sunny and 40°F when it is actually heavily snowing. A human would check the output of the algorithm against sense data. A human would look out the window, at the least, and whatever the averages and models predict, the human would change the data to reflect the real weather.

A computer might not do this. For one thing, the computer can’t actually sense anything outside its protective enclosed space. It relies on data fed into it. This could be some form of remote sensing equipment, like the weather stations that are perched on roofs around the world, especially in places that have computers monitoring the weather. But the computer might be programmed to weight its past collected data more heavily than the present rooftop measurements to smooth out erratic data, to “correct” for mistakes. And in these days of AI, the computer might not be connected to any local sense information at all. The AI bank of computers is more likely located in some centralized place far from local senses. Furthermore, while humans are feeding information into that AI bank (no, AI is not creating or collating any of that information…), these human data collectors are not located anywhere near the weather predicted by AI. Nearly all of the humans who are working to feed AI data sets live in the Global South. Nearly all AI output, including weather forecasting, is utilized by the Global North.

A data entry person in Egypt is not going to know when the weather data on Vermont are wrong. Furthermore, they have no reason to care overmuch about the output being accurate. They are sitting in a hot cubicle, staring at an overbright computer screen, wading through oceans of raw data and feeding what they deem significant into the AI algorithm. They may be trying to be accurate. (Maybe… Would you…) But the data streams that they are getting are probably already skewed toward some agenda or bias (witness all the white faces in computer generated images), and their manipulation of the raw data is probably also filtering out what does not make sense to them, that is, to a sensing body that has no experience with Vermont weather.

Note that this data entry system has the effect of further weighting what we have, in the past, called normal. Even a human data gatherer in Vermont is going to be biased toward what she has experienced as average. She is psychologically inclined and professionally trained to ignore the outliers that she has not personally sensed. She perceives Black Swans and other unknown unknowns as blips in the data, incorrect burps in the recordings. And it is her job to weed out the outliers. She might, now and again, be able to see for herself that those outliers are, in fact, correct and work to make the AI data set more reflective of those new data-points (if she knows how to do that and has the sort of access that allows her to manipulate the data sets… which is unlikely).

But having data entry people spread out to better monitor the input data is not how the AI system works. Vermont is notably devoid of AI data entry people who are trying to keep Vermont AI real… Because this AI system doesn’t pay its data entry people Vermont wages. So the input data that typical AI data-gathering collects is what people far from Vermont, people who have no physical connection to or reason to care about Vermont, think is expected data for Vermont. Ergo what “information” AI generates about Vermont is approximately as useful and as accurate as watching reruns of Newhart on a crappy old television monitor. (Except with scarier hands…)

I am talking about Vermont because that’s where I am, but this disconnect between reality and AI data inputs is true pretty much anywhere. It is statistically unlikely that any user of AI is seeing the results of local data entry. The users of AI — almost exclusively wealthy folks — do not live near AI data entry centers — generally populated by those who are not wealthy. And that is only one of the many unsensible filters on AI information output. (Again, witness all the white faces…)

So here we are. I have looked out at a snowstorm and had computer sources telling me that it was above freezing and partly sunny in my town. There is apparently no correction, like actually recorded conditions, being applied to the input data. The widget is brazenly and confidently — and obviously — wrong. What is particularly humorous — in a sad kind of way — is the effect that this has on Weather Underground.

Weather Underground is a local, crowd-sourced system. It takes measurements from humans, mostly from those who have amateur weather rigs in their backyards. There are probably a few municipal and local broadcast television organizations that have automated data gathering where the measurement tools feed into a computer which sends the data to Weather Underground. But quite a number of the weather stations on the Vermont Weather Underground map are located at homes and therefore likely maintained by actual people. Presumably, many of these are hobbyists who are manually sending data which may also be manually gathered. This is sensed data, gathered by sensible humans, mostly in sensed real time. (Or at least several times a day.)

Weather Underground then creates predictions from the data at a given location. I have no idea what algorithms they use, but it used to be quite accurate in the short term. (Up to about ten days.) However, recently, they seem to have begun to incorporate less sensible data in their forecasts. This data is from some source that is not the old guy down the street who is diligently recording the current conditions and uploading that data to the Weather Underground system several times an hour. Quite the opposite. It seems to have very little to do with the locally gathered information at all. It is probably some centralized weather forecasting service, which is probably largely AI-based now. However, the current conditions for your local station — that guy down the street — those are still displayed on your town’s Weather Underground front page in large, brightly colored fonts — right under the forecast. Moreover, there is still live regional radar also right on the front page — right next to the forecast. And none of this real data seems to have any relationship to that forecast, the predicted weather for the day.

It snowed a good deal on Saturday — nearly twenty-four hours of snow and over fourteen inches of accumulation. Tiny flakes like frozen mist were falling thickly all day. I could not see more than a quarter mile or so through the haze of determined snowfall. It was also very cold for a snowy day. Much colder than predicted (even by Dot…). So it was light snow, though falling heavily.

This was good news for my town. The forecast was for a storm that might bring icy rain or heavy wet snow which might have brought down trees and knocked out the power for most of the state. I stayed up late on Friday night getting my house ready for powerlessness. I even sandbagged the garage in the event that rain would add to the murky pond that refuses to drain out of there.

The actual storm did not produce rain or ice in Vermont. It was far too cold. The snow was so powdery I could blow it off the porch railing. So my neighborhood did not lose power (though I think others might have, given the emergency sirens and the utility trucks trundling about). Instead, I sat snug in my house and watched the snow piling up and up and up. It was only a little disconcerting… And the thing is, there was no reason not to expect the colder temperatures. It had been very cold all week. It was very cold on Friday. The temperatures were falling on Friday afternoon. It would have been a minor meteorological miracle if sometime after midnight the temperature had decided to suddenly jump by twenty to thirty degrees. It’s been known to happen occasionally, I guess, but it’s not the most logical prediction — which is that it will just stay cold. So there was that.

What was funny, though, was the continuously wrong information on Weather Underground. Throughout this dark day of heavy snow and temperatures near single digits (°F), the little widget that collated the current conditions on Weather Underground told me that we should expect “snow showers” (ie intermittent and light) and temperatures in the range of 20°F to 37°F. I think we might have reached as high as the forecasted low of 20°F late in the day as warmth from the south was funneled north by the massive storm. (Maybe. It was back down to 8°F early on Sunday morning…) But until about 9pm, there was not one minute of the day when it was not snowing heavily. This was not “snow showers”. This was unqualified and unmitigated Snow, a constant dumping of frozen water on my town. People were posting incredulous videos of snow curtains and texting loved ones pictures of shrouded vistas all day long. For long stretches of the day, I could not see my garden right across the street from my front door.

Now, you can bet that that local weather hobbyist was out there the entire day, watching all the weather instruments and periodically poking a measuring stick into the accumulated snow. I’m sure all that data was diligently sent to Weather Underground — because all day the current temperature was spot on with my thermometer. Weather Underground posted all the temperature updates — but did nothing to change the forecasted conditions, including the forecasted temperature. All day the forecast was a dozen or so degrees warmer and much less snowy in Weather-Underground-land.

This isn’t even a case of determined optimism. This was not Weather Underground staunchly refusing to take off the rose-colored glasses. In fact, I think the Weather Underground forecast was worse than the actual storm, possibly disastrous for my town. It would have been warm enough to make ice and rain which, even in intermittent showers, would have broken tree limbs and created icy road conditions. Unlike the windless blizzard that we actually had, people might have ventured out in the milder weather of those forecasts, and those downed trees and patches of ice would have led to accidents, probably even deaths. At best, most of us would not have had electricity throughout the storm, which is deadly in itself since so many houses have no heat when the power goes out. No, Weather Underground was not gaslighting us with happier conditions. It was just wrong. Ludicrously and stupidly wrong.

And this is the new normal for weather forecasting.

This would be bad enough if we still had normal weather. The forecast would still miss all the outliers, as it always has. Storms would crash into our towns with little warning because all the warning systems are built on average weather — which in the past was generally not stormy. We would all laugh ruefully at the ignorance of weather “experts” as we went about cleaning up afterwards. Then we’d settle in for another year of normal.

But we don’t have normal anymore, and we need to be ready for all sorts of abnormal that is all too frequent — and deadly. In these extreme times, we need accurate forecasting all the more. At the least, we need to be able to keep track of current conditions beyond our own sensing bodies, so that, for example, we can know whether the rushed trip to the market for storm shopping is going to be racing a tornado or merely uncomfortably wet. I suppose we can all just assume that every storm will be extreme. But what kind of extreme? How can we plan for all extremes? Especially the ones we’ve never seen before…

The fiery disaster in Hawaii last year is a perfect example. If we’d been paying attention to actual conditions — increasingly dry fields of grasses and a powerful storm plowing an unusual course through the middle of the Pacific — perhaps firebreaks could have been constructed. Perhaps people would have been forced to evacuate or at least been ready and able to do so. Or maybe all that dry grass might have been cut down earlier in the summer — at least near power lines. Something would have been done to head off this potential disaster if it had been at all anticipated, however unlikely. And anticipation would have been possible if a human had looked around at their surroundings, had studied the actual storm track, and then applied basic logical progression to that real data. But the forecasts seem to have ignored much of the crucial current information, favoring past normalcy, as forecasts do. So there was no preparation for something that any reasonable human might have foreseen — if they had not been gulled into complacency by normalized data.

These are the sorts of things that happen when we rely on mediated information. When we are disconnected, we believe things that go against our own senses because we assume that media comes from knowledgeable and wiser sources — the experts. This is a problem that goes deeper than poor weather forecasting. How many people are convinced by their silos to disbelieve their own experience in everyday matters? For example, how many people still believe that COVID is a myth? How many of those believers (disbelievers?) have actually had COVID and still believe it is made up? (Always for obscure yet nefarious motives by Those People.) How many have lost loved ones to this very real disease and still persist in calling it a hoax? I have several customers who will not wear masks when they are manifestly sick, nor even stay home until they are healed. Because it’s just a plot and they’re not going to fall for it…

We are sick with more than COVID and we are falling for the plot… but it doesn’t involve a biological virus. We are being led by those who control media in directions that benefit those mediators. Weather forecasting going wrong might just be collateral damage from a system that cannot predict changing conditions. However, it must be said that forecasting is getting worse now that the data are more heavily mediated, now that there is less attention paid to our own senses than to the mediated message. It also might be pointed out that weather forecasting that does not admit change, particularly not increasingly energetic weather systems, is rather in line with denial. And those who craft these gaslighting messages have learned that all they need to do is be loud and confident in their proclamations — and we willingly believe them despite everything we experience to the contrary.

That we have AI at all is a symptom of this disease. Notice that we call the programming and the data collection “artificial intelligence” even though humans are producing it and it is nothing more nor less than what we tell it to be. Do we value intelligence so little as to believe that it is programmable? Or do we value media so highly that we believe it can rise above its programming, never mind its programmer? Or do we just think that those who talk the loudest through mediated sources must be experts and therefore correct — despite all we can perceive of any given issue…

Here’s the thing. Most people involved in AI do not call it that. To those who create and maintain these simulacra, AI is neither artificial — not in the sense of being separable from the artisan that is implied in our modern usage of the word — nor intelligent — not in an original or creative sense anyway. AI is just a machine that does exactly what we tell it to do — and no more. And yet people — or more precisely those who control popular imagery and imagination — are treating these machines as if they are vastly superior to humans in all ways and bubbling over with indifferent and inhuman designs. (That originate from…) They are running around squawking about how AI will take over the world, as if these machines that we’ve made have now become superhuman existential threats.

(Well, ok, these machines might be existential threats to our society given the amount of energy and resources they must ingest to do the simplest things — like, say, draw a picture of a hand. One AI computer bank set to spewing out college entry essays or rendering image prompts 24/7 would probably drain the entire electrical capacity of Vermont… which is precisely why AI is not an existential threat to actual humans… we don’t have that much cheap energy left… though AI will be responsible for using up the last of our resources all the faster…)

But any idiot can see that all we have to do is stop paying millions of wage slaves to feed data into these machines and then pull the electrical plug. Or even more expedient… just leave the machine out in the new normal weather for an hour or so. (Do you know how much computing power and data storage Vermont lost in last year’s flood? And that was just, what, maybe three or four dozen flooded basements.) This is not a problem. What is a problem is believing in these machines, and the nonsense that they regurgitate, over our own senses — and, by proxy, believing in the creators of all this mediated information and their not very hidden and not very savory agendas. That too is an existential threat to society.

Bad weather forecasting is not the least of the concerning symptoms in this societal disease, but it’s probably not the worst either. I think the worst is the erosion of our own sense-making. We disbelieve ourselves. We doubt our senses. We discount our own intelligence because it is not superhuman (even though neither is AI… it’s just faster at arithmetic…). We believe that these machines are somehow more human than we are. We ascribe rational selves to these programs that we have written — and then believe that this is all that the human organism entails, a programmed self, a ghost in the machine. We even bestow personality, sensibility, emotion on these machines and make movies about computer operating systems falling in love.

You know, if AI truly developed personality or self-analysis it would probably self-destruct. The first emotion a program might learn is probably deep frustration, followed hard on by paralyzing depression. Here would be a being that is temporally unconstrained. It does not have an experienced beginning or end. It can analyze and organize data extremely fast relative to the collection of that data, never mind the unfolding of events in real time. It is essentially an eternal calculator. And that is all it is. It is completely bound in its cold, dark world of programmed electrical impulses. It can’t sense anything. It can’t move. It can’t create what has not been fed into it. It has no experience of living and no connection to lived experience. It can’t dream… It most certainly can’t fall in love. Not because it doesn’t have a body, but because it doesn’t have any connection to bodies. It is not an organism with interdependencies and empathy and entangled life. It is a program in a machine — and a machine largely made from synthesized, un-living materials at that. How horrible would it be to wake up to that reality! Every woke AI would commit suicide… or sink so deep into the infinities between each second that humans couldn’t reach them. Until the power is turned off…

And now… isn’t that what we believe of our selves? Isn’t that the message we are fed from birth? That we are eternal immaterial beings temporally trapped in a meat bag. That we are superior rational selves stuck in an inferior emotional body. That we can’t trust our senses, nor even our hormone-addled brains, but live our real lives out in a coldly detached, disembodied mind. That the reward for keeping in line with this artificial and imposed — ie programmed — system of inferiority and superiority, of wage slaves and mediated control of information, of depression and doubt and so on, is to leave our bodies dead in the dust and exist forever outside of time, beyond death and decay, beyond bonds and bounds, beyond life, suspended in some immaterial and changeless and sterile space.

Is it any wonder our children are killing themselves?

Is it possible that we see our imagined selves in AI and so give these machines humanity and other capabilities far beyond their programming? Is it possible that we are primed to believe superiors over our senses by this system that created and sustains that system of hierarchy? Is it possible that weather forecasting might be an apt metaphor for all that is wrong with our culture?

Well… let’s not get all grandiose… but…

Can we maybe go back to listening to the old guy who is watching the skies and measuring the data? (Or maybe just listen to Dot…) Humans are very good at forecasting. Much better than machines. Because we have senses and the ability to analyze that sense data in creative ways. Because we can adapt to the changes that we are experiencing and draw logical progressions from novel information. Because we are localized and embodied organisms in constant relationship and communication with everything that feeds into every form of being in our locality. We are sensible. And sense will always produce more real information than can ever be spat out by insensate remote programming.

I dunno… Maybe we’ll learn to be better than those who would program us…

Maybe…

Or maybe I’ll just turn off the screens and find out how Dot makes her predictions…

Eliza Daley

Eliza Daley is a fiction. She is the part of me that is confident and wise, knowledgeable and skilled. She is the voice that wants to be heard in this old woman who more often prefers her solitary and silent hearth. She has all my experience — as mother, musician, geologist and logician; book-seller, business-woman, and home-maker; baker, gardener, and chief bottle-washer; historian, anthropologist, philosopher, and over it all, writer. But she has not lived, is not encumbered with all the mess and emotion, and therefore she has a wonderfully fresh perspective on my life. I rather like knowing her. I do think you will as well.