
Thread: Commentary #6: Global Warming

  1. #121
    Join Date
    Jan 2019
    Posts
    2,377

  2. #122
    Join Date
    Jan 2023
    Posts
    155

    Default

    adventures in chart-crime: making climate data look scary

    Another excellent dive with the cat on deceptive charting.

  3. #123
    Join Date
    Nov 2009
    Location
    South of France
    Posts
    3,057

    Default

    I made my bed, now I'll have to sleep in it...
    Obligatory captatio benevolentiae first: nothing of what you are going to read is an original production of my mind. I am just a nobody who reads, takes notes of things of interest and occasionally puts them together in a (hopefully) readable form, and that's the extent of my contribution.

    With that said, let's go. I am going to start with this ten-year-old Tweet (https://twitter.com/BarackObama/stat...477296988160):

    <quote>
    Ninety-seven percent of scientists agree: #climate change is real, man-made and dangerous.
    <unquote>

    I am not going to discuss the figures; that has already been done ad nauseam. What is interesting to me is that, from a scientific point of view, the only correct answer to this statement would be: so what?

    Science is not democratic; we've heard this many times in the last few years. The statement is correct, but not in the way most people believe. Science is not democratic because, when establishing the correctness of a theory, consensus counts for nothing. To give a famous example, the geocentric view of the universe remained the consensus for centuries, and with way more than 97% approval. When Ignaz Semmelweis suggested washing hands before performing surgery, his ideas ran bang against the consensus of the time, and he was committed to an asylum for his troubles. More recently, the prevalent view that an accumulation of amyloid proteins causes Alzheimer's has been questioned very seriously, after it emerged that the article that first proposed the theory was based on dodgy (maybe even fabricated) data.

    And let's not forget that a lot of genuinely innovative, ground-breaking scientific advances have been made by individuals who went against the consensus, the received wisdom; it stands to reason that, if you follow what everyone else is doing, you are unlikely to discover something new.

    The peer-review system that should guarantee the correctness of published articles is clearly failing to deliver (https://experimentalhistory.substack...-peer-review); emphasis on bibliometric indicators like the h-index skews research priorities (https://arxiv.org/pdf/2001.09496.pdf); sticking one's neck out with an unusual theory, or a critical review, might result in losing vital funding. I am not going to discuss these (or other) mechanisms in detail, but it's important to remember that they exist, and that they reward conformism and obedience to the accepted wisdom. They can also promote the selection of individuals who might put science at the service of other agendas (here's a recent, glaring example: https://twitter.com/laurahelmuth/sta...52315032698883).

    As a result, even in these modern times it's possible to develop consensus around wrong theories.

    Most people who deny the democratic nature of Science do so in order to shut out and repress dissent (what else do you expect, if you transpose a political concept into a foreign context?). This is bonkers. Science is a process, not an institution; you don't believe in science, you question it.

    And yet, an increasing number of people share the former point of view, and help (maybe unwittingly) transform Science into a secular religion, complete with dogmas, heretics and Inquisition tribunals. This process has some interesting consequences (and some truly scary ones, as you can imagine). Faced with natural disasters, society used to look for possible mistakes and begin remedial and preventative work; nowadays, local administrators simply blame GW, as if it were a malevolent divinity. Of course this is very convenient, as it exonerates them from any responsibility; who, after all, could fight against a god? At the same time, it represents a stunning regression to a distant, superstitious age.

    In the case of GW theory, this quasi-religious devotion to dogmas also seems to bring with it an unbalanced focus on avoidance, at the expense of mitigation efforts. I mean, if you were really convinced that droughts and floods were about to become more frequent, you would probably start building reservoirs and reinforcing levees. But this does not seem to be a priority, and most efforts are directed towards convincing whole societies to perform questionably effective gestures, which seem like a sort of ritual sacrifice, or declaration of allegiance.

    We'll see later that, in this context, GW science shares at least one important feature with religion proper. But for the time being, it's important to restate the main point: consensus is not very useful for scientific endeavour; it might, and usually does, orient the efforts of scientists in a certain direction of research, but it can say nothing about its correctness.

    When it comes to human relations, conforming to the prevalent view brings evolutionary advantages, even when that view is plainly irrational. Even in thoroughly modern contexts like financial markets, participants are better off being fashionably wrong than unfashionably correct.

    But capital-'S' Science is supposed to do away with these animal-level instincts, and to elevate itself to a higher plane of consciousness and rationality. Instead, the level of scientific debate on very important issues (remember CoVid?) has regressed to sectarianism and blind faith, and anyone who points this out, or tries to offer an alternative view, is treated like a pariah.

    I think this erosion of the basic principles of scientific investigation is going to have enormous and pernicious consequences.

    IPB

    (1/3)

  4. #124
    Join Date
    Jan 2019
    Location
    NT, Australia
    Posts
    192

    Default

    Oliver Stone's new documentary, Nuclear Now, is a bit disappointing in how much focus it puts on nuclear 'solving' climate change - never mind that nuclear just makes logical sense on its own.

    Granted, it did a reasonable job at summarising how much more efficient nuclear is compared to solar/wind and pointed out how little waste is produced, so if it converts a few renewables-cult members, I suppose it's a good start.

  5. #125
    Join Date
    Nov 2009
    Location
    South of France
    Posts
    3,057

    Default

    A non-rigorous interlude about models

    Models, and their predictions, are the core of the GW narrative; therefore, I think it makes sense to spend a few words about them.

    Let's hit the ground running: history is littered with examples of models that informed very important decisions and also turned out to be very wrong. Think of LTCM (one of its founders was the co-inventor of the option pricing model, FFS!), or of Imperial College's catastrophic predictions of CoVid deaths. Why should climate models fare any better?

    Ok, this might be too broad, so let me narrow it down a little: there is no model capable of reliably predicting the weather in Tuscaloosa in two weeks' time; but, somehow, climate models are able to plot the temperature of the Northern Hemisphere thirty years hence, to a tenth of a degree?

    This is probably still a bit gratuitous, but maybe not as much as it seems; it's not that solid either, as we will see.

    One of the main features of weather and climate models is that they are incredibly sensitive to the initial values of their variables (I think Barry Charles wrote about this a while back). In simple, imprecise words: if you launch two simulations of the same weather model with a small difference in the value of one of its inputs, the results will start to diverge, however tiny that difference is. And if you wait long enough, the difference between the two results can eventually grow as large as you want. This is called the Lorenz effect (from the meteorologist who first observed it), or the butterfly effect. Whatever the name, it does constrain how this type of model can be used.
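
    Since this is easy to see for yourself, here is a minimal sketch of the effect (my own toy code, not anything from the climate literature), using the Lorenz-63 system that started it all rather than a real weather model; a perturbation in the eighth decimal place of one input ends up dominating the outcome:

    ```python
    # Sensitive dependence on initial conditions, shown with the classic
    # Lorenz-63 toy system (NOT a real weather model).
    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Advance the Lorenz-63 equations by one Euler step."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])        # first simulation
    b = a + np.array([1e-8, 0.0, 0.0])   # second one, 0.00000001 "warmer"

    for step in range(1, 5001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 1000 == 0:
            print(f"step {step:5d}   separation = {np.linalg.norm(a - b):10.6f}")
    ```

    The separation is invisible at first, then grows until it is as large as the system itself; no measurement of the initial state is ever precise enough to prevent this.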

    For a start, even if your model is correct and complete (which it can't be, otherwise it's not a model, it's a copy of reality), you still can't use it to predict the future (beyond a small timescale). Let me explain.

    Imagine that some supernatural entity gives you the perfect model for Tuscaloosa; it includes all variables affecting the local weather, and their mathematical relationships are correct. To know if it's going to be sunny in two weeks' time, you just need to put in some initial values and turn the crank. Wonderful, right?

    Well, it would be, until you start looking at the "initial values" part. Take temperature, for example: what is the temperature in Tuscaloosa right now? You look at your mercury thermometer and read: 75F. Uhm... it's actually a bit more than that, the liquid clearly reaches a bit higher, so you take a digital instrument and measure again: 75.2F. Is this enough?
    Remember: if you launch one simulation with a starting value of 75F, and another with a starting value of 75.2F, the two simulations will evolve differently, even if the model is perfect; it's inherent in the model itself.
    So you start using more sophisticated instruments, and read the temperature with four, seven, twenty significant figures. But it's all in vain; the actual temperature at this very moment is defined by a real number, with an infinite number of digits, and you can't input that (I'll leave aside Nicolas Gisin's argument that real numbers are not physically meaningful). Whatever number you use, it's not going to be precise enough (and we haven't even mentioned instrument errors...), which means that, over time, your model will diverge from what is actually going to happen.

    A-ha! I knew it, models are useless and GW is a bunch of poppycock!

    Not so fast; models can still be useful. After all, people in Tuscaloosa and all around the world DO rely on weather forecasts, and with pretty good results. The key here is the timescale. Weather simulations with minutely different initial values will eventually diverge; but in the short run, their results (or trajectories, if you want to impress your audience) will stay close together, close enough to be useful as forecasts. And that's roughly how they are used.

    You take your empirical, imperfect Tuscaloosa weather model and assign a range of values to each of its variables; for each combination of values within those ranges, you launch a simulation. You plot the results of all these simulations over time. Then, at each time point in the future (twelve hours, one day, two days, one week...), you see where the trajectories aggregate.

    At 12h after time zero, probably over 95% of all results are close to a certain state, and you can take that to be the correct forecast with quite some confidence. At one day after time zero, results will be a bit more dispersed, but still bunched up enough to give a useful forecast. At one week, results will be quite scattered, and your forecast will only have low confidence. Beyond one week, you probably can't make any useful prediction, because the trajectories are all over the place. So your model is quite accurate in the short term, maybe up to a week, and then it becomes increasingly useless.
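
    For what it's worth, this ensemble logic is just as easy to demonstrate on a toy system. Here's a sketch of mine (again using Lorenz-63 as a stand-in for a real forecast model): perturb the starting point a hundred times, run everything forward, and watch the spread grow with the lead time:

    ```python
    # Toy ensemble forecast: many runs from slightly perturbed initial values.
    # The spread of the ensemble is what limits the useful forecast horizon.
    import numpy as np

    def lorenz_step(states, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One Euler step of Lorenz-63, applied to the whole ensemble at once."""
        x, y, z = states[:, 0], states[:, 1], states[:, 2]
        return states + dt * np.stack([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z], axis=1)

    rng = np.random.default_rng(0)
    ensemble = np.array([1.0, 1.0, 1.0]) + rng.normal(0.0, 1e-3, size=(100, 3))

    for step in range(1, 3001):
        ensemble = lorenz_step(ensemble)
        if step in (100, 300, 600, 1000, 3000):
            print(f"lead time {step:4d} steps   spread in x = {ensemble[:, 0].std():8.4f}")
    ```

    Early on the members still agree and the forecast is trustworthy; later they scatter across the whole attractor, and the "forecast" tells you nothing you didn't already know.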

    I don't think there is any reason why long-term climate models can't be just as reliable, at a certain timescale (and spatial resolution, of course). If, for a given range of macro-atmospheric values, over 95% of simulations aggregate around a certain outcome at the thirty-year time point, why shouldn't we have confidence in this result?

    I don't know the answer to this question, but I can speculate on it.

    Models are useful because they simplify a complex reality, and that makes them attractive. They also depict complex relationships in clear, unambiguous terms, another very attractive feature. But, ultimately, they stand or fall according to their ability to match reality.

    The current Tuscaloosa weather model is the culmination of a long evolution; it has been tested, and refined, every day, for the last few decades. Its ability to offer useful forecasts has been verified against thousands of data points (and, despite this, it's still useless for anything longer than a couple of weeks).

    Can climate models claim the same? Their purpose is to predict the climate thirty years out, but how many thirty-year periods have they been tested against? In general, if you are modelling a phenomenon that happens on a timescale T, don't you need multiple periods of length T to test it properly?

    Climate scientists and their models have been predicting extreme events for quite some time; a lot of those events did not happen (in the 70s a sizeable chunk of the scientific community feared a new Ice Age), or, if they did, not with the magnitude that was forecast. This seems to suggest that the climate models used to make those predictions were not very good and needed tweaking. Which is perfectly fine, of course, but every successive modification needs to be retested against reality, and that takes time.

    Yes, there is back-fitting, that is, running a new model on old data to make sure it can reproduce what happened in the past; the inference being that, if it can accurately predict the past, it will just as accurately predict the future. Is it facile to point out that this doesn't seem to work in financial markets (and God knows how much money is invested in those models)? So why should it work in this case? I am not saying it can't, but it will take some pretty persuasive arguments to prove it.
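
    To make the trap concrete, here is a deliberately silly sketch (the "temperature record" is synthetic noise I invented for illustration): a flexible model that reproduces the past essentially perfectly, and still produces nonsense the moment you ask it about the future:

    ```python
    # The back-fitting trap: a perfect hindcast, a useless forecast.
    # The "record" below is invented: a mild trend plus noise.
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(30)
    temps = 0.02 * years + rng.normal(0.0, 0.1, size=30)

    x = years / 10.0                      # rescaled for a well-conditioned fit
    coeffs = np.polyfit(x, temps, deg=9)  # over-flexible, "tuned" model

    hindcast = np.polyval(coeffs, x)
    forecast = np.polyval(coeffs, np.arange(30, 40) / 10.0)

    print("worst hindcast error  :", np.abs(hindcast - temps).max())  # small
    print("'forecast' for year 35:", forecast[5])                     # nonsense
    ```

    Matching the past is a necessary test, but on its own it cannot tell a model that has captured the mechanism from one that has merely memorised the record.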

    Also, I wonder if these models will run into the same limitation as local weather models: a horizon beyond which their forecasts just aren't useful anymore, no matter how much tweaking you do. Has anyone ever tried to determine this? And again, to determine that your model is not reliable beyond 50 years, should you not wait at least 50 years? Again, I don't know the answers, but I think the questions are worth asking; and if anyone actually does know, I would be interested to hear.

    I'll close with a reference to a wonderful (and concise) article by Sunetra Gupta. It talks about models, their charming properties, and the many subtle ways in which they can be misused. I think it's quite apt:

    Avoiding ambiguity | Nature


    IPB

    (2/3)

  6. #126
    Join Date
    Jul 2007
    Location
    North Texas
    Posts
    54,948

    Default

    Excellent post, IPB. See my new Commentary on Monday, about this very thing: Science vs The Science(tm).

  7. #127
    Join Date
    Jul 2012
    Location
    Los Alamos, NM
    Posts
    3,239

    Default

    @IPB

    Very thoughtful! Sorry if I’ve said this too much, but, borrowing from Feynman: “we have a lot of data about the past and very little about the future”.

  8. #128
    Join Date
    Nov 2009
    Location
    South of France
    Posts
    3,057

    Default

    Quote Originally Posted by IlPrincipeBrutto View Post
    And again, to determine that your model is not reliable beyond 50 years, should you not wait at least 50 years?
    About three seconds after hitting the send button, I realised that the above statement is incorrect. Worse, it's plainly contradicted by something I had written a few paragraphs earlier. I wish I had thought about it a bit more.

    IPB

  9. #129
    Join Date
    Feb 2020
    Posts
    2,439

    Default

    Quote Originally Posted by IlPrincipeBrutto View Post
    About three seconds after hitting the send button, I realised that the above statement is incorrect. Worse, it's plainly contradicted by something I had written a few paragraphs earlier. I wish I had thought about it a bit more.

    IPB
    I think the point there is that the weather is such a complex non-linear system that the number of operations needed for forecasting rises really fast (maybe even “exponentially”, but the COVID models have made a mockery of that word), so anything longer than several days would require computing power that we simply can’t produce.

  10. #130
    Join Date
    Jun 2023
    Posts
    1

    Default

    As a Chicago native I'll note that cold kills roughly four times more people than heat, so global warming is good: Cold weather kills far more people than hot weather -- ScienceDaily

