tag:blogger.com,1999:blog-27584252.post7579629652281905571..comments2021-05-04T21:08:37.587+10:00Comments on Boy on a bike: CRU emails - some sleuthing required. Help!Unknownnoreply@blogger.comBlogger2125tag:blogger.com,1999:blog-27584252.post-51571465729433951822009-11-25T21:53:43.639+11:002009-11-25T21:53:43.639+11:00
---continued ....<br /><br />Just picture these poor fellows working with this data, trying to get it to behave in a particular fashion to produce a predetermined result, and the bloody data goes the other way! LOL. There must have been many sleepless nights. Funnily enough, having taken the trouble to think this through a little, I have a better understanding of the manipulation that is evident from the programmers' remarks about their code in some of the other material. These people have so completely emasculated the data that, in my opinion, the whole lot would have to be pretty well worthless. And funnily enough, in wondering where to go from here, I used Google to search for satellite records of global warming and up came the late John Daly’s web site <a href="http://www.john-daly.com/" rel="nofollow">Still Waiting for Greenhouse</a>. Scrolling down the page there is an entry “The Satellite Record 1979-2006" where he comments, “The newest and best way to determine global temperature is to use satellites to measure the temperature of the lower atmosphere, giving the Earth a uniform global sweep, oceans included, with no cities to create a false warming bias.” He’s right, of course. Any other temperature data has been so compromised as to be worthless and should be discarded. So we can forget about AGW (what about all the AGW jobs?).<br /><br />But back to the statistics. As a suggestion, how about asking for assistance from the <a href="http://www.numberwatch.co.uk/" rel="nofollow">Number Watch man</a>, <a href="http://users.ecs.soton.ac.uk/jeb/cv.htm" rel="nofollow">John Brignell</a>.
From everything on his blog, and given the importance of the material, who knows, he may be willing to advise.<br /><br />CheersWandhttps://www.blogger.com/profile/13784695856838507417noreply@blogger.comtag:blogger.com,1999:blog-27584252.post-29880509196051543152009-11-25T21:52:45.225+11:002009-11-25T21:52:45.225+11:00
First comment - both together give error ...<br /><i>What impact did moving from a Type I to a Type II error criterion have?</i><br /><br />BOAB, you need a good statistician to give you the answer. It’s a long time since I did any stats, and when I did I must admit I didn’t like them too much - but for the heck of it I looked up my Introductory Mathematical Statistics by Kreyszig, which used to be the standard university text (at least years ago). Here are some links to scanned images of the first 6 pages of Ch 13 that deal with the topic. <br /><a href="http://i329.photobucket.com/albums/l361/Onewandad/IntroStats_Page_1.jpg" rel="nofollow">One</a><br /><a href="http://i329.photobucket.com/albums/l361/Onewandad/IntroStats_Page_2.jpg" rel="nofollow">Two</a><br /><a href="http://i329.photobucket.com/albums/l361/Onewandad/IntroStats_Page_3.jpg" rel="nofollow">Three</a><br /><br />Briefly, statistics are used to test and reject a null hypothesis (as distinct from the alternative hypothesis, which is the effect you actually suspect exists). This is because you cannot strictly prove a general claim (there are always exceptions), so instead you set up a null hypothesis and try to reject it with the data. Once the null hypothesis is rejected, the alternative hypothesis is supported. And here comes the fun part - actually setting out what the particular hypothesis is to be. Once established, we test the null hypothesis with a set of data (most likely temperature data). Let me guess that the null hypothesis would be something like this: the temperature data from xx sources over a certain time period shows no change (i.e., no warming). We then test for that by looking at how far the observed results deviate from what the null hypothesis predicts (say some type of variance or trend statistic). If the deviation were consistently in one direction - say positive - and too large to be chance, then the null hypothesis would be rejected, ergo the temperatures are increasing.
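For what it’s worth, the no-trend null test sketched above can be written out in a few lines. This is my own toy illustration on synthetic data (the series, the 0.05-per-step trend, and the noise levels are all made up, and the 1.96 cutoff is just the large-sample 5% two-sided level) - it is not anything from the CRU files:

```python
# Toy version of the test described above: H0 says the temperature
# series has zero trend; we reject H0 when the least-squares slope is
# too many standard errors away from zero. All data is synthetic.
import math
import random

def trend_t_statistic(series):
    """t statistic for the least-squares slope of series against time."""
    n = len(series)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # residual sum of squares around the fitted line
    resid_ss = sum((y - (intercept + slope * x)) ** 2
                   for x, y in zip(xs, series))
    se_slope = math.sqrt(resid_ss / (n - 2) / sxx)
    return slope / se_slope

def reject_no_trend(series, cutoff=1.96):
    """Reject H0 (no trend) at roughly the 5% two-sided level."""
    return abs(trend_t_statistic(series)) > cutoff

random.seed(1)
warming = [0.05 * t + random.gauss(0, 0.1) for t in range(50)]  # real trend
flat = [random.gauss(0, 0.1) for t in range(50)]                # no trend
print(reject_no_trend(warming), reject_no_trend(flat))
```

With a genuine trend that large relative to the noise, the t statistic is enormous and the null is rejected; for the flat series it usually is not.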
However, some of the results would be expected to be random (noise in the signal, whatever), and so there needs to be a way of allowing for these in the overall analysis. So we choose a significance level - the probability of wrongly rejecting the null hypothesis purely because of random noise (see fig 13.1.1) - and my guess is that a 5% level would be used. That 5% is the Type I error rate, not the Type II. A Type II error is the opposite mistake: failing to reject the null hypothesis when it is actually false. It is more complicated to pin down, as my understanding is that it depends on the sample size, the size of the real effect, and the significance level chosen. Actually it could be quite large. (I think I am right here but I am no statistician).Wandhttps://www.blogger.com/profile/13784695856838507417noreply@blogger.com
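The two error rates in the comment above can be illustrated numerically. This is a deliberately simplified sketch - a toy test of “mean = 0” on invented data, using the large-sample 1.96 cutoff, with a made-up true shift of 0.5 - so the exact numbers are only indicative:

```python
# Simulate the two error rates discussed above at the 5% level.
# Type I rate: how often a true null is rejected anyway.
# Type II rate: how often a real effect is missed. Data is synthetic.
import math
import random

def rejects_zero_mean(sample):
    """Reject H0 (mean = 0) at roughly the 5% two-sided level."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return abs(mean / math.sqrt(var / n)) > 1.96

random.seed(2)
trials = 2000
# Null true: the data really does have mean 0.
type1 = sum(rejects_zero_mean([random.gauss(0.0, 1.0) for _ in range(20)])
            for _ in range(trials)) / trials
# Null false: the data has a modest real shift of 0.5.
type2 = sum(not rejects_zero_mean([random.gauss(0.5, 1.0) for _ in range(20)])
            for _ in range(trials)) / trials
print(f"Type I rate ~ {type1:.3f}, Type II rate ~ {type2:.3f}")
```

The simulated Type I rate comes out near the chosen 5%, while the Type II rate for a modest effect and 20 data points lands in the tens of percent - which bears out the comment’s point that it “could be quite large”, and that it depends on the sample size and the size of the real effect.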