I have sped up the history data routine

like way faster
I identified that it is the writing of the log files that is slow (as they are appended line by line)
and so I now store the data to be written in a stringlist, which is then written out in one go at the end
much faster
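
In case it is useful to anyone, here is a minimal Delphi-style sketch of the idea (the procedure and parameter names are placeholders, not the actual WD code): the history records are collected in a TStringList while they are being processed, and the log file is then written in a single pass instead of being re-opened and appended to for every record.

```
uses
  Classes, SysUtils;

// Hypothetical sketch: NewLines holds the formatted log lines that were
// buffered in memory while the history data was being processed.
procedure AppendHistoryLog(const LogFileName: string; NewLines: TStringList);
var
  LogFile: TStringList;
begin
  LogFile := TStringList.Create;
  try
    // keep whatever has already been logged to this month's file
    if FileExists(LogFileName) then
      LogFile.LoadFromFile(LogFileName);
    // append the whole buffered batch at once
    LogFile.AddStrings(NewLines);
    // one write to disk instead of one append per record
    LogFile.SaveToFile(LogFileName);
  finally
    LogFile.Free;
  end;
end;
```

The saving comes from doing a single file write rather than thousands of small appends.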

I just need to work out how to allow for when the file name changes at the change of the month
then I will need some testers (e.g. Davis VP)

Is that only for Davis history data?

It will be applied to other stations (it currently also works for ambientweather.net here in testing)

I thought this would have been very popular, but no one has said anything yet

Sorry, busy, I’ll check it out.

I have a VP2 and would test it. I’m not sure what this is or how it works. What file(s) are they?

I assume it means when data is downloaded from the logger and written to the data file after WD has been shutdown for a while.

It's available now in the build 81 .zip update
(note: the default method is used if the data starts in the month prior)
It currently works for Davis, WeatherFlow, and ambientweather.net. I will add in WMR200 support today
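
To illustrate that note about the default method (a hypothetical check only, not the actual WD code): before using the buffered write, the routine can test whether the first downloaded record still belongs to the current month's log file, and fall back to the original per-record append when it does not, since the file name changes at the month boundary.

```
uses
  SysUtils, DateUtils;

// Hypothetical helper: the fast buffered write is only used when all of the
// downloaded data falls in the current month; otherwise the default
// per-record method is kept.
function CanUseBufferedWrite(FirstRecordTime: TDateTime): Boolean;
begin
  Result := (YearOf(FirstRecordTime) = YearOf(Now)) and
            (MonthOf(FirstRecordTime) = MonthOf(Now));
end;
```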

you should notice a big difference

Also, this update adds the ability to turn off anti-aliasing for the graphs (so they look more like WD of years ago); see the graph setup, extra options

Update: test version for WMR200/300: http://www.weather-display.com/downloadfiles/weatherdisplaytest.zip

Tested before the end of the month. Copy of WD that hadn't run for some days, so it did a full VP2 archive download. Load phase seems faster :thumbright:

First two lines of data (from 30 Aug) repeated at the end of the logfile :frowning:

31  8 2018 18 58 89.5  19 41.8 29.791 1.0 3.0 135  0.000 0.000 0.000 20.984 85.6 <- last correct downloaded data
30  8 2018  0 20 89.3  19 41.6 29.792 0.0 0.0 135  0.000 0.000 0.000 20.984 74.0
30  8 2018  0 21 68.3  69 57.7 29.908 1.0 2.0 180  0.000 0.000 0.000 20.984 73.7
31  8 2018 19  6 68.2  70 58.0 29.909 1.0 2.0 187  0.000 0.000 0.000 20.984 73.7 <- first live data

I had not noticed that particular problem in testing here

I have done more testing here but cannot find any problems

Hi Brian,

That's great, will it work with my Campbell? Will it remove the delay of a few seconds when the minute changes?

By the way, being able to turn off the anti-aliasing for the graphs is very nice, thanks a lot 😁

I’ll test the full archive download again in a couple of days (with September data).

@Asperitas
No, it won't change that (which depends on what you have WD set to do every minute)

I just tested getting 18 hours of missed data,
and the second part of that processes the data so fast now… it's done in seconds (I do have an SSD though), whereas before it would have taken minutes

Ok, thanks for your reply Brian😁

Brian

Worked really well with WeatherFlow this morning. Really quick after being shut down overnight - THANKS! =D>

Cheers

:smiley:

MikeyM

Shut down WD for 30 mins and restarted after unzipping the test file. It read the history data OK and seemed to complete very quickly, BUT the graphs were not recreated and no history data was logged at all. Not only that, but no live data was logged at all until I went back to build 79 (with only the .zip update file). I've lost an hour's log data.

I actually don’t care how long it takes to process history data as long as I don’t lose any data - history or live. And you are well aware that I always lose live data when restarting WD. . .

But you were the one who posted that you wanted to know if this would work for the WMR200,
so I added it.
I will undo that.

With respect, I didn’t. I asked on behalf of the wider community because you only mentioned Davis testers.

Another VP2 full archive test today.

Previous shutdown was Sept 1st @ 16:55, now it's Sept 6th 13:00 - so the full 42-hour archive will be imported.

Time to get data from VP2 @ 19,200 baud = 2:35 min:sec
Time to load to WD = 2:30
Total elapsed time = 5:00

Same issue with the initial timestamps being duplicated at the end of the logfile; I'm guessing in the data file too, since the anomaly shows on the graph.


 1  9 2018 16 54 98.0  9 29.5 29.682 3.9 6.0 315  0.000 0.000 0.000 20.984 92.8
 1  9 2018 16 55 97.9  9 29.5 29.681 2.9 5.0 339  0.000 0.000 0.000 20.984 92.7 <- last line of old data
 4  9 2018 18 30 97.8  9 29.4 29.680 2.0 3.0 158  0.000 0.000 0.000 20.984 91.1 <- first line loaded from logger
 4  9 2018 18 31 95.0  18 44.9 29.683 1.0 2.0 158  0.000 0.000 0.000 20.984 91.1 <- this timestamp will appear again at end
 4  9 2018 18 32 95.0  18 44.9 29.685 1.0 3.0 158  0.000 0.000 0.000 20.984 91.0 <- this timestamp will appear again at end
 4  9 2018 18 33 94.9  18 44.8 29.686 1.0 3.0 158  0.000 0.000 0.000 20.984 91.0 <- this timestamp will appear again at end
 4  9 2018 18 34 94.9  18 44.8 29.686 1.0 5.0 158  0.000 0.000 0.000 20.984 90.9
.
.
.
.
 6  9 2018 13  7 87.4  27 49.3 29.955 5.0 7.0 248  0.000 0.000 0.000 20.984 84.8
 6  9 2018 13  8 87.5  27 49.4 29.956 4.0 7.0 202  0.000 0.000 0.000 20.984 84.9
 6  9 2018 13  9 87.6  27 49.5 29.955 4.0 7.0 180  0.000 0.000 0.000 20.984 85.0 <- last good data loaded from logger
 4  9 2018 18 31 87.7  27 49.6 29.955 1.0 2.0 158  0.000 0.000 0.000 20.984 91.1 <- duplicated Sept 4th timestamp from start of download
 4  9 2018 18 32 95.0  18 44.9 29.685 1.0 3.0 158  0.000 0.000 0.000 20.984 91.0 <- duplicated Sept 4th timestamp from start of download
 4  9 2018 18 33 94.9  18 44.8 29.686 1.0 3.0 158  0.000 0.000 0.000 20.984 91.0 <- duplicated Sept 4th timestamp from start of download
 6  9 2018 13 16 94.9  18 44.8 29.686 1.0 3.0 184  0.000 0.000 0.000 20.984 91.0 <- start of live data
 6  9 2018 13 17 88.9  25 48.6 29.686 1.9 3.0 130  0.000 0.000 0.000 20.984 85.9
 6  9 2018 13 18 88.1  26 48.9 29.874 1.6 5.0 130  0.000 0.000 0.000 20.984 85.2

Note: Updated to reflect that the timestamps are duplicated but the data is not all the same :?