Some time ago, I devised a simple protocol for recording indoor air temperature and relative humidity levels in the Hawkins House, using what I often refer to as commodity dataloggers — relatively simple and inexpensive devices that measure and record these quantities, and from which you can download data to a computer. This past week, I implemented the same protocol at the Mansfield House.
Tracking how indoor temperature and relative humidity trend over time is important, because it gives a good idea of how a building envelope reacts to external weather changes. (My protocol also includes an outdoor recording, to profile external weather conditions.) For example, it was by following these metrics over time, through a variety of weather changes, that I concluded the cellar at the Hawkins House was the dominant contributor of indoor humidity, even more so than the infiltrating outside air.
What’s nice about these commodity grade devices is that they’re inexpensive enough that you can instrument an entire house with them: one on each floor, say, and maybe one or two in specific, suspected problem areas. The downside is that these devices are not at all integrated, not even at the level of the reporting software they’re supplied with. (Of course, the more expensive, higher-end solutions I’ve looked at aren’t nearly as well integrated as I’d like, either.) But they’re affordable, and they can export their data in Excel or straight ASCII format, which is what really matters to me.
So, in a nutshell, here’s a summary of my protocol:
1) Commodity grade dataloggers are not officially certified for accuracy, so I informally check each one against my NIST-traceable, calibrated psychrometer over a few short test runs (usually ten to fifteen minutes; and yes, you can measure that interval with a kitchen hourglass, if only for the arcane symbolism), at several different temperature and humidity levels if possible. As long as the datalogger’s readings track reasonably close to the psychrometer’s (within three or four degrees or percentage points), I’ll go ahead and use it. If a datalogger is way off, I’ll return it (I’ve only had to do this once, actually). Absolute accuracy isn’t as important here as average trends, and the use of multiple devices introduces a degree of cumulative error in the final results anyway.
2) I designate one particular datalogger as the lead, or baseline, device. This is the device that tracks outdoor conditions and establishes the time dimension for the others. Currently, I’m using Extech’s RHT50 for this, since it’s the only datalogger of its class I’ve found that records ambient air pressure. To record external conditions, this datalogger needs to be “outside” but also protected. At the Hawkins House, I keep the lead datalogger mounted on the mudroom wall, near, but not too close to, a partly opened screened window. The Mansfield House is a different story; there’s no such convenient location. So for now, I’ve mounted the lead datalogger on the lower casing of my extremely leaky rear door (we’re talking “visible light entering” leaky). Soon I’ll address the door’s issues, but for now, this is where the baseline datalogger goes.
3) The remaining dataloggers (each is an Extech RHT10, which only records temperature and relative humidity) are assigned to specific locations throughout the house. One definitely goes in the basement, but not too close to the heating appliances. I also try to have at least one datalogger on each floor, usually in some fairly central location and sufficiently distanced from any hot air registers or incoming sunlight. It’s also important to track humidity levels in the attic (even if not targeted for inclusion in the conditioned space), so the attic gets a datalogger, too. I mark each datalogger with a small label indicating its location to avoid any confusion, and also to ensure consistent measurements per location.
4) All of the dataloggers are then programmed for the same sampling interval and number of samples; for example, 5000 samples, 15 minutes apart. Unfortunately, these dataloggers can’t be started programmatically; each requires a button push to get it going. So I gather all the dataloggers together in one place, start the lead (outdoor) datalogger first, and then quickly start the rest. As long as every datalogger starts within ten or twenty seconds of the lead, then as far as I’m concerned, they all conform to the lead’s time series. Then I take the dataloggers to their respective locations. Since I routinely discard the first hour of samples to eliminate startup transients anyway, starting the dataloggers away from their target locations is not critical.
5) Finally, I’ll gather the dataloggers together after their sampling period has expired, or sooner if events warrant (like after a severe storm whose effects on the house I’d prefer to see sooner rather than later). I’ll download their data files and review the lead datalogger’s report first, noting any interesting conditions or trends, and then view the other dataloggers’ reports for those same points in time. I’ll also archive the data files and note in a spreadsheet log book the measurement period; the maximum and minimum atmospheric pressure, temperature, and humidity; and any significant happenings (like severe storms, prolonged rain, or consecutive days of clear weather with unusually low temperatures and humidity levels). Then I’ll reset and restart the dataloggers, and begin the measurement process all over again.
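A few of the numbered steps above lend themselves to small programmatic checks once the data files are exported. Here’s a minimal Python sketch of three of them: the informal tolerance check from step 1, the discard-the-first-hour trim from step 4, and the max/min figures that go into the spreadsheet log in step 5. All of the readings, tolerances, and timestamps below are made-up illustrative values, not actual Extech data.

```python
from datetime import datetime, timedelta

# Step 1: informal accuracy check against the reference psychrometer.
# The readings and the tolerance here are hypothetical.
def within_tolerance(reference, measured, tolerance):
    """True if every paired reading is within tolerance of the reference."""
    return all(abs(r - m) <= tolerance for r, m in zip(reference, measured))

psychrometer_rh = [45.0, 47.5, 50.0]   # percent RH, from the psychrometer
datalogger_rh = [46.2, 48.0, 52.1]     # percent RH, from the device under test
print(within_tolerance(psychrometer_rh, datalogger_rh, 3.0))  # True

# Step 4: build the nominal time axis from the lead's start time and
# sampling interval, then discard the first hour of startup transients.
def trimmed_series(start, interval_minutes, samples):
    step = timedelta(minutes=interval_minutes)
    stamps = [start + i * step for i in range(len(samples))]
    skip = 60 // interval_minutes          # samples in the first hour
    return stamps[skip:], samples[skip:]

start = datetime(2012, 4, 1, 9, 0)         # hypothetical start time
temps = [68.0, 68.5, 69.0, 69.2, 69.1, 69.3, 69.4, 69.2]
stamps, temps = trimmed_series(start, 15, temps)
print(stamps[0])                            # 2012-04-01 10:00:00

# Step 5: the max/min figures for the spreadsheet log book.
print(max(temps), min(temps))               # 69.4 69.1
```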
I’m currently contemplating writing my own script to reduce and combine the collection of exported data files, aligning all the samples along the same time axis (the one defined by the baseline datalogger). Once I’ve done that, I’ll drop the reduced, normalized data set into some graphical reporting package. It would be really nice, for example, to view all this information as several stacked three-dimensional contour plots, rather than just as a set of superimposed two-dimensional curves. But that’s a project for another day…
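As a starting point for that reduction script, here’s a rough sketch of the alignment step, assuming each exported file can be read as CSV with a timestamp column. The file contents and column names below are invented for illustration, not the actual Extech export layout.

```python
import csv
import io

# Read one exported file into {timestamp: reading}. The CSV layout
# here (timestamp plus one value column) is an assumption.
def load_series(csv_text, value_column):
    return {row["timestamp"]: float(row[value_column])
            for row in csv.DictReader(io.StringIO(csv_text))}

# Hypothetical exports from the lead (outdoor) and basement loggers.
lead = load_series("""timestamp,temp_f
2012-04-01 10:00,55.0
2012-04-01 10:15,56.1
2012-04-01 10:30,57.0
""", "temp_f")

basement = load_series("""timestamp,temp_f
2012-04-01 10:00,61.2
2012-04-01 10:15,61.3
2012-04-01 10:30,61.1
""", "temp_f")

# Align everything to the lead datalogger's time axis; a logger with
# no sample at a given timestamp contributes None.
combined = [(t, lead[t], basement.get(t)) for t in sorted(lead)]
print(combined[0])   # ('2012-04-01 10:00', 55.0, 61.2)
```

From here, each row of `combined` is ready to feed into whatever graphing package ends up doing the stacked-contour duty.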
Twitter buddy and fellow woodworker and home energy guy John Nicholas and I had an exchange regarding event dataloggers, which record and timestamp certain types of discrete events (for example, the turning on or off of a low-voltage device, such as a thermostat), rather than ambient conditions. We found a nice example of one here. It’s very similar in form to the USB dataloggers I’ve been using for pressure/temperature/RH. Expect some postings on software-level integration of event dataloggers into my overall home environment monitoring solution in the near future.