It’s not my habit to write product reviews, because I’m rarely in a position to say anything about any product that hasn’t already been amply covered by other professional and/or amateur reviewers. The Duo Pro R camera manufactured by FLIR for thermal imaging from drones will be my first exception.
The Duo Pro R [640×512 pixel model] camera is a rather expensive (list $7,599) specialty product that came out in late 2017. To date I haven’t found a hands-on review of the camera by any other user, let alone by someone attempting to use it for scientific research. Yet I know from forum postings that there is considerable interest among prospective customers in its unique capabilities. So I’d like to share my experience in the hope that it will help guide others’ decisions.
This is a long post! If you want to skip the details, please jump to the conclusions for my bullet-point summary.
Disclaimer: I’m not a professional product tester or reviewer. The impressions I share below are based on my own early, limited, and unsystematic observations and may not be representative of other users’ experience. (I did send a message to a FLIR representative on December 4, 2018, inviting them to correct any erroneous or misleading information before I made this post public on February 26, but I received no response.)
We purchased the Duo Pro R (640×512, 45 degree model) in September specifically for an NSF-funded micrometeorology field experiment in Illinois called SAVANT. The Duo Pro R seemed to be an exciting new offering — a camera from the world leader in thermal imaging hardware, designed specifically for drones, that combines a professional quality (hence “Pro”) thermal infrared and RGB camera in a single convenient package (hence “Duo”). It also claims to yield true radiometric intensities (hence the “R”)—i.e., quantitative radiance values that could be directly related to absolute physical temperature provided only that one knows (or can empirically determine) the emissivity of the target and the magnitude of any necessary atmospheric corrections.
The combination of both RGB and thermal cameras in a single package seemed especially intriguing. In principle, this could allow overlapping RGB images from a drone flight to be used to retrieve precise camera positions and orientations in the course of photogrammetrically computing a surface elevation model. The coincident thermal images could then be accurately projected onto that surface model.
Our mission was to fly our fixed-wing Elanus Duo drone over a 100-acre study area consisting largely of harvested corn field, as well as some unharvested soybeans, and to map early morning gradients in the surface skin temperature. The thinking was that these temperature gradients, combined with the local topography, could be considered the primary drivers of the near-surface air density currents that SAVANT aimed to study.
The opportunity to participate in SAVANT—and thus to purchase both the drone and the Duo Pro R—emerged close to the start of the field phase in mid-September, so we had only a very short window to familiarize ourselves with both the drone and camera before leaving for Illinois.
In fact, it was preparing and testing the drone itself that occupied most of our attention, as we had never before flown a fast-moving fixed-wing drone on autopilot, and there was no room for either hardware or programming glitches. The thermal camera, we surmised, would “just work,” as long as it was pointed at the ground and taking images at the right times.
In the next few sections, I highlight the various technical challenges we experienced with the camera, roughly in the order that we experienced them.
Problem 1: Questionable triggering
Our original plan was to program the autopilot to trigger the camera at specified distance intervals during the flight, after it had begun its survey pattern. The technical setup seemed straightforward: have the Cube autopilot send a particular pulse-width modulation (PWM) signal via one of its output ports to the camera via a cable provided for that purpose, and configure the camera to be triggered by that signal.
The Mission Planner ground control software that talks to the autopilot via a telemetry link provides a means to ground-test that this mechanism has been set up correctly: simply click an icon on the laptop screen to trigger the camera via the autopilot and verify that the camera fired. And in fact, it fired—some of the time! But nowhere near 100% of the time!
The Bluetooth phone app for the FLIR Duo Pro R not only allows one to configure basic camera settings but also to monitor whether a trigger signal has been detected by the camera. On a number of occasions, I observed the camera reporting to the app on my phone that it was indeed seeing a PWM trigger signal on the correct channel but with no telltale beep and no image subsequently showing up on the SD card.
Interestingly, I found that triggering was seemingly more reliable if I configured the camera to interpret the incoming signal as a start/stop command for multi-image capture rather than as a momentary trigger for single-shot operation. Because I didn’t know why it was making this distinction (and FLIR tech support was baffled as well), I didn’t know whether I could count on this workaround in the air, especially as regards the timing of the images.
In the end, I couldn’t trust the camera to reliably respond to the autopilot’s triggers when in the air. It’s possible the issue was specific to Mission Planner’s software trigger command. But with less than a week to prepare for the upcoming field deployment, I didn’t have time for careful debugging. I decided that the safest plan would be to set up the camera for automatic one-second interval shooting (the minimum interval possible) and manually start it on the ground before launching the drone. The only real drawback would be the need to subsequently weed out hundreds of extraneous images.
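The subsequent weeding is at least easy to script. Here is a minimal sketch of the approach I have in mind: keep only frames whose capture times fall inside the survey window taken from the autopilot log. All the timestamps below are fabricated for the demo; in practice they would be parsed from each image’s EXIF data.

```python
# Keep only images captured during the survey itself; the camera was
# free-running at 1 Hz from before launch until after landing.
from datetime import datetime, timedelta

survey_start = datetime(2018, 10, 16, 7, 12, 0)   # from the autopilot log (fabricated here)
survey_end = survey_start + timedelta(minutes=25)

# Stand-ins for per-image capture times parsed from EXIF metadata:
# two minutes of pre-launch frames, the survey, and post-landing frames.
all_times = [survey_start + timedelta(seconds=s) for s in range(-120, 1620)]
kept = [t for t in all_times if survey_start <= t <= survey_end]
print(len(all_times), len(kept))
```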
Problem 2: Failure on power-up
Our first moment of real panic came just three days before departure, when our Duo Pro R suddenly began balking on power-up. Instead of going through the internal checks and emitting three beeps before moving to a ready state, it simply hung with a mysterious clicking noise coming from inside somewhere. It didn’t respond to the button controls, and it wasn’t reachable via the Bluetooth interface. This didn’t happen every time on power up, but often enough to give me grave doubts about counting on it in the field.
Fortunately, the third-party vendor that sold us the unit was very responsive and immediately FedExed us a loaner that we could take to the field experiment. Whew!
Except where clearly indicated, the camera performance described below pertains to the loaner, not the copy we actually own, which arrived back from repair some time after our return from the field.
Problem 3: File format
Up until this point, we had not been able to actually view any of the thermal images captured by the camera, though we could view IR imagery “live” by connecting the camera to an external monitor. The FLIR-recommended file format is a “radiometric JPEG” (or RJPEG) that includes both the RGB image and the thermal data in a single file. None of the run-of-the-mill software tools we normally used for opening and viewing image files knew how to get at the thermal data, though we could easily view the RGB images.
This was frustrating. We had to simply trust that the IR images were being successfully recorded (and were meaningful), and we especially had to trust that when the time came after the experiment, we’d find a way to retrieve the thermal data that were central to our participation in the field campaign. We were placated by the thought that FLIR was selling an expensive “pro” level infrared camera, and certainly they would have an interest in making the resulting data readily accessible to their end-users, regardless of the intended application.
In Illinois, the drone functioned beautifully despite my anxieties, and the FLIR camera dutifully captured nadir-viewing images from the belly of the drone at 1-second intervals. We had an afternoon test flight on October 15 followed by a sunrise research flight on October 16, yielding 340 and 361 images, respectively, all captured in a typical rectangular “lawnmower” survey pattern from just under 400 ft. above terrain level.
Upon our return home, our attention turned to the critical problem of getting our thermal data out of the files and doing science with it. This proved far more difficult than I anticipated. No general purpose image processing utility we could find knew how to unpack the files so as to retrieve the embedded thermal images. No online sleuthing yielded anything useful.
I vainly imagined that with some clever Python scripting I could isolate the portion of the file containing the 640×512 array of thermal data and unpack it myself. Though I could indeed see roughly where the data were stored (just look for a long sequence of unsigned integer-16 values in the low thousands), I could not figure out how to separate the raw data from the sporadic interruptions by some kind of metadata. At some point it became clear that the thermal data were stored in a TIFF image that was in turn embedded in the file metadata, but beyond that, I was stumped.
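To give a flavor of the byte-level probing I attempted, here is a minimal sketch, not a working RJPEG parser: scan a blob for the little-endian TIFF magic bytes. The demo bytes are fabricated; in the real files, the data following such a header were interrupted by metadata I couldn’t decode, which is exactly where I got stuck.

```python
# Scan a byte blob for candidate embedded TIFF headers ("II*\0").
# "MM\0*" would be the big-endian variant; my files appeared little-endian.
def find_tiff_offsets(data: bytes) -> list:
    magic = b"II*\x00"
    offsets, start = [], 0
    while True:
        i = data.find(magic, start)
        if i == -1:
            return offsets
        offsets.append(i)
        start = i + 1

# Fabricated stand-in for an RJPEG: JPEG-ish bytes with a TIFF header inside.
demo = b"\xff\xd8 jpeg-ish bytes " + b"II*\x00" + b" embedded tiff bytes"
print(find_tiff_offsets(demo))
```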
No problem, I thought; I’ll just ask FLIR tech support for the technical details of the file format. No can do, came the answer; the RJPEG format is proprietary. Really? I’ve given this some thought, but I still can’t come up with a plausible business-oriented explanation for making it hard for customers to read the output from their own $7,500 camera. Certainly the technical advantage of combining IR and visible images in a single file does not seem large enough to warrant the secrecy.
I was informed that if I wanted to unpack the files myself, I could use FLIR’s own Atlas software developers kit (SDK), which can be used in either a Matlab or a .NET/C# environment. I don’t have a Matlab license and didn’t want to pay for one that I would otherwise never use, so I grudgingly installed the free .NET/C# environment. It quickly became apparent that .NET/Visual Studio and C# are both unfamiliar enough to me that learning to use them, even just to unpack thermal image files, would cost more time and pain than I was willing to accept. From what I could see, the Atlas SDK/.NET/C# combination really appeared designed for developing commercial Windows apps, not for those of us with custom one-off non-commercial analysis tasks in mind.
I had already discovered by now that FLIR seemed uninterested in making things any easier for me: in response to the aforementioned inquiry about software tools, they had responded with a pointer to the Atlas SDK and a report-generation utility and, with their terseness, implied that I was otherwise on my own.
With more than a little annoyance given the time and money we had invested—not to mention our obligation to deliver a thermal product to our fellow scientists—I began to think of our collection of 700+ radiometric JPEGs as a write-only archive: straightforward to collect; difficult or maybe even impossible to utilize for its intended purpose.
Problem 4: The Pix4D saga
Just in time, it seemed, I discovered that Pix4D sold a commercial software package, Pix4Dmapper, that purported to work with FLIR’s proprietary image files. Best of all, it could automate the generation of high-quality orthomaps and even elevation models from a sequence of drone-captured images. Their support website gives the following advice:
Process dataset with both thermal and RGB imagery: Thermal cameras usually have much lower resolution than RGB cameras, and thus the 3D model is of much lower quality. The idea is to use the higher resolution RGB images to compute a detailed 3D model and to project the thermal texture on top of it. This greatly improves the final thermal 3D model.
Well, this is exactly what I had in mind all along: Thermal images mapped to a 3D model derived from the RGB images. Also, the same website mentioned the FLIR Vue Pro R camera as being supported. As I understand it, that model is just the thermal-only version of the Duo Pro R, so it seemed likely that Pix4Dmapper would have no problems dealing with the (presumably identical) thermal images from the new dual-band camera.
It also specifically discusses FLIR’s radiometric JPEG images: “This is a proprietary image format that is supported by Pix4Dmapper. RJPG is the recommended image format for thermal images.” It looked like I had come to the right place. I finally had a path to getting a thermal product out of my collection of proprietary files, and a professional quality mapped product at that.
I hesitated, though, when I saw the cost of an academic license for Pix4Dmapper: $1,990. If I’m going to pay that much money, I thought, it had better work! Out of caution, I signed up for the 15-day free trial. Unfortunately, I didn’t yet have a local Windows machine to install Pix4Dmapper on, and the parts for the one I planned to build had yet to arrive.
Fortunately, there is also a cloud-based implementation of Pix4Dmapper available to those with a license (trial or otherwise). So I uploaded the 340 files from the first field day to the site and clicked “process.” To my amazement and delight, it quickly generated a very nice—and seemingly accurate—orthomap (top) and elevation model (bottom) from the RGB portion of the images taken during the October 15 (daylight) test flight:
But to my disappointment, I could find no on-line option to also process the thermal images. As far as I could tell, this feature would only be available in a locally installed copy of the full Pix4Dmapper software package, and for that I would have to wait for my new Windows machine. By the time I had that new machine up and running, my free trial had run out, so I swallowed hard, forked out almost two grand for the license, and expectantly installed the Pix4Dmapper software.
To make a long story short, Pix4Dmapper simply could not process the thermal images. It would just quit ungracefully after failing to “calibrate” the images.
There were apparently two parts to this failure, according to tech support:
- Incorrect default values of camera internal parameters stored in the camera data base (Really? Doesn’t FLIR share that information with the company that advertises support for FLIR’s proprietary files?), and
- Insufficient contrasting features in the thermal images to allow Pix4Dmapper to match points of overlap and perform the necessary transformations.
I sent my day-two image set to tech support for them to take a crack at it, and they were at best marginally successful at getting more out of the images than I was able to. By “marginally successful,” I mean the program apparently didn’t crash outright. Certainly nothing usable resulted, as can be immediately seen from the grotesquely distorted Frankenstein patchwork of images previewed in the quality report (compare with the RGB orthomap above): [How is it, by the way, that the Pix4Dmapper software doesn’t look at the GPS-derived geolocation information stored in the image metadata, compare it with the final computed camera positions taking into account the pre-specified uncertainties, and say, “Hmmm, this can’t possibly be right!” If I were Pix4Dmapper, I would have raised a fatal error and crashed just to avoid the embarrassment.]
Pix4D tech support said that to have any reasonable hope of success, I would need at least 90% overlap between images, both laterally and along the flight track, and I should image scenes with more contrasting features.
The first requirement was a non-starter on account of the 15 m/sec forward speed of the drone. Given the 1-second minimum sampling interval of the Duo Pro R, it would have had to fly at twice the FAA-mandated altitude limit of 400 ft. to achieve 90% overlap in the along-track direction. The second requirement was also a non-starter: our whole purpose was to measure gentle gradients of temperature in an otherwise fairly featureless farm field.
Also, the images had already been collected, and there was no possibility of a do-over.
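For the record, the overlap arithmetic works out roughly as follows. These are my own back-of-envelope numbers; the ~37-degree along-track field of view is an estimate based on the 640×512 sensor and our 45-degree lens, not a published spec.

```python
import math

# Back-of-envelope check of Pix4D's 90% along-track overlap requirement.
speed = 15.0          # m/s, drone ground speed
interval = 1.0        # s, minimum shooting interval of the Duo Pro R
overlap = 0.90        # fraction of overlap requested by Pix4D
fov_along = math.radians(37.0)   # assumed along-track FOV (estimate)

spacing = speed * interval                 # 15 m between image centers
footprint = spacing / (1.0 - overlap)      # ground length needed: 150 m
altitude = footprint / (2.0 * math.tan(fov_along / 2.0))
print(f"required altitude: {altitude:.0f} m ({altitude / 0.3048:.0f} ft)")
```

The answer comes out around 220–230 m, i.e., in the neighborhood of twice the 400 ft. limit, consistent with the statement above.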
By this time, I was truly despairing of ever getting anything at all out of these image files. But then, while poking around idly on my disk drive, I discovered that, despite its inability to do anything with our thermal image set, Pix4Dmapper had, in the initial processing phase, stripped out the elusive thermal image data and stored them in standalone, readable(!) TIFF files in a subdirectory called “converted.” Modest as it was, this was the breakthrough I was looking for.
Not to put too fine a point on it, but the sole productive role of Pix4Dmapper for us so far, as regards the thermal images at least, has been as an extremely expensive file unpacker.
An option on the FLIR camera that I had previously ignored was the possibility of saving images as separate RGB JPEGs and 14-bit radiometric TIFFs. But FLIR and (as I learned later) Pix4D had both strongly recommended using the combined RJPEG format. As a new customer reading the Duo Pro R user manual, I accepted that advice.
I shouldn’t have. I belatedly figured out that the standalone radiometric TIFFs are in a non-proprietary format, easily readable by standard tools. I will never use the recommended RJPEG format again. Of course it means that I’ll have to work with the raw sensor counts rather than black-box-computed temperatures, but as a scientist, that’s my preference anyway.
Let me reiterate an important point: given two options for saving images from the Duo Pro R, FLIR clearly and specifically recommends the format that gives you by far the fewest software options for working with the collected images, and none of those options seem satisfactory for those of us who want to make radiometric maps of the natural environment using a fixed-wing drone.
Problem 5: Inaccurate geolocation and orientation metadata
So in the end, I couldn’t use the full power of Pix4Dmapper to generate a nice thermal orthophoto, but at least I could finally get at the radiometric data from the field experiment. I was able to easily open the converted TIFF files in Python and access/display the numerical (floating point) pixel data.
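For anyone facing the same task: the converted files open with entirely ordinary tools. Here is a round-trip sketch using Pillow and NumPy; the file written below is synthetic, standing in for one of Pix4Dmapper’s converted frames, but the real ones opened the same way for me.

```python
import numpy as np
from PIL import Image

# Fabricate a small float32 image standing in for one converted frame
# (values in deg C, spanning the range seen in one of our field images).
fake = np.linspace(-18.0, -7.0, 640 * 512, dtype=np.float32).reshape(512, 640)
Image.fromarray(fake, mode="F").save("converted_demo.tif")

# Reading it back yields the numerical pixel data directly.
img = np.array(Image.open("converted_demo.tif"))
print(img.dtype, img.shape, img.min(), img.max())
```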
By this time, I had abandoned the expensive Pix4D software and set out to write my own Python program to project the thermal images onto a horizontal plane and thereby generate a crude but usable mosaic. All I needed was the camera’s 3-D positions, its pointing direction, and its rotation.
In theory, the images are tagged with all of this information—in fact, the FLIR Duo Pro R is advertised as having not only on-board geolocation sensing (GPS and GLONASS) enabled by a dedicated external antenna for the camera, but also accelerometer, gyroscope, magnetometer, and barometer! All the sensors, in fact, that a drone requires to accurately update its own position, velocity, and orientation based on an optimized blending of current states and recent history (why else would it need roll rate and acceleration?).
So my hopes were fairly high that I could use the information in the tagged images to get everything I needed for my mapping experiment. After all, I didn’t need centimeter-scale pointing accuracy for this large (600×900 meter) field.
Alas, the geolocation turned out to be not so great after all, sometimes jumping around by 5–10 meters or more on what should have been a steady straight-line flight. The camera’s GPS antenna was mounted on the top of the drone’s fuselage well away from sources of interference, so I don’t think antenna placement should have been a problem. Was the camera even using its on-board accelerometer?
Well, okay, I could still get accurate geolocations from the drone autopilot logs. While a little more work to extract, the autopilot’s positions—computed from its own similar suite of sensors—proved far smoother than the camera’s internally calculated values. [Time differences of several seconds between the drone clock and the camera clock were a temporary hurdle when transferring the drone’s geolocations to the images, but that’s another story.]
Camera orientations were another issue. Most important to me was the compass orientation of the downward-pointing camera, as this would determine more than anything else how to project the image onto the ground. I spent considerable time trying to work out how the stored orientation parameters (yaw, pitch, roll) could be converted to the desired angle of the camera frame relative to north. But before I solved that mathematical problem, I realized I’d better look at how good those values even were. I took the camera to the roof of my building, put it on a tripod, and took a succession of images with the camera pointing in known compass and elevation directions.
Most telling were the results when the camera was level and pointed at the horizon. The recorded yaw angle was consistently off by roughly 30 degrees from the true magnetic compass direction. Clearly the internal magnetometer seemed not to be well calibrated. Unlike the case for our drone’s magnetometer, there was no obvious way for us to calibrate it ourselves. [Aside: As I write this, it occurs to me that the magnetic field on our rooftop might conceivably be affected by certain steel structures, so consider the above finding preliminary until I have a chance to try again in a wide-open area.]
In the end, I ditched all of the camera images’ stored metadata and relied exclusively on the autopilot logs for everything, substituting the drone’s yaw, pitch, and roll for those of the camera. It must be mentioned that the Duo Pro R comes equipped with a connector to the MAVlink output port on our Cube autopilot, so I can theoretically spare myself the future trouble by allowing the camera to tag the images directly with the drone’s data. Figuring out exactly how that’s done—and which parameters are even stored—is a project for another day.
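For completeness, the core of my flat-ground projection can be sketched as follows. This is a deliberately simplified version assuming a level, nadir-pointing camera over flat terrain; the function name and rotation convention are mine, and only the 45-degree horizontal field of view comes from the camera spec.

```python
import numpy as np

def pixel_to_ground(u, v, alt, yaw_deg, fov_h_deg=45.0, w=640, h=512):
    """Map pixel (u, v) to ground offsets (east, north) in meters,
    for a level, nadir-pointing camera at altitude `alt` (meters)."""
    # Effective focal length in pixels from the horizontal FOV.
    f = (w / 2.0) / np.tan(np.radians(fov_h_deg) / 2.0)
    x = (u - w / 2.0) / f * alt    # meters right of track
    y = (h / 2.0 - v) / f * alt    # meters forward along track
    # Rotate the camera frame into east/north using the drone's yaw.
    yaw = np.radians(yaw_deg)
    east = x * np.cos(yaw) + y * np.sin(yaw)
    north = -x * np.sin(yaw) + y * np.cos(yaw)
    return east, north

# The center pixel lands directly below the drone, regardless of yaw.
print(pixel_to_ground(320, 256, alt=120.0, yaw_deg=90.0))
```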
Problem 6: Radiometric quality
The converted thermal image files generated by Pix4Dmapper are stored as floating-point (“F-mode”) TIFF images. As of this writing, I have convinced myself that the values in these converted images represent degrees Celsius. The original radiometric data, on the other hand, are stored as 14-bit unsigned integers representing sensor counts, which should in turn, I think, be proportional to raw radiometric intensity integrated over the 7.5 – 13.5 μm infrared band (see Planck’s law).
It appears that Pix4Dmapper uses some assumptions about emissivity, atmospheric contamination, etc.—probably values stored somewhere in the original RJPEG files that I can’t open—to produce an estimate of the physical temperature of the scene. Not great if you want the ability to derive and apply your own assumptions about these parameters, but not a show-stopper. If you can find out what the original assumptions were, you can undo them.
Now that I could finally view the thermal images, here are the issues I immediately observed:
I opened a random image captured looking straight down at the nearly bare ground of the recently harvested corn field, and here’s what it looked like:
Yikes! Look at that vignetting! The temperature field ranges from a maximum of around –7 degrees C at the center to –18 C in the lower right corner—an 11 C difference!
It was sunrise (and therefore no uneven solar heating) so I knew that this wasn’t a real temperature variation, especially since the same general behavior showed up in most of the images. I considered whether the land surface itself might have an emissivity that strongly varied with viewing angle—high at nadir, lower at oblique angles. Alternatively, maybe the edges of the port in the belly of the drone were encroaching into the camera’s field of view.
To test whether the camera itself was a likely contributor, I took the Duo Pro R (our original, recently returned, not the loaner we used in the field) into a small windowless room and pointed it at a painted cinder-block wall that I knew must be more or less isothermal. Moreover, since I was essentially inside a blackbody cavity, I knew the radiation field—reflected plus emitted—should be more or less uniform and equal to the Planck value even if there were angular variations in local emissivity. Here’s what I got:
Interesting. There’s still vignetting, but this time only in the lateral direction (again, note that this is a different camera). The dropoff from center to edge is more modest—about 2.5 C—but still significant. Note, by the way, that the actual temperature of the room was close to 21 C, whereas the recorded temperature at the center of the image is around 24.5 C. I attribute this difference to the likely assumption of an emissivity of, say, 0.95 in the conversion from the raw data to the floating point format. In this context—an enclosed, nearly isothermal room—the appropriate value to assume would actually be 1.0. But that difference still wouldn’t explain the vignetting.
Regardless of the cause, I somewhat optimistically expect vignetting to be a systematic and reproducible artifact for any given camera. A simple way to correct it is, therefore, to average a large number of images of reasonably featureless scenes so that only the systematic spatial bias remains, then subtract the inferred bias from the individual images. (This is of course completely effective only if the magnitude of the effect is both scene- and time-independent, which remains to be determined.)
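The averaging approach can be sketched in a few lines of NumPy. The scene and noise below are entirely synthetic, constructed only to demonstrate that a fixed spatial bias is recoverable from a stack of nominally featureless frames.

```python
import numpy as np

def vignetting_bias(frames):
    """frames: (n, h, w) stack of featureless scenes in deg C (or counts).
    Averaging washes out per-frame scene detail; subtracting the overall
    mean leaves only the zero-mean spatial bias pattern."""
    mean_img = frames.mean(axis=0)
    return mean_img - mean_img.mean()

def correct(frame, bias):
    return frame - bias

# Synthetic demo: flat 10 C scenes plus a fixed radial falloff and noise.
h, w = 64, 80
yy, xx = np.mgrid[0:h, 0:w]
r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
true_bias = -3.0 * r2 / r2.max()   # up to 3 C colder toward the corners
rng = np.random.default_rng(0)
frames = 10.0 + true_bias + rng.normal(0.0, 0.1, (50, h, w))

bias = vignetting_bias(frames)
corrected = correct(frames[0], bias)
print(round(float(corrected.std()), 2))   # residual scatter ~ noise level
```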
I utilized the above method to correct the vignetting in all of the images from a given flight and found that, while it worked well overall, this method seemed to slightly overcorrect some of the images while slightly undercorrecting others. So maybe the vignetting isn’t completely time- or scene-independent after all. Which leads me to the next issue:
6b: Calibration drift
The biggest problem of all seems to be that image-wide biases are both large and time-dependent. The easiest way to see this is to consider the final thermal map that I generated from the October 16 sunrise flight:
The drone began its survey flight in the upper-right corner of the study area, traversed westward across the area, reversed course and repeated the pattern, progressing southward with each new leg until the entire area had been covered. For reference, the dimensions of the figure axes correspond to an area of 900 x 600 meters (half meter per grid point). The entire flight covered by the above image took only about 25 minutes.
Notice how the first leg at the top recorded ground temperatures in excess of 10 C! I was in the field on this frosty morning, before the sun was even clearly visible above the horizon, and I guarantee that 10 C ground temperatures were not to be found within a 100 mile radius.
In theory, this could come back to an incorrect internal assumption about emissivity. However, because of the non-linear relationship between the Planck function and temperature at these wavelengths, it would have to be a fairly massive emissivity error—around 20%— to explain such a large temperature bias.
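A rough numerical check supports that figure. Integrating the Planck function over the camera’s stated 7.5–13.5 μm band (my own crude approximation, with a flat spectral response assumed across the band), a 10 C apparent warm bias near freezing corresponds to roughly a 20% change in band radiance:

```python
import math

def band_radiance(T, lam1=7.5e-6, lam2=13.5e-6, n=500):
    """Planck spectral radiance integrated over [lam1, lam2] meters,
    via the trapezoid rule. Assumes a flat spectral response."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    dlam = (lam2 - lam1) / n
    total = 0.0
    for i in range(n + 1):
        lam = lam1 + i * dlam
        B = (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))
        total += B * dlam * (0.5 if i in (0, n) else 1.0)
    return total

cold, warm = 266.15, 276.15   # a -7 C scene vs. a +3 C apparent reading
ratio = band_radiance(warm) / band_radiance(cold)
print(f"band radiance ratio for a 10 C bias: {ratio:.2f}")
```

The ratio comes out at roughly 1.2, i.e., a 10 C bias at these temperatures would indeed require an emissivity error on the order of 20%.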
Far more importantly: we clearly see that with each successive leg over a similar surface type (uncut dried soybeans on the right, harvested corn field on the left) the average apparent temperature of the land surface dropped systematically until it reached temperatures well below freezing, which were closer to what I expected. Note that the first image displayed above under “6a: Vignetting” was taken in the final pass at the bottom of mosaic. In the central region of the image, it recorded values near –7 C. Possibly a little too cold, but still plausible on this particular morning.
Putting on my meteorologist hat, I am again confident that the surface temperature in this field did not uniformly drop by approximately 17 C in less than 25 minutes, right at sunrise. If there is any real question about that, I will get the local temperature measurements that were made at five-minute intervals throughout the field campaign by the other participating researchers.
My tentative conclusion is that the camera recorded abnormally high scene temperatures until the camera itself equilibrated with the cold air it was being flown through (it had been in a warm automobile about a half hour before launch). While it’s understandable that it might be technically challenging to control biases related to evolving temperature gradients inside the camera, it must be equally well understood that controlling such effects is essential if the camera is to be relied on for true radiometric work. Perhaps the camera should be allowed to equilibrate for at least an hour with the outdoor environment before beginning a mission. Further testing would shed light on this hypothesis.
At this point, it’s not clear to me whether I can reliably remove the apparent time-dependent biases from the images I have already collected, since I have no stable reference to compare them with. If not, this will have been an expensive learning experience and a possibly pointless four-day trip to another state.
In the future, I think it will be necessary to place known thermal calibration targets at regular intervals along the drone’s flight path, a requirement that will greatly increase the logistical complexity of conducting a thermal survey. How such calibration targets could be constructed is unclear to me for now—maybe 2-meter diameter pans filled with ice-water slush? Thermostatically controlled IR-absorbing pads?
Before actually working with the images from this camera, my primary concern had been with the likelihood of pixel-level noise. Given the uncooled bolometer-based sensor, I was mentally prepared to do some averaging in time (e.g., from overlapping images) and/or space to beat down the noise to acceptable levels.
In fact, the images appeared free of serious noise. You can see some noise (+/- a few tenths of a degree) in the cross-sections shown in section 6a, but it’s certainly not serious relative to the other artifacts.
Comparison to published specs
FLIR publishes only partial specifications for the Duo Pro R camera. For example, there is no mention of the visible camera’s focal length, shutter speed, aperture, effective ISO value, or automatic exposure control mechanism or range. The thermal spectral band is given as 7.5 – 13.5 μm, but there is no spectral response curve showing how sharp the cutoffs are or how flat the response is between the stated limits (these are important for accurate calculation of atmospheric corrections).
Regarding the Duo Pro R’s radiometric performance, only the following information is given:
- +/- 5 C or 5% of readings in the -25°C to +135°C range
- +/- 20 C or 20% of readings in the -40°C to +550°C range
I’m not sure what “or 5% of readings” means in this context. Does it mean that up to 5% of pixels in any given image might fall outside the accuracy range +/- 5 C?
Regardless, we saw above that, at least under some circumstances, entire images could appear to be biased high by around 10 C, and we saw apparent vignetting with corner temperatures as much as 11 C colder than the scene center (though this needs to be explored more carefully to completely rule out other causes, as discussed).
I was excited when I first learned about the FLIR Duo Pro R camera. Based on the manufacturer description and published specifications, it seemed tailor-made for our research applications. Given the $7,599 price tag and the FLIR reputation, I took the description “radiometric” at face value. I was therefore deeply disappointed by numerous technical obstacles and concerns, including the following:
- Minor(?): Questions about the reliability of the PWM triggering mechanism (requires further investigation).
- Minor: We had intermittent problems with our camera on power-up, but it has been repaired, and we promptly received a loaner to use in the interim.
- Major: The apparent impossibility of accessing thermal data in FLIR’s proprietary RJPEG files using common open-source image processing tools, compounded by FLIR’s apparent resistance to helping customers utilize the data in ways other than those anticipated by their own software tools (e.g. app development; commercial report generation). In the future, I will work only with separately saved RGB JPEGs and 14-bit thermal TIFFs from this camera.
- Major: The surprising lack of robust software support for thermal mapping using the Duo Pro R, including from Pix4D, an industry leader in drone mapping software. While Pix4D had apparently obtained from FLIR the privilege of reading the proprietary image format, even Pix4D tech support was unable to generate a usable orthomosaic from our data set. In the end, it was necessary for me to write my own software to obtain a usable approximation (see Section 6b), starting with the converted TIFF files saved by Pix4Dmapper.
- Medium (but possibly temporary): Pix4D claimed not to have reliable technical information about the Duo Pro R sensors in the camera data base used by the Pix4Dmapper software. Meanwhile, FLIR tech support informed me today (December 4, 2018) that EXIF data (e.g., focal length) stored with the images from the Duo Pro R may not be correct and that this would be fixed in a future firmware update. They did not yet, however, respond to my request for the detailed and accurate sensor specifications required by Pix4Dmapper. [UPDATE Feb. 26, 2019: Three months later, I still haven’t received the requested technical information, even though I still get sporadic automated email messages that my support request has been ‘escalated’.]
- Minor: The surprisingly low apparent quality of geolocation and orientation metadata despite the camera’s onboard accelerometer, gyro, magnetometer, barometer, and GPS. In the future, I will trust only data provided by the drone autopilot.
- Medium (because potentially correctable): A prominent lack of angular flatness in the thermal camera’s radiometric response, as evidenced by significant vignetting, with differences of up to 11 C observed between image corners and center in some cases. Further investigation is required.
- MAJOR: Large apparent drifts in overall radiometric calibration, as revealed by a >10 C drop in average scene temperature over the span of a 25-minute drone flight viewing what I believe to have been a more or less constant temperature target. This might be the single issue that permanently prevents us from realizing our goal of obtaining a reasonably calibrated thermal map of the SAVANT field site—the reason we purchased the FLIR Duo Pro R and drone in the first place.
It may not be technically possible to do a whole lot better than this camera—in a compact, drone-friendly package, no less—without resorting to some kind of thermal isolation and temperature control. Nevertheless, if the Duo Pro R camera is in fact to be used for true IR radiometry, it will apparently be essential to place some kind of known calibration targets at regular points along the drone’s survey path, with possible additional attention to allowing the camera to equilibrate with its environment before flight. It will also be necessary to carefully quantify and remove the aforementioned vignetting effect.
If anyone else’s experience contradicts my own, please let me know or post a comment below.