
GNSS, IMU and Imaging


GNSS rover development continues to take evolutionary leaps in sensor integration. Leica Geosystems has integrated image point capabilities into its GS18 I survey rover and has done a grand job of it.

I've been looking for a more technical way of saying this, but here goes: the new GS18 I is very cool. I got hold of one for a test drive, and the results were impressive. While not the first implementation of image point capabilities on a GNSS rover, this tight integration of multi-constellation GNSS, no-calibration tilt, and image-based positioning is significant in that the workflow is very well refined. While neither I nor this publication endorses specific products, the impetus for this test drive was to highlight a significant development in integrated sensor tech. Plus, I'm a surveyor just itching to try new things.

Image points are picked in two or more images from groups captured automatically by the built-in rover camera, in the field or later in the office.

I'll cut to the chase: I pushed it quite hard under many scenarios, and while according to its data sheet it is designed only for short ranges (e.g. 2m (6') to 10m (33')), in my tests it consistently yielded offset positions (absolute) under 30mm (0.1'). Relative precisions, of course, were better still. The sweet spot seems to be 2m (6') to 8m (26') away from the subject. And yes, I checked these points and inverses with a total station.

I'm about to get a bit long-winded on this subject. With every step in the evolution of this and other new field data collection instruments, results can seem like magic compared to gear we've used in the past. We need to be skeptical at first, and even when results are impressive, questions linger as to just what is going on inside these magic boxes. Instruments like the GS18 I represent a confluence of multiple innovations in sensors and solutions. It can take a leap of faith to accept results that often run counter to decades of experience with earlier iterations of related technologies and the rules of thumb developed for their use. Understanding more about how this multi-sensor system arrives at its end results should help with that skepticism, and can help develop good field practices to get the most out of it.

The kit tested: The Leica GS18 I and CS35 (Win 10) controller/tablet (it also can be run on a CS20 controller).

Following the Footsteps

How does it do this? Let us back up a bit and look at the evolution of the idea of putting a camera on a rover to derive offset points. The first question is: why? One of the most obvious drivers is to overcome one of the Achilles' heels of GNSS rovers, namely that they can have difficulty (or in some cases fail) collecting points in sky-view-challenged locations, for example in dense canopy or under overhangs. Multi-constellation GNSS alone has improved this to some degree, as with performance under canopy, and tilt solutions can work well (e.g. tilting the pole out towards the open sky). Legacy solutions for limited sky view could involve taking reference shots out in the open and then taping the offsets from those, or pulling out the total station. Now, image point solutions use various photogrammetric methods for those offsets. This implementation derives offset points from automatically collected and registered image groups.

There are several implementations of cameras on rovers already out there, and more have been tried in the past. For instance, some Javad rovers can use their built-in cameras for this purpose, with the user picking points from multiple images, and this works well in certain situations. Likewise, the Insight V1 from South Instruments puts a camera pod on the pole, under the rover head. I've tried several of these types of solutions and found them capable, though they are not always processed onboard. For instance, the Insight V1 sends images to the cloud for processing, and some others have required office processing after the fact. One of the most ambitious implementations of on-the-pole imaging was the Trimble V10. It fit under the rover head, with multiple cameras to capture wide panoramic image sets that could be processed in the office to choose points or create point clouds.

The workflows on any I have tried thus far yield offset points well enough, but they can be somewhat cumbersome. For instance, there were time-consuming steps, like needing to keep the pole still or to use legacy (magnetically oriented) tilt functions to get a good GNSS reference position. Precisions could be difficult to keep consistent. And I have found that most of these solutions never really caught on for day-to-day use as much as originally hoped.


The GS18 I incorporates several recent key technological developments to provide a simple workflow, one that does not burden the user with too many steps to derive image offset points, right there in the field. The rover had been rumored for some time; once it was officially announced, I asked Bernhard Richter, VP Geomatics at Leica Geosystems (part of Hexagon), about the development of the solution and what to expect. He said that he had been pursuing and working towards tight sensor integrations like this for nearly a decade, and that the previous model, the GS18T (the first major implementation of no-calibration tilt), was an incremental step along the way. Indeed, no-calibration tilt is key to how this system works and to its simple workflow (I'll touch on that in more detail later). Richter said I should remember my photogrammetry lectures, and that by keeping those fundamentals in mind while operating the unit, I would not be disappointed.

The Image Points Workflow

For the test drive, I took the unit to the roof of our building, where I have set up a (socially distanced) test course. I've got control points there into which I'd sunk hours of GNSS observations, post-processed to establish tight positions. From these control points, I used a total station to shoot points on a small structure on the roof, picking sharply defined points that I could later shoot with the image points feature for comparison.

Picking image points on the controller after an image group capture. The extra screen space on this controller worked well for this step, but it can also be operated on a more traditionally sized collector

First, I used the GS18 I as a simple RTK rover, connecting via cellular to a permanent base nearby (about a kilometer away). This was to determine how much of the error budget for absolute positions could be from the rover's RTK solution. I also did some short static sessions to post-process as a check. Some of the control points are in wide open sky, but others I have set in spots of varying sky-view and multipath conditions.

Paired with the rover tested was a CS35 field controller, essentially a Panasonic Toughpad tablet, which runs the Leica Captivate field software in Windows 10. I tend to prefer large-screen field controllers, but the software can also run on the smaller CS20, a more traditional hand-held controller with a tactile keypad. The large screen was especially handy for the image point workflow.

The workflow begins with standard RTK rover steps. Connect the GS18 I to the controller (via WiFi). Then connect your controller to the web; in this case, the CS35 tested had a built-in cellular modem. Next, connect to a source for corrections: radio or IP to a base, a real-time network via NTRIP, or a global corrections service (e.g. PPP via L-band satellites, like SmartLink; I did not test a PPP option this time). I used NTRIP to connect to the nearby base, which outputs RTCM 3.2 MSM5, to use all constellations. Once connected, and once good RTK results were verified on control points, I checked some tilt results.
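
For readers who have not worked with NTRIP before, the connection itself is just an HTTP-style exchange with a caster. Here is a minimal sketch of an NTRIP v1 client in Python; the caster host, port, mountpoint, and credentials are placeholders for illustration, not the actual service I used.

```python
import base64
import socket

# Placeholder caster details; substitute a real host, port, mountpoint,
# and credentials from your correction provider.
CASTER, PORT = "caster.example.com", 2101
MOUNTPOINT, USER, PASSWORD = "RTCM32_MSM5", "user", "password"

def open_ntrip_stream():
    """Connect to an NTRIP v1 caster and return a socket delivering RTCM bytes."""
    creds = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    request = (
        f"GET /{MOUNTPOINT} HTTP/1.0\r\n"
        "User-Agent: NTRIP example-client/0.1\r\n"
        f"Authorization: Basic {creds}\r\n"
        "\r\n"
    )
    sock = socket.create_connection((CASTER, PORT), timeout=10)
    sock.sendall(request.encode())
    reply = sock.recv(4096)  # a v1 caster answers "ICY 200 OK" before streaming
    if b"200" not in reply.split(b"\r\n", 1)[0]:
        sock.close()
        raise ConnectionError(f"caster refused: {reply[:60]!r}")
    return sock  # subsequent recv() calls yield RTCM 3.x frames for the rover

stream = open_ntrip_stream()
print(len(stream.recv(1024)), "bytes of corrections received")
stream.close()
```

In practice the field software handles all of this behind the scenes; for network (VRS-style) mountpoints the client also sends the rover's NMEA GGA position back to the caster.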

The GS18T and GS18 I have a tilt indicator, on the rover head and in the controller software, that shows green if the tilt is calibrated, red if not. Self-calibration of the tilt takes only a small amount of movement; just tilting the pole a foot or so turns the indicator "green to go". Then it was on to capturing image groups. You size up the area or features you want to capture, like the front of the roof structure I mentioned, and mentally plan paths to walk the rover past it to create image groups. I started the image capture feature with a single click and started walking the paths, keeping the camera facing the subject. It was not necessary to overthink this, as you can take many image groups and pick image points from any combination thereof.

The key best practice is to get multiple images to overlap as you walk by. It collects images every half second, for up to 60 seconds per group. Take your time to make sure you can see subject points in as many images as possible. So, I started a few steps before coming perpendicular to the near end of the structure and continued a few steps past the other end. With some experimentation, I found that walking around the leading and trailing edges of the structure, in an arc, provided better geometry.

By looking at the live display of images as you go, you can get a feel for how the image group is progressing and how much to rotate the pole as you arc around something. I also noticed that, due to the limited camera view when close to the structure, it helped to tilt the pole forward and back as I went to capture images of the lower and higher sections of the structure. You start the capture, walk for up to 60 seconds, stop, and store. The tilt compensation requires motion to stay in calibration (more about that later); the movement of walking the pole does this, and this element worked just fine. It is not necessary to try to hold the pole upright, as a few degrees of tilt yield a negligible difference from plumb. Just use slow and steady motions to capture images with good overlap.
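
To get a feel for why a slow, steady walk gives comfortable overlap, it is worth running the numbers. A back-of-the-envelope sketch, using the half-second capture interval from above plus an assumed walking pace and an assumed field of view (neither of those two figures is from the data sheet):

```python
import math

interval_s = 0.5   # capture interval: one image every half second, per above
speed_mps = 1.2    # assumed easy walking pace
range_m = 5.0      # assumed distance to the subject
hfov_deg = 80.0    # assumed horizontal field of view, for illustration only

baseline_m = speed_mps * interval_s                               # ~0.6 m between frames
footprint_m = 2 * range_m * math.tan(math.radians(hfov_deg / 2))  # ~8.4 m scene width
overlap = 1 - baseline_m / footprint_m
print(f"~{overlap:.0%} overlap between consecutive frames")       # ~93%
```

Even at a brisker pace the frame-to-frame overlap stays high under these assumptions, which matches my experience that the capture did not need to be overthought.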


Once you have one or more image groups stored, you can open them in Captivate and begin picking image points, with whatever point numbers and feature codes you wish; it treats these as it would any other observed points. You view the images in a sort of carousel gallery, scrolling through them in the same order as the path you walked. You pick first in one image and see a rough precision; for instance, it might say 40mm (0.13'). Then you pick the same point in successive images, and as you do, the displayed precision improves, especially if you pick from geometrically advantageous images; you might see it drop by half or even more. Corners of windows, edges of structural elements, edges of slabs, nails or screws, edges of stop bars and road striping: these are good examples. The precision displayed in the image point feature is not necessarily the same as your final absolute precision, but it was surprisingly close when compared later to the total station shots. At any rate, it is a good tool to let you know roughly how much you are improving the precision of a point as you pick it in subsequent images.
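
What is happening under the hood, as I understand the photogrammetry fundamentals Richter alluded to, is a space intersection: each pick defines a ray from a known camera pose, and the point is the best fit where those rays cross. A minimal sketch with synthetic numbers (not Leica's actual solver):

```python
import numpy as np

def intersect_rays(centers, directions):
    """Least-squares point minimizing the perpendicular distance to each ray."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Three camera positions along a walked path, all sighting a corner ~5 m away.
target = np.array([0.0, 5.0, 1.2])
centers = [np.array([x, 0.0, 1.8]) for x in (-0.5, 0.0, 0.5)]
rays = [target - c for c in centers]
print(intersect_rays(centers, rays))  # recovers ~[0.0, 5.0, 1.2]
```

Each additional image adds another ray, and picks from the ends of the walk (or from an arc) widen the angles between rays, which is why the displayed precision improves most with geometrically advantageous picks.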

There is a corner pick tool in the image point screen, and you can zoom in (by several methods) to choose down to the pixel level if need be. But do not expect to do as well at long ranges. Picking points can be a challenge if there aren't sharply defined or contrasted features.

Image points can also be selected, by the same process, from jobs exported from Captivate and then imported into the Infinity office software. The magnifier tool in Infinity was a plus

You can export the job files, which include the image groups, from Captivate and then import them into Infinity, Leica's office software, and continue to pick image points by the same process. There were certain advantages in doing this, like having the magnifier tool, and it can be a bit easier to select points in the office than with the stylus and tablet in the field.

And you can process the image groups, individually or merged, into point clouds. Infinity already has tools for terrestrial photogrammetry and point selection, and it can process images from other terrestrial sources, UAS, and combinations thereof.

A point cloud from merged image groups processed in the Infinity office software

The Imaging Component

When you look at the images, they may look low resolution. You may have to get used to the limitations on how far you can zoom in, and plan accordingly. The camera is 1.2MP. Why didn't they just put a much higher resolution camera on the unit? From what I've been able to gather, there were a lot of engineering considerations and dependencies that would have made the workflow impractical with a higher-resolution camera. There is so much going on in the background as you gather and process images; there must have been tradeoffs. In addition, a global shutter was essential, as compared to, say, a rolling shutter (which can subject the images to distortion). Take a few test groups, look at the features in the images, and get a feel for range vs. feature definition. I set a tape measure on the ground and walked image groups parallel to the structure at set distances. Then I looked to see how well I could pick points. This was how I determined a sweet spot range of 2m (6') to 8m (26') for the points I wanted to shoot. I was able to get some points out at 15m (49'), but beyond that it became difficult.
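
The range-versus-definition tradeoff falls out of simple geometry. A rough pixel-footprint estimate, where only the 1.2MP figure comes from the spec; the 1280-pixel width and 80-degree field of view are my assumptions for illustration:

```python
import math

width_px = 1280   # assumed horizontal resolution for a ~1.2 MP sensor
hfov_deg = 80.0   # assumed horizontal field of view

for range_m in (2, 5, 8, 15):
    footprint_m = 2 * range_m * math.tan(math.radians(hfov_deg / 2))
    print(f"{range_m:>2} m: ~{footprint_m / width_px * 1000:.0f} mm per pixel")
# ~3 mm/px at 2 m, ~7 at 5 m, ~10 at 8 m, ~20 at 15 m
```

Under those assumptions, one pixel covers roughly a centimeter at 8m and about 2cm at 15m, which lines up with the sweet spot I found by experiment.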

Some incidental good news: it all works well in the rain. It was pouring down almost the entire time I had the unit to test (yes, fall in Seattle), but it didn't seem to affect anything. It did lead me to adjust my workflow by shooting image groups, then standing under the building overhang to pick points. I can picture shooting image groups, then heading to the truck before leaving a site to check how good the points are. In this test, within the range I described, the majority of the points checked against the prior total station shots came in under 30mm (0.1'). Some were around 40mm (0.13'), but that had more to do with how difficult they were to pick, as they were not high enough contrast or were otherwise difficult to see in multiple images.

Another thing to keep in mind, in the way you capture image groups, is that there is an essential process going on in the background that is not apparent to the user. This is the key to registering the images; it happens on the fly, which makes this quite distinctive among such solutions thus far. It taps another Leica innovation, VIS (visual inertial system), which has been implemented on, for instance, the RTC360 scanner. It is a way to progressively track (by camera) common image points as you move the instrument, to help automatically register captured images or scans. You do not pick these points and cannot see them; the system is doing this in the background as you walk the image groups. When you start collecting an image group, the instrument looks for dozens of well-defined patterns of pixels in each image, and the same in subsequent images. In the case of the RTC360 scanner, it uses these transitory common points to assist in-the-field scan registrations. The GS18 I utilizes elements of a VIS-style solution to register the images to each other and, together with the GNSS/inertial system of the rover, to register and orient them in space. There have been other solutions in the past that did not do something like this and that may have required additional registration steps, often with poor results.
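
Conceptually, this background tracking resembles feature matching in open-source computer vision, though Leica's actual VIS algorithms are proprietary. A rough stand-in using OpenCV, with hypothetical file names for two consecutive frames:

```python
import cv2

# Two consecutive frames from a walked image group (hypothetical file names).
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect well-defined pixel patterns in each frame and describe them.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between frames; the surviving correspondences are the
# kind of transitory common points that let successive images be registered.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} tentative correspondences between frames")
```

Fused with the GNSS/inertial pose of the rover head, correspondences like these allow the image groups to be registered and oriented in space without any user-picked tie points.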

No Calibration Tilt

For the imaging system to work as it does, it depends on the same inertial/GNSS system that provides tilt compensation. Let's look a little deeper into the tilt component. Why is there an emphasis on the term "no-calibration" tilt? Because legacy tilt compensation, which has been around for nearly a decade, relied on magnetic orientation. The tilt part is relatively easy, as tilt sensors can be quite accurate, but in what direction is the tilt? On legacy tilt solutions, often laborious calibration routines needed to be done (or should have been done more often than folks did them) to map out the magnetic state in the vicinity of the site.

The problem with magnetic orientation is that it can vary quite a bit unless you are working in a small area, and it is subject to magnetic disturbances. While such systems proved useful for certain tasks, many users found the workflows cumbersome and subject to inconsistencies. With a tightly integrated GNSS and IMU, you get precise orientation without calibration steps. It is true, in a general sense, that a standalone IMU is subject to drift. But integrated with GNSS, it is constantly updated at a high rate. Such integrations have been standard in mobile, airborne, and marine mapping systems for many years; the challenge was miniaturization for a rover.
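
A toy example makes the "direction of tilt" problem concrete: computing the pole tip requires rotating the pole vector by the full attitude, which is exactly what the IMU fusion supplies and what a magnetic compass supplied (poorly) before. A sketch with illustrative numbers, not any vendor's actual math:

```python
import numpy as np
from scipy.spatial.transform import Rotation

apc = np.array([1000.000, 2000.000, 50.000])  # antenna phase centre from RTK (m)
pole_m = 2.0                                  # pole length

# Attitude from the GNSS/IMU fusion: roll, pitch, heading (illustrative values).
att = Rotation.from_euler("xyz", [15.0, 0.0, 230.0], degrees=True)
tip = apc + att.apply([0.0, 0.0, -pole_m])

# A 15-degree tilt moves the tip ~0.52 m horizontally, in a direction
# set entirely by the heading term.
print(tip - apc)
```

Get the heading wrong and that half-meter offset lands in the wrong direction entirely, which is why magnetic disturbances were so damaging to legacy tilt solutions.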

Shortly after their first no-calibration tilt rover, the GS18T, was announced in 2017, I took one for a test drive and drove it hard. I found the solution hard to break. I tried rapid movement, spinning the pole, and tilts up to 45 degrees. For moderate tilt, I found the difference between plumb and tilted to be negligible, and mostly less than a centimeter of difference when tilted further. I had to hold it very still, for 10 seconds or more, to see the red indicator, but only had to move it slightly to go back to green. After decades of being taught to hold a rover as still as possible, this notion of precision through motion takes a bit of getting used to.

I repeated these same tilt tests with the new rover, taping a small digital level to the pole to see how much I was tilting. I found a particular benefit of such new systems in the way they streamline stakeout. The legacy method involves shooting a point, seeing how much you need to move, then shooting again, moving, shooting again, and so on until you are sitting on the intended stakeout location. With new no-calibration rovers, you can get in close and simply move the tip of the rod around until it hits the point.

Separate tests of the multi-constellation and tilt capabilities of the GS18T and GS18 I rovers revealed solid performance in mixed environments and at various degrees of tilt

There is a lot of hand-wringing over the assertion by some that such tilt solutions can match, or even improve on, traditional bubbles. It can be a leap of faith to overcome notions of the primacy of bubble vials, as these have been the standard for so long. But as I have tested this rover, and others like the Trimble R12i and Tersus Oscar, the results have been hard to dispute. No-calibration tilt compensation is being implemented on many new systems, and as these steadily improve and we find deeper levels of comfort in their capabilities, we could see dramatic changes, and benefits, in the way we work.

Another note about the GS18T and GS18 I: the antenna was designed to better capture signals when tilted. At the 2018 HxGN conference and exhibition, one was on display with the housing removed to reveal this purpose-designed antenna, and Richter explained the design features.

Multiple Constellations

The key expected benefit from the integration of multiple constellations, now that the two newest ones are reaching full complement, is the ability to work in more places where it had been difficult or impractical with only one or two. The number of satellites in view has nearly quadrupled from the early days of GPS-only. GPS, Glonass, Galileo, and Beidou: the latter two are reaching full complement with three (or more) signals, and the former two are modernizing to add third signals. This changes everything. Does this also improve precision? There is growing evidence that this may be the case, especially with modernized third signals. The differences might be subtle, but things can only get better as rovers improve the ways they can mix and match satellites and signals.

To update the integrated IMU, the GNSS component is continually updating at a high rate. The solution is not frozen to a specific epoch of correction; corrections are often broadcast at 1Hz. There is no need to overload, say, a base radio with high rates of 10Hz or 20Hz (though that is an option). A broadcast correction does not deteriorate rapidly between epochs; values can be predicted, or extrapolated so to speak, between epochs, and the precision is maintained. The integrated solution is updated through motion and the trajectory of the rover head through space. This runs counter to long-held conventional wisdom, born of decades of post-processed, RTK, and network RTK solutions, where solutions only worked when the rover was plumb and motionless. I'm not saying that we have to cast aside all best practices, but these developments will require us to develop certain new ones.
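
As a toy illustration of bridging between 1Hz correction epochs (synthetic numbers, not any vendor's actual predictor): the slowly varying part of a correction can be carried forward a fraction of a second while the IMU handles the fast motion.

```python
def extrapolate(c_prev, t_prev, c_curr, t_curr, t_query):
    """Linearly predict a slowly varying correction value past its last epoch."""
    rate = (c_curr - c_prev) / (t_curr - t_prev)  # drift per second
    return c_curr + rate * (t_query - t_curr)

# Correction values (metres) received at 1 Hz epochs, queried 0.4 s later.
print(extrapolate(0.231, 9.0, 0.234, 10.0, 10.4))  # ~0.2352 m
```

Because atmospheric and clock terms drift smoothly over a second, prediction like this loses very little precision between epochs.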

GNSS performance of this unit, especially with multiple constellations, is itself quite impressive, as with several of the other newer systems on the market. This includes great performance in sky-view-challenged and high-multipath environments. For example, there is a favorite "multipath hell" location I have tested many rovers in over the years: under a transmission tower. The GS18T/I is one of only four rovers I've tested there that fixed and checked precisely into a conventionally derived position. I'm not suggesting doing this as a standard practice, but it makes for a compelling set of results.

A footnote about these types of tests that I do: I try to do them cold turkey, with just the manual to fall back on. I have used Leica systems in the past, like the 1200 and Viva systems, but had not used Captivate or Infinity much. With a little bit of verbal advice on the image workflow, I was able to pick it up pretty quickly (which was great on such rainy days). I'd say that one could be up and running on this new system and new feature in a day. This is a testament not only to Leica, but to other vendors that have been expanding the capabilities of their systems and software while making the workflows simpler, yet with great QA/QC steps built in. Drive responsibly, though; do not expect to just push buttons. You still need to be very deliberate in your field and office practices to get the most out of these magic boxes.

A Sensor Integration Future

Who is to say what might be next for this line of rovers and others? Could lasers and limited scanning be added to improve solutions and accommodate the capture of even more data? Could automated feature recognition via AI be next? Speculation on the future aside, as far as I am concerned, the integration of GNSS, IMU, and cameras on rovers is now viable, and a prime example of how part of that future is already here.

A Few Updates

Since this blog was posted, I've had a bit more time to look at the data. I have also exchanged emails with Bernhard Richter, VP Geomatics at Leica Geosystems (part of Hexagon), on a few new technical points and some clarifications:

  1. Image point selection in multiple images. There is an auto-matching algorithm so that (in particular when snapping is on) matching objects (or pixel patches/patterns) are found in other images automatically. So, the workflow for an image point can work from picking in only one image (a conceptual sketch follows this list). During my tests, I often picked in more than one manually to see the progressively improved precision.
  2. Speed of image group capture. I cautioned readers to walk slowly. This is not strictly necessary, as the development team tested at speeds up to 30km/hr; you could mount the pole on a bracket on your survey vehicle and capture while driving. I recommended slow and deliberate capture to make sure you have good overlap and to help develop your capture practices. But once you have gotten used to the flow, and tried it out at different speeds, you can determine best practices for different needs. I had wanted to try this out on an all-terrain vehicle (ATV) but did not get a chance this time.
  3. While I emphasized absolute precision of points in these tests, I have since had more time to look at the relative precisions. Comparing the points shot on the structure to those I took with a total station, points and inverses fit to about 3mm (0.01'), with outliers on points that were difficult to see. Goodbye, measuring tape.
  4. Camera shutters. There is a lot of progress in that area, so it may be possible for higher resolution cameras to be integrated into such solutions. Richter emailed me this clarification: "There are technologies so that feature tracking will be nicely possible in future with rolling shutters. The principle is that lines of the rolling shutter (with the corresponding time and position) of the image are read out line by line, the image itself is dissolved in lines and everything is handled mathematically." I would imagine that during this specific development cycle such CPU-hogging technologies were not available. It would probably be safe to expect that it may not be too many years before we see 5MP or even 10MP cameras in such integrated systems.
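
On the first point, here is a conceptual sketch of snap-style auto matching: take the pixel patch around the pick in one image and search for it in another via normalized cross-correlation. This is a generic OpenCV stand-in with hypothetical file names, not Leica's implementation:

```python
import cv2

img_a = cv2.imread("group_img_a.png", cv2.IMREAD_GRAYSCALE)  # image with the pick
img_b = cv2.imread("group_img_b.png", cv2.IMREAD_GRAYSCALE)  # image to search

u, v, half = 640, 480, 16  # picked pixel and patch half-size (illustrative)
patch = img_a[v - half:v + half, u - half:u + half]

# Slide the patch over the second image and score the correlation at each spot.
scores = cv2.matchTemplate(img_b, patch, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)
print(f"best match at {best_xy} with score {best_score:.2f}")
```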