Wednesday, January 23, 2013

The Tyranny of GIS – Part One

This is a reprint from an earlier blog (May 30, 2012) that is no longer available online.

As a professional in this industry for over twenty-five years, it’s difficult not to notice common threads. One of those is the cost of doing GIS. I run into many smaller organizations, such as utilities or municipalities, where the cost of GIS is burdensome or a complete barrier to the technology. These organizations often miss out on the efficiencies that GIS can provide (the fact that even many enterprises with very extensive systems miss out as well is a topic for another time). In many cases, these organizations don’t have personnel knowledgeable enough to make effective decisions about the systems, and so they become hostage to consultants whose interests differ from their own (as a matter of fact, I’ve heard the word “hostage” used many times in reference to GIS). Now, as one of those consultants, I’m not being self-loathing here, but pointing out that in all cases a consultant’s mission is to increase the sale of services in order to be successful – and that is different from the needs of the customer. “Good” consultants will align their mission with the customer’s – selling the customer the services they need. In many cases, though, the consultant may not fully understand the needs of the customer, and the customer lacks the technical understanding to fully express those needs.
I noticed this particular effect in a very simple transaction recently – getting my hair cut. I’ve noticed a trend lately: many of the new stylists (I have to be careful not to show my age and call them barbers) ask detailed questions about method – cutting with scissors or clippers, the specific length at the top or sides. Quite honestly, I have no idea how to answer these questions. I want it to look a certain way, but I don’t know how to get there. That’s why I get my hair professionally cut (aside from the fact that trying to cut one’s own hair is challenging). Does this sound familiar? I’ve seen this interchange with service providers from all areas of business, and quite often in the GIS world.
This breakdown in communication is purely the result of different frames of reference. The consequence is that several parts of the process are not effectively “sized” for the organization. Data precision, system design, system architecture, training, and data sharing are specific areas of an implementation (or operation) plan that can get a bit out of control and drive the cost of the GIS up. Getting a handle on them can often bring that cost way down. I’ll tackle several of these over my next several entries.
Precision – The first area I’ll cover is data precision. This raises the obligatory issue of precision vs. quality. Precision is the standard of measure we use, such as survey-grade GPS, consumer GPS, steel tape, measuring wheels, or the “calibrated eye” (I once saw a survey on the east coast that used a distance of a cigarette – I assume the surveyor meant the time to smoke it – I wonder if it was a regular or a long?). Quality is how well we did the measurement (was my tape level, did I read the angle correctly, etc.). For the purposes of this discussion, we’re talking about precision, and while I may use the word quality, I mean precision.
The actual GIS data is probably the greatest expense for most GIS programs. Of course, that makes sense when you consider that the data really is the GIS. While there is the old adage that GIS is software, hardware, processes, people, and data, the reality is that all of the other components are interchangeable while the data is the reason we do GIS. Because of this, it’s easy to place an undue importance on the precision (quality standard) of the data. Often this is because the actual business purpose of the specific GIS data is poorly developed or understood. It is the purpose of the data that should define its precision standard, but in many cases we start data capture without clearly defining the need, and so data is captured at a higher standard than necessary or prudent. When speaking of data creation, the cost generally rises with precision (often at an increasing rate).
I can hear some folks cringe at the suggestion. After all, the standard engineering definition of GIS is “Get It Surveyed” – a jab at engineers’ inability to rely on GIS data for design. I’m not sure that’s a bad thing. As a designer, I consider it due diligence to survey the site for every project, as things change and the records may not show it. Relying on existing records is a way to guarantee delays, change requests, and other problems with a project. Knowing that, “basis for design” may not be a valid business requirement for GIS data. Generating, for example, parcel data that is suitable for design can be quite expensive yet completely justifiable for a construction project (as a percentage, it would be a small part of the overall cost of most projects). Generating data of similar quality for an entire municipality, where only a very small portion may ever be involved in new construction, can be prohibitive and become a major impediment to developing GIS capability. Using a lower-quality data set may be perfectly effective for the planned use of the data. In the parcel example, maintaining a link to the actual record of survey may provide satisfactory access to higher-precision data.
I’m not suggesting that data always be captured at the lowest possible quality, but I am suggesting that, before settling on a precision standard, reasonable expectations for the data and its uses (immediate and long term) be evaluated. In many cases, having a highly precise base map provides benefits that are perfectly justified for the organization. I know of some utilities that use highly precise data to model their systems and feel the cost of generation was completely justified. In many new systems, using highly precise as-built data from recent designs would be more cost effective than trying to generate a lesser-quality dataset. I know of other utilities that have used a commercial street centerline data set and rubbersheeted old scanned system maps to fit (a simplified sketch of that idea follows below). In practice, I’ve seen a range of precision levels used quite effectively.
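For readers curious what that rubbersheeting step looks like, here is a minimal sketch in Python. It fits a single global affine transform from scanned-map pixels to real-world coordinates using a handful of control points – a simplification, since true rubbersheeting typically applies local, piecewise adjustments – and every coordinate value in it is hypothetical, not from any actual utility’s maps.

# A simplified stand-in for rubbersheeting: fit one global affine transform
# from scanned-map pixel coordinates to real-world coordinates by least squares.
# True rubbersheeting applies local, piecewise adjustments; this only
# illustrates the idea of warping an old map to a reference layer.
# All coordinates below are made up for illustration.
import numpy as np

# Control points: (pixel_x, pixel_y) on the scanned map and the matching
# (easting, northing) picked off the street centerline layer.
pixels = np.array([(120, 480), (900, 450), (150, 60), (870, 90)], dtype=float)
world  = np.array([(352100.0, 4702100.0), (352900.0, 4702130.0),
                   (352110.0, 4702530.0), (352880.0, 4702500.0)])

# Solve world = [px, py, 1] @ A for the 3x2 affine coefficient matrix A.
design = np.hstack([pixels, np.ones((len(pixels), 1))])   # shape (n, 3)
coeffs, *_ = np.linalg.lstsq(design, world, rcond=None)   # shape (3, 2)

def to_world(px, py):
    """Map a scanned-map pixel to an approximate real-world coordinate."""
    return np.array([px, py, 1.0]) @ coeffs

print(to_world(500, 250))  # roughly the middle of the scanned sheet

The fit is only as good as the control points, which is exactly the point of this post: the effort put into picking and measuring those points should match how the warped map will actually be used.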
The key is understanding the needs and understanding your data. Modeling a system with lower-quality data will give lower-quality results. That isn’t bad; it just means there is a greater margin of error. That may be just fine for the particular application.
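To put a rough number on that margin of error, here is a back-of-the-envelope sketch (again Python, with made-up values rather than any particular system’s data) that simulates how random positional error at two precision levels propagates into something as simple as a computed segment length.

# Back-of-the-envelope illustration: how positional error at different
# precision levels propagates into a derived quantity -- here, the length
# of a 100 m segment between two points. Error magnitudes are assumptions.
import numpy as np

rng = np.random.default_rng(42)
true_pts = np.array([[0.0, 0.0], [100.0, 0.0]])   # a 100 m segment

def simulated_lengths(sigma_m, trials=10_000):
    """Segment lengths after adding random error of sigma_m to each endpoint."""
    noisy = true_pts + rng.normal(0.0, sigma_m, size=(trials, 2, 2))
    return np.linalg.norm(noisy[:, 1] - noisy[:, 0], axis=1)

for label, sigma in [("survey-grade (~0.03 m)", 0.03), ("consumer GPS (~3 m)", 3.0)]:
    lengths = simulated_lengths(sigma)
    print(f"{label}: mean {lengths.mean():.2f} m, spread +/- {lengths.std():.2f} m")

With survey-grade error the computed length barely moves; with consumer-GPS error the spread is on the order of meters – which may still be perfectly acceptable, depending on what the application needs.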
In the next post, I’ll address some system design issues.
