Using MS Access as the query interface to Oracle Spatial
Posted on March 13, 2013 by Adam Lodge
Like many people, I cut my teeth in the database world using MS Access. And, early in my career, I was much more comfortable writing queries using Access's SQL GUI than I was writing SQL at a command prompt. As such, it was common practice for me to make ODBC links to my Oracle Spatial database and use Access as the interface to do the actual querying.
In the case of Oracle Spatial, though, there was one problem: when I tried to link to a table that had a column of the SDO_GEOMETRY data type, I got this error:
Invalid field definition ‘GEOMETRY’ in definition of index or relationship
The solution?… you’re gonna laugh. Make a view in Oracle that excludes the geometry column and connect to that instead of the base table.
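For example, given a hypothetical PARCELS table whose SDO_GEOMETRY column is named GEOMETRY, the view simply selects every column except the geometry (the table and column names here are illustrative):

```sql
-- Hypothetical PARCELS table with an SDO_GEOMETRY column named GEOMETRY.
-- The view lists every column except GEOMETRY so Access can link to it.
CREATE OR REPLACE VIEW parcels_no_geom AS
SELECT parcel_id,
       apn,
       owner_name
  FROM parcels;
```

Link Access to PARCELS_NO_GEOM via ODBC instead of PARCELS and the error goes away, because the unsupported geometry type never crosses the link.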
Remodeling GIS Parcel Data for Ease of Use
Posted on March 15, 2012 by Adam Lodge
In GIS parcel layers, I find that users tend to assume that the APN (assessor's parcel number) represents a unique key for each parcel geometry in a County. Nine times out of ten, this assumption is wrong and results in an inaccurate analysis. Here is why...
APNs represent property tax accounts (a bit like your local gas and electric utility account number), and parcel geometries represent a piece of geography that is taxed by the County. These are two slightly different things. It is possible, for example, to have a single tax account for two discontinuous (non-touching) pieces of geography. In a GIS parcel database, that will usually mean two separate geometry records that link to a single tax account record in the Assessor's system.
This matters because users of GIS data often want to query and/or join data to parcel geometry based on APN. What if you want to know the total assessed value for all parcels within a user-defined polygon? If two of those parcel geometries link to the same APN, then the assessed value for that one APN will be counted twice. Not good.
The best solution is to create a parcel layer in which APN is a unique key. This can be accomplished by combining GIS parcel records into multipolygons wherever an APN is shared. It is ideal if you are running a spatial DBMS that is capable of constructing a view that does this on the fly (such as Oracle Spatial, PostGIS, or SQL Server Spatial).
I recently wrote some SQL in PostGIS that did exactly this for the City of Walnut Creek in support of their Enterprise Addressing System. Below is a sketch of the approach (the table and column names are illustrative):
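```sql
-- Collapse parcel polygons that share an APN into one multipolygon per APN.
-- Assumes a source table named parcel with columns apn and geom.
CREATE OR REPLACE VIEW parcel_by_apn AS
SELECT apn,
       ST_Multi(ST_Union(geom)) AS geom
  FROM parcel
 GROUP BY apn;
```

ST_Union dissolves shared boundaries and collects disjoint pieces, and ST_Multi forces a consistent MultiPolygon type, so every APN maps to exactly one geometry record.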
This is a nice way to transform parcel data so it's a little bit more friendly for end-users.
If anybody is interested I will gladly post how to accomplish the same data transformation using Oracle Spatial, SQL Server Spatial, FME, or ArcGIS.
eCatch 2.0 - rebuilding sustainable fisheries on the California coast using GIS
Posted on December 16, 2011 by Joe Metro
If you follow environmental news, then you have probably already read about eCatch, a new iPad app developed by The Nature Conservancy that gives fishermen real-time access to the latest information on where the fish are – the abundant ones they want to catch and the overfished ones they need to avoid.
The basic idea behind eCatch is that some fish are becoming endangered through overfishing, and the best way to mitigate this is to give fishermen, as an industry, the tools to manage their fishery sustainably themselves.
Using eCatch, fishermen at sea report areas where they caught overfished species and learn from other fishermen where the more abundant species are. By collaborating and sharing information in real time, fishermen can prevent rapid declines in fish populations and rebuild robust fishing industries and communities along California's Central Coast.
eCatch 1.0 is a web-based application that allows for:
- Visualizing the locations of catches for a variety of fishing vessels
- Developing reports that summarize the species and catch from each vessel, and as aggregates for a group of vessels
- Monitoring the progress towards catch limits
- Tracking the capture of depleted species
The results are pretty impressive so far, so The Nature Conservancy has engaged Farallon Geographics to help them develop eCatch 2.0.
eCatch 2.0 will introduce significant enhancements to the UI and mapping technology to make accessing and interacting with fisheries data both more dynamic and more intuitive. It will also expand the application so it is useful to managers and administrators as well as fishermen.
Users will be able to query and visualize catch histories in order to predict trends, as well as define and view reports on species limits and on the location and amount of overfished species taken over a given time period, for individual vessels and aggregated by association (a rough sketch of such a report query follows the list below).
- Fishermen will be able to see, on a Google basemap of the central California coast, the locations where overfished species have been caught from their vessel as well as from any other vessel.
- Fishermen will be able to display any portion of their fishing history on the map by indicating the time span and species caught.
- Managers and Administrators will be able to link to a form that allows for editing logbook data simply by clicking on a set point.
- The application will alert users as soon as an overfished species catch event is entered into the system.
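As a rough illustration of the kind of report query involved (the tables and columns below are assumptions, not the actual eCatch schema):

```sql
-- Total catch by species for one vessel over a time period,
-- flagging overfished species. The schema here is hypothetical.
SELECT s.common_name,
       s.is_overfished,
       SUM(c.weight_lbs) AS total_weight_lbs
  FROM catch_event c
  JOIN species s ON s.species_id = c.species_id
 WHERE c.vessel_id = 42
   AND c.caught_at >= DATE '2011-01-01'
   AND c.caught_at <  DATE '2012-01-01'
 GROUP BY s.common_name, s.is_overfished
 ORDER BY total_weight_lbs DESC;
```

Dropping the vessel filter and grouping by an association column would give the aggregated version.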
Farallon is working with The Nature Conservancy on a set of web services for geospatial data access and reporting, using the Microsoft .NET web framework along with PostGIS/PostgreSQL, ESRI SDE, ESRI ArcGIS Server, ExtJS, and OpenLayers.
eCatch 2.0 is expected to go public sometime this year.
Software development iterations and data backup for GIS web applications: Part 1
Posted on October 07, 2011 by Vinodh Subramanian
Enterprise GIS web application development is largely about two things: how to represent your existing data and how to capture new data. Here at Farallon, we have successfully built some very smart geospatial web applications that guide users through that process.
This post is about how we back up data for these applications in a way that supports both the development life cycle and the production life cycle.
In the early years of GIS web development at Farallon, it was easy to do (or not do) data backup, since most projects were single iterations. As the sophistication and mission-critical nature of GIS web apps grew, we started working on projects that involved multiple iterations, with different developers working on different parts of the application and database concurrently.
The above diagram represents the initial architecture to support projects that had multiple iterations.
The steps followed for every iteration were:
- Apply schema and data changes to the database.
- Check in the changes as .sql files into source control.
- Publish the changes to the server-side developers so that they know how to interact with the data (e.g., an updated ERD).
- Once the iteration is complete, label it as such in source control so that there is a stable database version to roll back to for that iteration.
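As a sketch, a checked-in change script for one iteration might look like this (the file name, table, and column are illustrative, not from an actual project):

```sql
-- changes/iteration_02.sql (one script per iteration, kept in source control)
-- Schema change: track when a site record was last edited.
ALTER TABLE site ADD COLUMN last_edited timestamp;

-- Data change: backfill the new column for existing rows.
UPDATE site SET last_edited = now() WHERE last_edited IS NULL;
```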
In this architecture, schema and data integrity are clearly critical: developers need to be able to concentrate on what they are working on without worrying about the application breaking down or misbehaving due to discrepancies in its source of truth (the database).
For example, let's take a scenario from our archaeological sites management system.
Let's say there is are three tables: Entity with EntityId, Name and Type as the three columns; EntityGeometry with GeometryId, EntityId, Geometry; and mapping table called EntityAddress with StreetNo, StreetName, City, State and Zipcode.
Developer 1 is responsible for building the web form that populates the Entity and EntityGeometry tables. Developer 2 is responsible for the web form that enters EntityAddress records.
Now let's say there is a new requirement for the Entity table schema to include a creation date and a retirement date. So the database developer goes in and adds the two date columns, making them non-null. Developer 1 works on adding the creation/retirement dates to the web form.
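The schema change itself might look like this in Postgres (column names assumed for illustration):

```sql
-- Add the new audit dates to Entity and make them NOT NULL.
-- Every insert must now supply both values, which is exactly why
-- an out-of-date web form will start failing.
ALTER TABLE Entity ADD COLUMN CreationDate date;
ALTER TABLE Entity ADD COLUMN RetirementDate date;

-- Backfill existing rows (illustrative values) before adding the constraint.
UPDATE Entity
   SET CreationDate = CURRENT_DATE, RetirementDate = CURRENT_DATE
 WHERE CreationDate IS NULL;

ALTER TABLE Entity ALTER COLUMN CreationDate SET NOT NULL;
ALTER TABLE Entity ALTER COLUMN RetirementDate SET NOT NULL;
```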
Needless to say, you have to add an Entity to the database before you can add an EntityAddress for that Entity. So if Developer 2 tries to test his code by creating a new Entity, it will break: the web form in source control doesn't supply the new non-null date columns yet, but the database requires them.
From this scenario it's clear that the architecture has its flaws.
In order to resolve that issue, a replica of the dev environment was created on each developer's development box.
To maintain schema and data integrity, schema changes were maintained as .sql files and data as .dat files in source control. Deployment scripts were created so that schema and data could be backed up with a single click on a bat file.
Let's look at the archaeological site management scenario with this new model.
- The database developer can now make the schema and data changes, run the bat file to back them up, and check them in to source control.
- Developer 1 can get the latest from source control, run the bat file to restore the new schema and data, and work on updating the web form to include the new date fields.
- Developer 2 can wait until Developer 1 completes the web form changes before getting the latest from source control for both the database and web form changes, thus allowing him to test his code without issues.
Some other advantages of the above model are that:
- Developers can freely make changes to the data to test multiple scenarios for the component they are building.
- Separating the .dat files into development and production versions makes it easy to keep end users' production data pristine without worrying about it during iterations.
Hopefully I was able to convey how this new model has made life easier for both the database and server developers. In the next couple of blog posts, I will talk about specific implementations for Postgres and SQL Server.