ProConcepts Geometry

The ArcGIS.Core.Geometry namespace contains the geometry classes and members for creating, modifying, deleting, and converting geometry objects, as well as spatial operators and methods for manipulating geometry instances through GeometryEngine.Instance.

ArcGIS.Core.dll

Language:      C#
Subject:       Geometry
Contributor:   ArcGIS Pro SDK Team <[email protected]>
Organization:  Esri, http://www.esri.com
Date:          11/24/2020
ArcGIS Pro:    2.7
Visual Studio: 2017, 2019

Immutable geometries

Geometries are immutable. This means they are read only. Once instantiated, their content cannot be changed.

Why are geometries immutable? The vast majority of geometries in the system are actually static and unchanging—they are features in a geodatabase or a non-editable layer, or returned from a query, geocode, network trace, or geoprocessing tool. Using immutable geometries reflects reality in most cases and helps make behaviors more predictable. Immutable geometries do not change (even by accident). Because geometries cannot change, there is no need for event handlers/listeners to deal with changing geometries in cases where different parts of the system have a reference to essentially the same geometry. Another advantage of immutable geometry objects is their inherent thread safety. Immutable objects are simpler to understand and avoid potential concurrency issues in the multithreaded environment of ArcGIS Pro.

Building geometries

Builder classes

Because geometries are immutable (you cannot change a geometry instance or any of its properties once it is created), the API provides builder classes for each geometry type (including spatial references). The builder classes are flexible and represent a geometry that is being constructed or edited. They also provide a consistent way of creating and modifying geometries.

When building geometries, there are typically two scenarios. You know the entire state of the geometry up front (e.g. you have the x, y and z coordinates of the point you want to create, or you have the set of polylines to create a polygon) or you have a workflow that requires defining the geometry in a step-by-step process (e.g. you want to build a multi-part polygon). If you know the entire state of the geometry up front, you can use static convenience methods on the builder classes to generate an instance of a geometry. In the case where your workflow requires defining the geometry in a step-by-step workflow, you must create an instance of the appropriate geometry builder class and use that to modify the geometry properties before creating an immutable geometry instance via its ToGeometry() method.

Convenience methods can run on any thread; builder constructors need to run on the MCT.

Let's start by examining the simplest of geometries: the MapPoint and the MapPointBuilder. Here are some examples of using both the convenience methods and the MapPointBuilder class. Note the values of the HasZ, HasM, and HasID properties of the resulting MapPoint, which are derived from the method parameters.

  // create a point with x,y 
  MapPoint pt1 = MapPointBuilder.CreateMapPoint(1.0, 2.0);
  // pt1.HasZ = false
  // pt1.HasM = false
  // pt1.HasID = false

  // create a point with x,y,z
  MapPoint pt2 = MapPointBuilder.CreateMapPoint(1.0, 2.0, 3.0);
  // pt2.HasZ = true
  // pt2.HasM = false
  // pt2.HasID = false

  // create a point with x,y,z,m
  MapPoint pt3 = MapPointBuilder.CreateMapPoint(1.0, 2.0, 3.0, 4.0);
  // pt3.HasZ = true
  // pt3.HasM = true
  // pt3.HasID = false

  MapPoint pt4 = null;
  MapPoint pt5 = null;
  MapPoint pt6 = null;

  // Builder constructors need to run on the MCT.
  ArcGIS.Desktop.Framework.Threading.Tasks.QueuedTask.Run(() =>
  {
    // create a point with x,y
    using (MapPointBuilder mb = new MapPointBuilder(1.0, 2.0))
    {
      // properties on the builder are derived from the parameters
      // mb.HasZ = false
      // mb.HasM = false
      // mb.HasID = false

      pt4 = mb.ToGeometry();

      // properties on the MapPoint are as per the builder properties
      // pt4.HasZ = false
      // pt4.HasM = false
      // pt4.HasID = false
    }
    // create a point with x,y,z
    using (MapPointBuilder mb = new MapPointBuilder(1.0, 2.0, 3.0))
    {
      // properties on the builder are derived from the parameters
      // mb.HasZ = true
      // mb.HasM = false
      // mb.HasID = false

      pt5 = mb.ToGeometry();

      // properties on the MapPoint are as per the builder properties
      // pt5.HasZ = true
      // pt5.HasM = false
      // pt5.HasID = false
    }
    // create a point with x,y,z,m
    using (MapPointBuilder mb = new MapPointBuilder(1.0, 2.0, 3.0, 4.0))
    {
      // properties on the builder are derived from the parameters
      // mb.HasZ = true
      // mb.HasM = true
      // mb.HasID = false

      pt6 = mb.ToGeometry();

      // properties on the MapPoint are as per the builder properties
      // pt6.HasZ = true
      // pt6.HasM = true
      // pt6.HasID = false
    }
  });

  // create a point from another point 
  MapPoint pt7 = MapPointBuilder.CreateMapPoint(pt6);
  // properties on the MapPoint are derived from the parameters
  // pt7.HasZ = true
  // pt7.HasM = true
  // pt7.HasID = false

  MapPoint pt8 = null;

  // Builder constructors need to run on the MCT.
  ArcGIS.Desktop.Framework.Threading.Tasks.QueuedTask.Run(() =>
  {
    // create a point from another point
    using (MapPointBuilder mb = new MapPointBuilder(pt2))
    {
      // properties on the builder are derived from the parameters
      // mb.HasZ = true
      // mb.HasM = false
      // mb.HasID = false

      pt8 = mb.ToGeometry();

      // properties on the MapPoint are as per the builder properties
      // pt8.HasZ = true
      // pt8.HasM = false
      // pt8.HasID = false
    }
  });

What if we want to create a copy of a MapPoint but ensure the Z value is 10? Clearly the convenience methods cannot be used; they always produce an immutable geometry which cannot be altered. This is where the power of the geometry builder classes lies: they give you the ability to manipulate and set properties of the geometry before it is created. Here is a snippet showing how a cloned MapPoint with a Z value of 10 is created.

  MapPoint ptWithZ = null;

  // Builder constructors need to run on the MCT.
  ArcGIS.Desktop.Framework.Threading.Tasks.QueuedTask.Run(() =>
  {
    // create a point from another point
    using (MapPointBuilder mb = new MapPointBuilder(pt))
    {
      // initially HasZ, HasM, HasID properties on the builder are derived according to the 
      // HasZ, HasM, HasID values of the 'pt' parameter

      // we want a point with Z value of 10

      // explicitly set the HasZ attribute on the builder
      mb.HasZ = true;
      // set the Z value
      mb.Z = 10;
      // return the geometry
      ptWithZ = mb.ToGeometry();

     // ptWithZ.Z = 10
     // ptWithZ.HasZ = true
     // ptWithZ.HasM = (inherited from pt)
     // ptWithZ.HasID = (inherited from pt)
    }
  });

Note that just setting the Z value on the MapPointBuilder does not automatically set the HasZ attribute; it has to be set explicitly to ensure the resulting point has HasZ true. (Assume we don't know the value of pt.HasZ.)

Next, let's look at the Polyline and the PolylineBuilder classes. As with the MapPoint, if you know the entire geometry up front, you have the option of using one of the many CreatePolyline convenience methods on the PolylineBuilder class; otherwise, create an instance of the PolylineBuilder class, manipulate its properties, and then use the ToGeometry() method to obtain the polyline.

As we saw earlier, the HasZ, HasM, and HasID attributes are derived from the parameters when using the convenience methods. For builder classes other than the MapPointBuilder, the attributes are derived ONLY when the constructor parameter is of the same geometry type as the builder; otherwise they default to false. Let's review a few examples of creating polylines.

  List<MapPoint> list3D = new List<MapPoint>();
  list3D.Add(MapPointBuilder.CreateMapPoint(1.0, 1.0, 1.0, 2.0));
  list3D.Add(MapPointBuilder.CreateMapPoint(1.0, 2.0, 3.0, 6.0));
  list3D.Add(MapPointBuilder.CreateMapPoint(2.0, 2.0, 1.0, 2.0));
  list3D.Add(MapPointBuilder.CreateMapPoint(2.0, 1.0, 1.0, 2.0));

  Polyline p = PolylineBuilder.CreatePolyline(list3D);
  // attributes are defined from the parameters 
  // (list of MapPoints with HasZ = true, HasM = true)
  // p.HasZ = true
  // p.HasM = true
  // p.HasID = false

  Polyline pWithZM = null;

  // Builder constructors need to run on the MCT
  ArcGIS.Desktop.Framework.Threading.Tasks.QueuedTask.Run(() =>
  {
      PolylineBuilder pb = new PolylineBuilder(list3D);
      // because the parameters are NOT the same geometry type as the builder
      // pb.HasZ, pb.HasM, pb.HasID default to false

      p = pb.ToGeometry();
      // p.HasZ = false
      // p.HasM = false

      // if we want a polyline to have HasZ, HasM we need to set 
      // the attribute on the builder explicitly
      pb.HasZ = true;
      pb.HasM = true;

      pWithZM = pb.ToGeometry();
      // now the resulting polyline HasZ, HasM are true
      // pWithZM.HasZ = true
      // pWithZM.HasM = true
  });

  Polyline pClone = PolylineBuilder.CreatePolyline(pWithZM);
  // attributes are defined from the parameters
  // pClone.HasZ = true
  // pClone.HasM = true
  // pClone.HasID = false

  // Builder constructors need to run on the MCT
  ArcGIS.Desktop.Framework.Threading.Tasks.QueuedTask.Run(() =>
  {
      PolylineBuilder pb = new PolylineBuilder(pWithZM);
      // because the parameter matches the geometry type of the builder
      // the attribute values are derived from the parameter

      // pb.HasZ = true
      // pb.HasM = true
      // pb.HasID = false

      Polyline p2 = pb.ToGeometry();
      // p2.HasZ = true
      // p2.HasM = true
      // p2.HasID = false
  });

The rules determining when attribute awareness is derived are important. Here is a summary:

  • If using the empty builder constructor (e.g. var pb = new PolylineBuilder()), you must set the HasZ, HasM, HasID properties before ToGeometry is called in order for the returned geometry to have the appropriate attributes.
  • If using a builder constructor that takes parameters that are NOT the same geometry type (e.g. var pb = new PolylineBuilder(points)), you must set the HasZ, HasM, HasID properties before ToGeometry is called in order for the returned geometry to have the appropriate attributes.
  • If using a builder constructor that takes a parameter that IS the same geometry type (e.g. var pb = new PolylineBuilder(polyline)), the geometry returned inherits the HasZ, HasM, HasID attributes of the input geometry unless you change them explicitly on the builder.
  • If using a convenience method (e.g. var p = PolylineBuilder.CreatePolyline(points)), the geometry returned has the HasZ, HasM, and HasID attributes of the input geometries.
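
For example, here is a minimal sketch of the first rule, reusing the list3D points from the earlier snippet. It assumes PolylineBuilder.AddPart accepts a list of points, as PolygonBuilder.AddPart does later in this topic; as with the other first generation builders, the constructor must run on the MCT.

  // Builder constructors need to run on the MCT
  ArcGIS.Desktop.Framework.Threading.Tasks.QueuedTask.Run(() =>
  {
      // empty constructor - the builder is not Z, M or ID aware
      PolylineBuilder pb = new PolylineBuilder();

      // opt in to Z and M awareness explicitly before ToGeometry is called
      pb.HasZ = true;
      pb.HasM = true;

      // add the points as a single part
      pb.AddPart(list3D);

      Polyline zmPolyline = pb.ToGeometry();
      // zmPolyline.HasZ = true
      // zmPolyline.HasM = true
  });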

Builder classes exist for all the other geometry types - PolygonBuilder, EnvelopeBuilder, MultipointBuilder, MultipatchBuilder, and GeometryBagBuilder. Each of these builder classes has numerous constructor overloads along with multiple convenience methods to facilitate geometry creation.

There are also builder classes for the segment types: EllipticArcBuilder for elliptic arcs, CubicBezierBuilder for cubic Bézier curves, and LineBuilder for line segments.
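
The segment builders follow the same convenience-method pattern. Here is a hedged sketch (the exact overloads may vary by release; LineBuilder.CreateLineSegment and the CreatePolyline(Segment) overload are assumed):

  // create the start and end points of the segment
  MapPoint startPt = MapPointBuilder.CreateMapPoint(0.0, 0.0);
  MapPoint endPt = MapPointBuilder.CreateMapPoint(10.0, 10.0);

  // create a line segment using the convenience method
  LineSegment lineSegment = LineBuilder.CreateLineSegment(startPt, endPt);

  // a segment is not a geometry by itself; wrap it in a polyline
  Polyline polylineFromSegment = PolylineBuilder.CreatePolyline(lineSegment);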

There is also a SpatialReferenceBuilder for building SpatialReference objects.

Second generation builder classes

Starting at ArcGIS Pro 2.5, a new generation of builder classes began to be introduced. These remove the requirement that the builder be created on the MCT, meaning the second generation builders can run on any thread.

Currently, the following classes follow this pattern: MapPointBuilderEx and MultipatchBuilderEx. The MultipointBuilderEx class is available at ArcGIS Pro 2.6, and the EnvelopeBuilderEx class is available at ArcGIS Pro 2.7. Builders following this pattern for the remaining geometry types will be added in subsequent releases.
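
Here is a minimal sketch using MapPointBuilderEx, assuming it follows the same property pattern as MapPointBuilder; note that no QueuedTask.Run is required:

  // Second generation builders can be created on any thread.
  MapPointBuilderEx mbEx = new MapPointBuilderEx(1.0, 2.0, 3.0);
  // mbEx.HasZ = true (derived from the constructor parameters)

  // attributes can still be set explicitly before the geometry is created
  mbEx.HasM = true;
  mbEx.M = 4.0;

  MapPoint exPoint = mbEx.ToGeometry() as MapPoint;
  // exPoint.HasZ = true
  // exPoint.HasM = true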

Building a polygon from points

In this topic, you'll learn how to create Polygon and Polyline geometries from scratch using Coordinate and MapPoint geometries. You'll learn how to access the segments of a polygon and how to change its content using the builder classes. Finally, you'll apply some spatial operations using the GeometryEngine.Instance to form a new polygon.

Since you're creating a geometry of type Polygon, the use of PolygonBuilder is the obvious choice.

The polygon you're going to build is a rectangle:

Build A Rectangle

The polygon consists of four points with four linear segments connecting the points. The points are described by their coordinates containing the x and y values for the location using the Web Mercator spatial reference.

Looking at the PolygonBuilder class, you'll notice multiple overloaded constructors helping you to initialize the builder. At the same time, you'll also notice static convenience methods that allow you to fully describe the state of the geometry and produce a geometry in a single line of code.

The polygon is described by the four corners, so you need to create the points first. Points are geometries of type MapPoint and use a builder class of type MapPointBuilder. The MapPoint class contains the location values and the spatial reference information. The SpatialReference is immutable as well and uses a similar builder approach as the other types of geometry.

Light-weight alternatives, Coordinate2D and Coordinate3D, are also available. These structs are useful when you want to avoid the overhead of creating a MapPoint for use in the construction of a higher-level geometry such as Polygon or Polyline. Because they are structs, Coordinate2D and Coordinate3D possess the same advantages as immutable geometries with respect to thread safety. Coordinate2D and Coordinate3D can be freely passed to any thread within ArcGIS Pro.

With this information, you can now create the polygon with the following code using Coordinate2D:

// Create a spatial reference using the WKID (well-known ID) 
// for the Web Mercator coordinate system.
var mercatorSR = SpatialReferenceBuilder.CreateSpatialReference(3857);

// Create a list of coordinates describing the polygon vertices.
var vertices = new List<Coordinate2D>();
vertices.Add(new Coordinate2D(-13046167.65, 4036393.78));
vertices.Add(new Coordinate2D(-13046167.65, 4036404.5));
vertices.Add(new Coordinate2D(-13046161.693, 4036404.5));
vertices.Add(new Coordinate2D(-13046161.693, 4036393.78));

// Use the builder to create the polygon object.
var polygon = PolygonBuilder.CreatePolygon(vertices, mercatorSR);

or using MapPoint:

// Create a spatial reference using the WKID (well-known ID) 
// for the Web Mercator coordinate system.
var mercatorSR = SpatialReferenceBuilder.CreateSpatialReference(3857);

// Use the builder to create points that will become vertices.
var corner1Point = MapPointBuilder.CreateMapPoint(-13046167.65, 4036393.78, mercatorSR);
var corner2Point = MapPointBuilder.CreateMapPoint(-13046167.65, 4036404.5, mercatorSR);
var corner3Point = MapPointBuilder.CreateMapPoint(-13046161.693, 4036404.5, mercatorSR);
var corner4Point = MapPointBuilder.CreateMapPoint(-13046161.693, 4036393.78, mercatorSR);

// Create a list of all map points describing the polygon vertices.
var points = new List<MapPoint>(){corner1Point, corner2Point, corner3Point, corner4Point};

// use the builder to create the polygon container
var polygonFromBuilder = new PolygonBuilder(points);

There are a couple of things to note. When using coordinates to build the polygon, it is assumed you will provide a spatial reference when calling CreatePolygon. When using map points for constructing the geometry, you don't need to explicitly state the spatial reference as the information can be taken from the coordinate system of the points. It is also assumed that the spatial reference of the points is the same or null. (If you provide MapPoints with different spatial references, a System.InvalidOperationException will be thrown.)

The order of the coordinates/map points is important. Based on the above sketch, the order is Point1, Point2, Point3, and Point4. Point1 and Point2 form line segment 1, Point2 and Point3 form line segment 2, and so on. The segments are oriented clockwise and as such describe an exterior ring of the polygon. Polygons are closed, meaning their last segment goes back to the start point of the first segment. Even though you only provided four points, enough to construct three segments, the polygon itself has four segments and five points. The additional information is generated by the builder to close the polygon.
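
You can verify this closing behavior on the polygon created above; a small sketch using the PartCount and PointCount properties:

// The rectangle has a single ring.
// polygon.PartCount = 1

// Four input points, plus the point added by the builder to close the ring.
// polygon.PointCount = 5
Console.WriteLine($"Parts: {polygon.PartCount}, Points: {polygon.PointCount}");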

Based on the previous explanation, consider the difference if you used a polyline builder as opposed to a polygon builder with the same set of points.

// Create a spatial reference using the WKID (well-known ID) 
// for the Web Mercator coordinate system.
var mercatorSR = SpatialReferenceBuilder.CreateSpatialReference(3857);

// Create a list of coordinates describing the polyline vertices.
var vertices = new List<Coordinate2D>();
vertices.Add(new Coordinate2D(-13046167.65, 4036393.78));
vertices.Add(new Coordinate2D(-13046167.65, 4036404.5));
vertices.Add(new Coordinate2D(-13046161.693, 4036404.5));
vertices.Add(new Coordinate2D(-13046161.693, 4036393.78));

// Use the builder to create the polyline object.
var polyline = PolylineBuilder.CreatePolyline(vertices, mercatorSR);

The result is now the following:

Rectangle with PolylineBuilder

The line geometry still has one part, but now it has only the original four points and three linear segments you provided. The fourth segment was not created by the builder.

Changing segments in a geometry

From the Polygon class, you get read-only access to the parts of the polygon. Polygon parts, or rings, can be retrieved as either segment collections or point collections derived from the segment collection vertices. The builder classes allow read-write access to the properties of a geometry.
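
For example, the segments of the rectangle built earlier can be read (but not changed) through the Parts collection; this sketch simply reports each segment:

// Read-only access: iterate the rings (parts) of the polygon and their segments.
ReadOnlyPartCollection polygonParts = polygon.Parts;
foreach (ReadOnlySegmentCollection ring in polygonParts)
{
  foreach (Segment segment in ring)
  {
    Console.WriteLine($"{segment.SegmentType}: length = {segment.Length}");
  }
}

// The vertices are also available as a read-only point collection.
ReadOnlyPointCollection vertices = polygon.Points;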

Using the builder class, you can add and remove segments to reshape the geometry. Here you remove the last segment, indicated by the -1 argument, with the additional argument closeGap = false so as not to close the polygon; the intent is to replace the deleted segment. You replace it with an elliptic arc constructed from three points, which you add at the end of the polygon builder's segment collection, effectively closing the geometry.

// Remove the last segment - line segment - from the 1st part of the polygon
// without closing the polygon, the intent is to replace the segment.
polygonFromBuilder.RemoveSegment(0, -1, false);

// Create a coordinate through which the elliptic arc needs to pass.
var interiorCoordinate = new Coordinate2D(-13046164.6, 4036383);
// Build a new segment of type EllipticArc.
var arcSegment = EllipticArcBuilder.CreateEllipticArcSegment(corner4Point, corner1Point, interiorCoordinate, mercatorSR);

// Add the new segment into the polygon.
polygonFromBuilder.AddSegment(arcSegment);

The resulting polygon looks like this:

Resulting polygon

Representing geometries with curves as JSON objects

For a description of the JSON representation for points, multipoints, and linear segment based polylines and polygons, please refer to the ArcGIS REST API documentation.

In the ArcGIS Pro geometry API, a circular arc, an elliptic arc, and a Bézier curve can be represented as a JSON curve object. A curve object is given in a compact “curve to” manner with the first element representing the “to” (end) point. The “from” (start) point is derived from the previous segment or curve object.

The supported curve objects are as follows:

  • Circular Arc "c"
    Converted to EllipticArcSegment class
    Defined by end point and an interior point where interior point is a point on the arc between the start point and the end point

    {"c": [[x, y, <z>, <m>], [interior_x, interior_y]]}

  • Arc "a"

    • Elliptic Arc
      Converted to EllipticArcSegment class
      Defined by

      • end point
      • center point
      • minor: 1 if the arc is minor, 0 if the arc is major
      • clockwise: 1 if the arc is oriented clockwise, 0 if the arc is oriented counterclockwise
      • rotation: angle of rotation of major axis in radians with a positive value being counterclockwise
      • axis: length of the semi-major axis
      • ratio: ratio of minor axis to major axis

      {"a": [[x, y, <z>, <m>], [center_x, center_y], minor, clockwise, rotation, axis, ratio]}

    • Circular Arc (old format)
      Special case of elliptic arc
      Converted to EllipticArcSegment class
      Exclude rotation, axis, ratio

      {"a": [[x, y, <z>, <m>], [center_x, center_y], minor, clockwise]}

  • Bézier Curve "b"
    Converted to CubicBezierSegment class
    Defined by end point and two control points

    {"b": [[x, y, <z>, <m>], [x, y], [x, y]]}

Polyline with Curves

A JSON string representing a polyline with curves contains an array of curvePaths and an optional "spatialReference". Each curve path is represented as an array containing points and curve objects.

{"curvePaths": [start point, next point or curve object, … ] }

Examples

  1. A polyline which is a circular arc from (0, 0) to (3, 3) through (1, 4).
{"curvePaths": [[[0,0], {"c": [[3,3], [1,4]]}]]}

Polyline 1

  2. A polyline containing a line segment from (6, 3) to (5, 3), a Bézier curve from (5, 3) to (3, 2) with control points (6, 1) and (2, 4), a line segment from (3, 2) to (1, 2) and an elliptic arc from (1, 2) to (0, 2) with center point (0, 3), minor = 0, clockwise = 0, rotation = 2.094395102393195 (120 degrees), semi-major axis = 1.78, ratio = 0.323.
{
  "curvePaths": 
  [[
    [6,3], [5,3],
    {"b": [[3,2], [6,1], [2,4]]},
    [1,2],
    {"a": [[0,2], [0,3],0,0,2.094395102393195,1.78,0.323]}
  ]]
}

Polyline 2

Polygon with Curves

A JSON string representing a polygon with curves contains an array of curveRings and an optional spatialReference. Each curve ring is represented as an array containing points and curve objects.

{"curveRings": [ [ start point, next point or curve object, … ] ] }

Example

A multipart polygon with m-values. The first part contains three line segments, a Bézier curve from (11, 12) to (15, 15) with control points (10, 17) and (18, 20) which is closed with a line segment to (11, 11). The second part contains a circular arc from (22, 16) to (17, 15) through (22, 14) which is closed with a line segment to (22, 16).

{
  "hasM": true,
  "curveRings": 
  [
    [
      [11,11,1], [10,10,2], [10,11,3], [11,12,4],
      {"b": [[15,15,5], [10,17], [18,20]]},
      [11,11,1]
    ],
    [
      [22,16,1],
      {"c": [[17,15,2], [22,14]]},
      [22,16,1]
    ]
  ] 
}

Polygon
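
To work with these JSON representations in code, GeometryEngine.Instance provides import/export methods. The sketch below assumes the ImportFromJson method and the JsonImportFlags enumeration; the names are hedged, so check the API reference for the exact signature.

// JSON for the multipart polygon with curves shown above.
string curveJson = "{\"hasM\":true,\"curveRings\":[[[11,11,1],[10,10,2],[10,11,3],[11,12,4]," +
                   "{\"b\":[[15,15,5],[10,17],[18,20]]},[11,11,1]]," +
                   "[[22,16,1],{\"c\":[[17,15,2],[22,14]]},[22,16,1]]]}";

// Import the JSON string into a Polygon (assumed method/flag names).
Polygon curvePolygon = GeometryEngine.Instance.ImportFromJson(
    JsonImportFlags.JsonImportDefaults, curveJson) as Polygon;

// The imported polygon contains the Bézier and circular arc segments described above.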

Multipatches

See the Multipatches Concepts document for specifics about working with multipatch geometries.

GeometryEngine.Instance

Use GeometryEngine.Instance for performing geometric operations. The methods of this interface allow the user to discover relations between two geometries (for example, Touches, Within, Contains, and so on) as well as to construct new geometries based on topological relationships between existing geometries (for example, Union, ConvexHull, Cut, and so on). The result of each operation is a new geometry instance.

Performing spatial operations

Polyline and Polygon inherit from Multipart, meaning they can be composed of more than one part. In the first step, you created a polygon from an instance of a polygon builder.

You will now use GeometryEngine.Instance to create new geometries, which you'll assemble into a final geometry. First, you move your polygon by 15 units (15 meters, based on the units of the spatial reference) in the y-direction.

// Move an existing polygon in the y-direction.
var movedPolygon = GeometryEngine.Instance.Move(polygon, 0, 15) as Polygon;

Because geometries are immutable, the resulting movedPolygon is a new geometry instance.

Here's another example. You create a polygon that is half the size of the original geometry.

// Scale an existing polygon around the label point, 
// i.e., reduce the polygon by 50%.
var smallerPolygon = GeometryEngine.Instance.Scale(polygon, GeometryEngine.Instance.LabelPoint(polygon), .5, .5);

Add the moved polygon as a new part into the polygonFromBuilder instance.

// Add a second part into the polygon builder.
polygonFromBuilder.AddPart(movedPolygon.Points);

As a last step, you'll perform a topological operation by calculating the difference between the shrunken polygon, smallerPolygon, and the composite two-part polygon created from your original points and the movedPolygon. Since a geometry is expected, you need to convert (or build) the current state of the builder into an instance by calling its ToGeometry() method.

// Use GeometryEngine.Instance to cut a hole out of the first part.
var finalPolygon = GeometryEngine.Instance.Difference(polygonFromBuilder.ToGeometry(), smallerPolygon);

The difference operation will cut a hole out of the polygonFromBuilder geometry. You'll use the subtracted area of the hole as your third part of the finalPolygon geometry describing an interior part. The segments of the interior part are oriented counter-clockwise.

The resulting polygon geometry and the orientation of the segments looks like the following:

Final Polygon

Performing relational operations

The predefined relational operations in GeometryEngine.Instance are Contains, Crosses, Disjoint, Equals, Intersects, Overlaps, Touches, and Within. At ArcGIS Pro 1.2, GeometryEngine.Instance gained a method called Relate, which allows you to create custom relational operations. You can read more about the Relate method at Relate and the dimensionally extended nine intersection model (DE 9IM).

To see how the relational operations work, first review the definitions of dimensionality, interiors, boundaries, and exteriors for the basic geometry types.

Dimensionality

  • All point and multipoint shapes are zero dimensional.
  • All polyline shapes are one dimensional.
  • All polygon shapes are two dimensional.

Note that the presence of z-coordinates or m-coordinates does not affect the dimensionality of the geometry.

Interiors, boundaries, and exteriors

Each type of geometry has an interior, a boundary, and an exterior, which are important in understanding relational operators.

  • Point—A point represents a single location in space. The interior of a point is the point itself, the boundary is the empty set, and the exterior is all other points.

  • Multipoint—A multipoint is an ordered collection of points. The interior of a multipoint is the set of points in the collection, the boundary is the empty set, and the exterior is the set of points that are not in the collection.

  • Polyline—A polyline is an ordered collection of paths where each path is a collection of contiguous segments. A segment has a start and an end point. The boundary of a polyline is the set of start and end points of each path, the interior is the set of points in the polyline that are not in the boundary, and the exterior is the set of points that are not in the boundary or the interior. For the polyline shown below, the set of points comprising the boundary is shown in red. The interior of the polyline is shown in black.

  • Polygon—A polygon is defined by a collection of rings. Each ring is a collection of contiguous segments such that the start point and the end point are the same.

The boundary of a polygon is the collection of rings by which the polygon is defined. The boundary contains one or more outer rings and zero or more inner rings. An outer ring is oriented clockwise, while an inner ring is oriented counter-clockwise. Imagine walking clockwise along an outer ring. The area to your immediate right is the interior of the polygon and to your left is the exterior.

Similarly, if you were to walk counter-clockwise along an inner ring, the area to your immediate right is the interior of the polygon and to your left is the exterior.

In the following images, the blue geometry is A, and the red geometry is B.

Predefined relational operations

The predefined relational operations in GeometryEngine.Instance are

  • Contains—One geometry contains another if the contained geometry is a subset of the container geometry and their interiors have at least one point in common. Contains is the inverse of Within.

  • Crosses—Two polylines cross if they meet at points only, and at least one of the shared points is internal to both polylines. A polyline and polygon cross if a connected part of the polyline is partly inside and partly outside the polygon.

  • Disjoint—Two geometries are disjoint if they don’t have any points in common.

  • Equals—Two geometries are equal if they occupy the same space.

  • Intersects—Two geometries intersect if they share at least one point in common.

  • Overlaps—Two geometries overlap if they have the same dimension, and their intersection also has the same dimension but is different from both of them.

  • Touches—Geometry A touches Geometry B if the intersection of their interiors is empty, but the intersection of Geometry A and Geometry B is not empty.

  • Within—One geometry is within another if it is a subset of the other geometry and their interiors have at least one point in common. Within is the inverse of Contains.
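
As a small sketch of these operations, the rectangle polygon from earlier in this topic can be tested against a point that lies inside it (the coordinates below are hypothetical values chosen to fall inside that rectangle):

// A point inside the rectangle built earlier (Web Mercator coordinates).
MapPoint insidePoint = MapPointBuilder.CreateMapPoint(-13046165.0, 4036400.0, mercatorSR);

bool contains = GeometryEngine.Instance.Contains(polygon, insidePoint);  // true
bool within   = GeometryEngine.Instance.Within(insidePoint, polygon);    // true
bool disjoint = GeometryEngine.Instance.Disjoint(insidePoint, polygon);  // false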

Relate and the Dimensionally Extended Nine-Intersection Model (DE 9IM)

At 1.2, GeometryEngine.Instance has a new method called Relate, which allows you to create custom relational operations using a Dimensionally Extended Nine-Intersection Model, DE-9IM, formatted string. Note that at this time, the GeometryEngine.Instance.Relate method does not support geometries with curves, so you must first densify the geometry if it contains curve segments.

All of the predefined relational operations (Contains, Crosses, Disjoint, Equals, Intersects, Overlaps, Touches, and Within) can be expressed using the GeometryEngine.Instance.Relate method, but Relate offers much more. A review of the predefined relational operations can be found at Performing relational operations.

An explanation of DE-9IM, as well as examples, are given below. More information about DE-9IM can be found at https://en.wikipedia.org/wiki/DE-9IM or by downloading the OGC specification “OpenGIS Simple Features Specification For SQL Revision 1.1” from http://www.opengis.org.

For any geometry A, let I(A) be the interior of A, B(A) be the boundary of A, and E(A) be the exterior of A. For any set x of geometries, let dim(x) be the maximum dimension (-1, 0, 1, or 2) of the geometries in x, where -1 is the dimension of the empty set. A DE-9IM has the following form:
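
The original figure of the matrix is not reproduced here; the standard DE-9IM form is the 3×3 matrix of intersection dimensions (rows: interior, boundary, exterior of A; columns: interior, boundary, exterior of B):

$$
\begin{bmatrix}
\dim(I(A)\cap I(B)) & \dim(I(A)\cap B(B)) & \dim(I(A)\cap E(B)) \\
\dim(B(A)\cap I(B)) & \dim(B(A)\cap B(B)) & \dim(B(A)\cap E(B)) \\
\dim(E(A)\cap I(B)) & \dim(E(A)\cap B(B)) & \dim(E(A)\cap E(B))
\end{bmatrix}
$$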

For example, consider two overlapping polygons and the associated DE-9IM.

Image

The intersection of the interiors, I(A)∩I(B), is a polygon that has dimension 2. The intersection of the interior of A and the boundary of B, I(A)∩B(B), is a line that has dimension 1, and so forth.

A pattern matrix represents all acceptable values for the DE-9IM of a spatial relationship predicate on two geometries. The possible pattern values for any cell such that x is the intersection set are {T, F, *, 0, 1, 2} where

  • T => dim(x) ϵ {0, 1, 2}, i.e., x is not empty
  • F => dim(x) = -1, i.e., x is empty
  • * => dim(x) ϵ {-1, 0, 1, 2}, i.e., don’t care
  • 0 => dim(x) = 0
  • 1 => dim(x) = 1
  • 2 => dim(x) = 2

The pattern matrix can be represented as a string of nine characters listed row by row from left to right. For example, the pattern matrix given above for overlapping polygons can be represented by the string “212101212”. The string representing two geometries, not necessarily polygons, that overlap is “T*T***T**” .

The GeometryEngine.Instance.Relate method has the following signature:

bool Relate(Geometry geometry1, Geometry geometry2, string relateString)

where relateString is a string representation of a pattern matrix.

If the spatial relationship between the two geometries corresponds to the values as represented in the string, the Relate method returns true. Otherwise, the Relate method returns false.
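
For instance, here is a hedged sketch of calling Relate with the Contains pattern derived in Example 1 below; polygonA and pointB are placeholders for existing geometries in the same spatial reference:

// Does polygonA contain pointB? Use the pattern matrix string for Contains.
bool aContainsB = GeometryEngine.Instance.Relate(polygonA, pointB, "T*****FF*");

// Equivalent predefined operation, shown for comparison.
bool aContainsB2 = GeometryEngine.Instance.Contains(polygonA, pointB);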

In the following examples, the blue geometry is Geometry A, and the red geometry is Geometry B.

Example 1: Does Geometry A contain Geometry B?

Recall that Geometry A contains Geometry B if:

  • Geometry B is a subset of Geometry A, and
  • Their interiors have at least one point in common

Clearly, for Geometry A to contain Geometry B the set, I(A)∩I(B), representing the intersection of the interiors must not be empty. Also, if no part of B is outside of A, the intersection of the exterior of A with the interior of B and the boundary of B should be empty. In other words, E(A)∩I(B) must be empty, and E(A)∩B(B) must be empty. There are no other requirements, so the rest of the cells are filled with *.

Pattern matrix for Contains relationship

The string that you pass to the GeometryEngine.Instance.Relate method is “T*****FF*”.

Example 2: Does Geometry A completely contain Geometry B?

The power of the GeometryEngine.Instance.Relate method is that you can create custom relationships. Suppose you want to know if A completely contains B.

As before, B must be a subset of the interior of A so, I(A)∩I(B) must not be empty, E(A)∩I(B) must be empty, and E(A)∩B(B) must be empty (or T*****FF*).

Now there is the extra requirement that Geometry A completely contains Geometry B. This means that the boundary of A must not intersect the interior or the boundary of B. In other words, the intersection of the boundary of A with the interior of B and the boundary of B must be empty or B(A)∩I(B) must be empty and B(A)∩B(B) must be empty. Add two more entries to the DE-9IM matrix giving:

Pattern matrix for Completely Contains relationship

The string that you pass to the GeometryEngine.Instance.Relate method is now “T**FF*FF*”.

Example 3: Does Geometry A touch Geometry B?

Recall that two geometries, A and B, touch if the intersection of their interiors is empty, but the intersection of A and B is not empty. The first requirement is that the intersection of their interiors, I(A)∩I(B), is empty. Given the first requirement, what does it mean to say that the intersection of A and B is not empty? It means that one of the following must be true:

  • Boundary of A intersect interior of B is not empty
  • Interior of A intersect boundary of B is not empty
  • Boundary of A intersect boundary of B is not empty

Case 1: Boundary of A intersect interior of B is not empty

The DE-9IM matrix for this case is:

The string that you pass to the GeometryEngine.Instance.Relate method is “F**T*****”.

What geometry types does this case apply to? The boundary of A is not empty, so you know that A is not a point or a multipoint. Therefore, A is a polygon or a polyline. B cannot be a polygon because then the interiors would intersect. If A is a polygon, then B must be a point or multipoint. If A is a polyline, then B must be a point, multipoint, or polyline.

Case 2: Interior of A intersect boundary of B is not empty

The DE-9IM matrix for this case is:

The string that you pass to the GeometryEngine.Instance.Relate method is “FT*******”.

Case 2 is the inverse of Case 1. B is a polygon or polyline. If B is a polygon, then A is a point or multipoint. If B is a polyline, then A is a point, multipoint, or a polyline.

Case 3: Boundary of A intersect boundary of B is not empty

The DE-9IM matrix for this case is:

The string that you pass to the GeometryEngine.Instance.Relate method is “F***T****”.

What geometry types does this example apply to? Neither A nor B has an empty boundary, so both A and B must be a polygon or a polyline.

The relate string for Touches is “F**T*****” or “FT*******” or “F***T****”. Which string you use depends on the geometry types. For example, to find out if Point A touches Polygon B, you pass the string “FT*******” to the GeometryEngine.Instance.Relate method.

Example 4: Polygon A touches Polygon B at points only

From Example 3, you know that the relate string for Polygon A touches Polygon B is “F***T****”. Now you are specifying that not only do their boundaries intersect, they intersect at points only. In other words, dim(B(A)∩B(B)) = 0.

The DE-9IM matrix for this example is:

The string that you pass to the GeometryEngine.Instance.Relate method is “F***0****”.

Example 5: Polyline A is disjoint from Multipoint B

In this example, you want to know if Geometry A and Geometry B are disjoint but only if A is a polyline and B is a multipoint.

Clearly, the interiors and boundaries must not intersect. This is the matrix so far:

Notice that the intersection of the interior of A and the exterior of B is equal to the interior of A. In other words, I(A)∩E(B) = I(A) and dim(I(A)) = 1. You can fill in another cell of the matrix.

The intersection of the boundary of A and the exterior of B is equal to the boundary of A or B(A)∩E(B) = B(A). The boundary of A is the set of endpoints of the polyline, so dim(B(A)) = 0. Another cell gets filled in.

Looking at the last row in the matrix, you see that E(A)∩I(B) = B, E(A)∩B(B) is the empty set, and E(A)∩E(B) is everything except A and B.

The completed DE-9IM matrix is:

The string that you pass to the GeometryEngine.Instance.Relate method is “FF1FF00F2”.

Acceleration to improve performance of relational operations

What is acceleration?

Acceleration is the process of constructing data structures used during relational operations, such as a spatial index, and pinning them in memory to be reused. This process can have performance benefits when performing relational operations.

When should you use acceleration?

Acceleration is only applicable to the relational operations, that is, Contains, Crosses, Disjoint, Disjoint3D, Equals, Intersects, Relate, Touches, and Within. (There is no harm in passing an accelerated geometry to any other operation; the acceleration structures are ignored.)

If you are performing relational operations on the same geometry several times by comparing it to different geometries, you may want to accelerate the geometry. Accelerating a geometry can be time consuming, so if you are going to use the geometry in a relational operation only once or twice, then don't accelerate it. The time to accelerate versus the performance gain also depends on the number of vertices in the geometry. If the geometry has a fairly large number of vertices, say more than 10,000, then you can see a performance gain from acceleration even if you are using it only ten times. If the geometry has a small number of vertices, say less than 200, you may not see a performance gain from acceleration unless you are using it more than 100 or 200 times.

How do you accelerate a geometry?

To accelerate a geometry, call GeometryEngine.Instance.AccelerateForRelationalOperations. A copy of the original geometry is returned with the proper data structures already created.

You can accelerate more than one geometry, but the relational operation will benefit only if the first argument passed to the function is the accelerated geometry.

Example

Suppose you have a polygon and a list of points, and you want to see which points intersect the polygon. If there are more than just a few points in the list, then this is a situation where acceleration will be a benefit to the performance of the operation.

//acquire the polygon
var polygon = ....;
//acquire the list of points
var listOfPoints = ...;

//accelerate the polygon
var acceleratedPolygon = 
      GeometryEngine.Instance.AccelerateForRelationalOperations(polygon);

//test "Intersects"
List<int> intersectedPoints = new List<int>();
int numPoints = listOfPoints.Count;

for (int i = 0; i < numPoints; i++)
{
  //note the accelerated polygon is passed in as the first argument
  bool intersects = GeometryEngine.Instance.Intersects(
                                            acceleratedPolygon, listOfPoints[i]);
  if (intersects)
    intersectedPoints.Add(i);
}

SimplifyAsFeature and IsSimpleAsFeature

The output of the GeometryEngine.Instance.SimplifyAsFeature method is a “simple” geometry. Similarly, the method GeometryEngine.Instance.IsSimpleAsFeature determines if the input geometry is “simple”.

What is a simple geometry?

A simple geometry is one that is topologically correct so that it can be stored in a geodatabase. Furthermore, some operations may have undefined behavior if the input geometry is not simple.

  • An empty geometry is simple.
  • A non-empty geometry must have finite x- and y-coordinates to be simple.
  • Assuming all x- and y-coordinates are finite:
    • A MapPoint is simple.
    • A multipoint with no duplicate points is simple.
    • A polyline with no degenerate segments is simple. Given a segment in a polyline, if the HasZ property of a polyline is false or the segment is a curve, then it is degenerate if its 2D-length is less than or equal to 2 * xy-resolution of the spatial reference. If the HasZ property of a polyline is true and the segment is a line segment, then it is degenerate if its 2D-length is less than or equal to 2 * xy-resolution and its 3D-length is less than or equal to the z-tolerance of the spatial reference. For a quick reference, think of a degenerate segment this way:
      • Not 3D and 2D-length <= 2 * xy-resolution
      • Is 3D, is line and 2D-length <= 2 * xy-resolution and 3D-length <= z-tolerance
      • Is 3D, not a line and 2D-length <= 2 * xy-resolution
    • A polygon is simple if it has the following properties:
      • Exterior rings are clockwise and interior rings (holes) are counterclockwise. The order of the rings doesn’t matter.
      • If a ring touches another ring, it does so at a finite number of points.
      • If a ring is self-tangent, it does so at a finite number of points, and there are vertices at those points.
      • All segments have length > the xy-tolerance of the spatial reference.
      • Vertices are either exactly coincident or further than the xy-tolerance from each other.
      • Each ring has at least three non-equal vertices.
      • No empty rings.
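
Here is a minimal sketch of the typical usage (someGeometry is a placeholder for an existing geometry you intend to store):

// Ensure a geometry is simple before storing it in a geodatabase.
if (!GeometryEngine.Instance.IsSimpleAsFeature(someGeometry))
{
  someGeometry = GeometryEngine.Instance.SimplifyAsFeature(someGeometry);
}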

Examples

Let's look at some examples of non-simple vs. simple polygons. The green circles are the vertices of the polygon, and the lavender colored area represents the interior of the polygon.

Non-simple polygons (left to right): self-intersection, self-intersection, dangling segment, dangling segment, overlapping rings.

Simple polygons (left to right): no self-intersection, self-intersection at a vertex, no dangling segment, no dangling segment, no overlapping rings.

Rendering polygons in ArcGIS Pro

ArcGIS Pro uses the even-odd rule for rendering polygons. The even-odd rule determines the interior of a polygon for drawing purposes only. How it works is you draw a ray from a point on the polygon in any direction towards infinity. Then you count the number of paths of the polygon that the ray crosses. If this number is odd, then the point is on the interior of the polygon. If this number is even, then the point is on the exterior of the polygon. Why is this important? It means that the orientation of the rings in a polygon doesn't matter when drawing the polygon, and overlapping rings will be drawn as a hole.

Example
The three parts and the rendered result (left to right): Part 1 (odd => interior ring), Part 2 (even => exterior ring), Part 3 (odd => interior ring), and the rendered polygon in ArcGIS Pro.

Datum transformation

Geographic transformation

The most common form of datum transformation is the geographic transformation. The geographic transformation is a mathematical operation converting coordinates from one geographic coordinate system into a different system based on a different datum/spheroid.

When the spheroids are different, your data is unprojected from the projected coordinate system (PCS) A1 into the geographic coordinate system (GCS) A. The latitude and longitude values are then converted from GCS A to GCS B; this requires a geographic, or datum, transformation. The last step is to project GCS B into PCS B2. All of this is done under the covers when you use the Project operation.

In the following code sample, you're using a UTM projection based on the European Terrestrial Reference System as the input. The output coordinate system is the Web Mercator projection based on the World Geodetic System 1984 (WGS 84). A datum transformation is then used to reproject the European location into a system that uses the WGS 84 reference system.

// Use this UTM spatial reference using the ETRS 1989 (European Terrestrial Reference System) datum.
// This will be the input spatial reference.
SpatialReference etrs_utmZone32N = SpatialReferenceBuilder.CreateSpatialReference(5652);

// Define the Web Mercator spatial reference using the WGS 1984 datum.
// This will be the output spatial reference.
SpatialReference webMercator = SpatialReferenceBuilder.CreateSpatialReference(3857);

// Set up the datum transformation to be used in the projection.
ProjectionTransformation transformation = ProjectionTransformation.Create(etrs_utmZone32N, webMercator);

// Define a location in Germany using the input spatial reference.
var mapPointInGermany = MapPointBuilder.CreateMapPoint(32693081.69, 5364738.25, etrs_utmZone32N);

// Perform the projection of the initial map point.
var projectedPoint = GeometryEngine.Instance.ProjectEx(mapPointInGermany, transformation);

Vertical transformation

Beginning with ArcGIS Pro 1.4, the Project operator will consider the height component of the coordinates in a geometry being projected. The height component is represented as a z-coordinate. The coordinate system of the height is called a vertical coordinate system (VCS), and it is part of the spatial reference object. To learn more about vertical coordinate systems, visit the ArcGIS help page What are vertical coordinate systems?

A vertical coordinate system can be referenced to two different types of surfaces known as datums: gravity-related (geoidal) or spheroidal (ellipsoidal). A gravity-related VCS may set its zero point through a local mean sea level or a benchmark. A spheroidal VCS defines heights that are referenced to a spheroid of a geographic coordinate system (GCS). To learn more about vertical datums, visit the ArcGIS help pages Vertical datums and Geoid.

In order to project the z-coordinates of a geometry, the input and output spatial references must have a vertical coordinate system in addition to a horizontal coordinate system. If the input and output vertical coordinate systems used in the Project operator differ from one another, a vertical datum transformation is used to transform the z-coordinates. A list of all geographic and vertical coordinate systems can be found on the ArcGIS help page Geographic Coordinate Systems, and a list of all geographic and vertical transformations can be found on the Geographic Transformations page. Another helpful list is that of all projected coordinate systems which can be found on the ArcGIS help page Projected Coordinate Systems.

Some transformations require grid files that are not installed by default and require a separate installation. Download the "ArcGIS Pro Coordinate Systems Data" setup from my.esri.com and choose which grid files to install.

Modifications to the ArcGIS.Core.Geometry namespace

To project the height or z-coordinates of a geometry, both the input and output spatial references must have vertical coordinate systems, and you must call the GeometryEngine.Instance.ProjectEx method and pass a ProjectionTransformation object.

Two new classes have been added to the ArcGIS.Core.Geometry namespace, HVDatumTransformation and CompositeHVDatumTransformation. A CompositeHVDatumTransformation contains one or more HVDatumTransformation objects. Either a HVDatumTransformation or a CompositeHVDatumTransformation can be used to create a ProjectionTransformation object which is then passed to the GeometryEngine.Instance.ProjectEx method.

If the required transformation is unknown, create the ProjectionTransformation from the input and output spatial references and, possibly, an extent of interest. A default transformation is selected based on the spatial references and extent of interest.

Example 1

Project a z-enabled polyline, i.e. its HasZ property is true, and the vertical transformation is known.
Suppose there is a polyline in the Pacific Ocean using the horizontal coordinate system WGS 84 and the vertical coordinate system EGM 84. You want to project the polyline to the NAD 83 horizontal coordinate system and NAD 83 PA 11 vertical coordinate system.

Example 1 Polyline to project

The transformations to use are known up front: use the inverse of WGS_1984_To_EGM_1984_Geoid_1 and the forward transformation WGS_1984_(ITRF08)_To_NAD_1983_PA11.

Here is a code sample to perform the projection.

// Create input spatial reference with horizontal GCS_WGS_1984, vertical EGM84_Geoid
SpatialReference inSR = SpatialReferenceBuilder.CreateSpatialReference(4326, 5798);

// Create the polyline to project
List<Coordinate3D> coordinates = new List<Coordinate3D>()
{
  new Coordinate3D(-160.608606, 21.705238, 3000),
  new Coordinate3D(-159.426811, 21.075439, 5243),
  new Coordinate3D(-156.151956, 20.765497, 10023),
  new Coordinate3D(-155.511224, 19.526748, 13803)
};

Polyline polyline = PolylineBuilder.CreatePolyline(coordinates, inSR);

// Create output spatial reference with horizontal GCS_NAD_1983_PA11, vertical NAD_1983_PA11 
SpatialReference outSR = SpatialReferenceBuilder.CreateSpatialReference(6322, 115762);

// Create a composite horizontal/vertical transformation
List<HVDatumTransformation> hvTransforms = new List<HVDatumTransformation>()
{
  HVDatumTransformation.Create(110008, false),
  HVDatumTransformation.Create(108365, true)
};

CompositeHVDatumTransformation compositeHVTransform = CompositeHVDatumTransformation.Create(hvTransforms);

// Create the projection transformation from the composite horizontal/vertical transformation
ProjectionTransformation projectionTransformation = ProjectionTransformation.CreateEx(inSR, outSR, compositeHVTransform);

// Now project the polyline. Call ProjectEx to transform the z-coordinates as well as the xy-coordinates.
Polyline projectedPolyline = GeometryEngine.Instance.ProjectEx(polyline, projectionTransformation) as Polyline;

// Print the coordinates of the projected polyline
Console.WriteLine("Input polyline: ");
ReadOnlyPointCollection points = polyline.Points;
foreach (MapPoint p in points)
  Console.WriteLine("(" + p.X + ", " + p.Y + ", " + p.Z + ")");

Console.WriteLine("Output polyline: ");
points = projectedPolyline.Points;
foreach (MapPoint p in points)
  Console.WriteLine("(" + p.X + ", " + p.Y + ", " + p.Z + ")");

Output from the code sample is the following:

Input polyline: 
(-160.608606, 21.705238, 3000)
(-159.426811, 21.075439, 5243)
(-156.151956, 20.765497, 10023)
(-155.511224, 19.526748, 13803)

Output polyline: 
(-160.608588836171, 21.7052329085819, 3004.24183148582)
(-159.426793801716, 21.0754339009134, 5247.3481890005)
(-156.151938777436, 20.7654919748718, 10023.6100020141)
(-155.511206748496, 19.5267429579366, 13804.0194606541)

Example 2

Project only the z-coordinate of a point and the vertical transformation is unknown.
In this case, let the system choose the best vertical transformation based on the input and output spatial references.

Consider the point (50, 41, 10), that is, x = 50, y = 41, z = 10. The point is in the horizontal coordinate system WGS 84 and vertical coordinate system Baltic (depth). The point is located in the Caspian Sea.

Example 2 Point to project

Project only the height of the point, so the output horizontal coordinate system will be the same as the input, WGS 84, and the output vertical coordinate system will be Caspian (height).

Here is a code sample to perform the projection.

// Create the input spatial reference with horizontal GCS_WGS_1984, vertical Baltic_depth
SpatialReference inSR = SpatialReferenceBuilder.CreateSpatialReference(4326, 5612);

// Create the point to project
MapPoint point = MapPointBuilder.CreateMapPoint(50, 41, 10, inSR);

// Create the output spatial reference with horizontal GCS_WGS_1984, vertical Caspian_height
SpatialReference outSR = SpatialReferenceBuilder.CreateSpatialReference(4326, 5611);

// Create the projection transformation from the spatial references.
ProjectionTransformation projectionTransformation = ProjectionTransformation.CreateWithVertical(inSR, outSR);

// Now project the point. Call ProjectEx to transform the z-coordinate.
MapPoint projectedPoint = GeometryEngine.Instance.ProjectEx(point, projectionTransformation) as MapPoint;

// Print the coordinates of the points
Console.WriteLine("Input point: (" + point.X + ", " + point.Y + ", " + point.Z + ")");
Console.WriteLine("Output point: (" + projectedPoint.X + ", " + projectedPoint.Y + ", " + projectedPoint.Z + ")");
Output from the code sample is the following:
Input point: (50, 41, 10)
Output point: (50, 41, 18)

Summary

  • All geometry instances are read-only (immutable), so they cannot be changed once created.
  • Geometry builder classes allow you to grow or modify a geometry.
  • Polygon and Polyline are multipart geometries, and each part contains one or more segments and two or more points.
  • A segment collection can be any mix of line segments (two-point lines), elliptic arcs, or cubic Bézier curves.
  • Polygon parts are always closed, so the end point of the very last segment of each part or ring coincides with the start point of the first segment of that part or ring.
  • GeometryEngine.Instance contains convenience methods for performing spatial (relational and topological) operations.
  • To project the height or z-coordinates of a geometry, both the input and output spatial references must have vertical coordinate systems, and GeometryEngine.Instance.ProjectEx must be called with an argument of type ProjectionTransformation. If the transformation is unknown, create the ProjectionTransformation from the input and output spatial references and, possibly, the extent of interest. A transformation will be picked based on the spatial references and extent of interest.