ProConcepts Geodatabase - Esri/arcgis-pro-sdk GitHub Wiki
Functionality that uses the fine-grained Geodatabase API. Geodatabase API functionality is found in ArcGIS.Core.dll. The Geodatabase API is commonly used in conjunction with map exploration, map authoring, and editing.
Language: C#
Subject: Geodatabase
Contributor: ArcGIS Pro SDK Team <[email protected]>
Organization: Esri, http://www.esri.com
Date: 10/06/2024
ArcGIS Pro: 3.4
Visual Studio: 2022
- Architecture
- Datastore
- Working with feature data
- Editing datastores
- Miscellaneous Topics
The ArcGIS.Core.Data API is primarily focused on DML (Data Manipulation Language). Limited DDL (Data Definition Language) capabilities are described in the DDL ProConcepts document.
Almost all of the methods in the ArcGIS.Core.Data API should be called on the Main CIM Thread (MCT), as stated in the API reference. These method calls should be wrapped inside a QueuedTask.Run call. Failure to do so will result in a ConstructedOnWrongThreadException being thrown.
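For example, opening a dataset can be wrapped as follows. This is a minimal sketch: the geodatabase path and table name are hypothetical, and QueuedTask comes from ArcGIS.Desktop.Framework.Threading.Tasks.

```csharp
await QueuedTask.Run(() =>
{
  // All Core.Data calls below execute on the MCT.
  using (Geodatabase geodatabase = new Geodatabase(
    new FileGeodatabaseConnectionPath(new Uri(@"C:\Data\Sample.gdb")))) // hypothetical path
  using (Table table = geodatabase.OpenDataset<Table>("Parcels"))       // hypothetical table name
  {
    // Work with the table here...
  }
});
```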
In the majority of cases you can safely rely on the built-in garbage collection provided by .NET to handle memory management. The garbage collector, when it runs, reclaims the memory from “dead” .NET objects and compacts the memory used by “live” objects to reduce the size of the managed heap. However, because the Pro SDK Core.Data API uses unmanaged resources (that is, resources not managed by garbage collection), these resources must be explicitly released by the application. Unmanaged resources include file locks, database connections, and network connections, amongst others.
There are two patterns in .NET for handling unmanaged resources, an implicit control pattern that uses Object.Finalize, and an explicit control pattern that uses IDisposable.Dispose. The ArcGIS.Core.Data API provides both. The preferred pattern is to explicitly release the underlying unmanaged resources by calling Dispose once you are done using them. Dispose frees up the unmanaged resources releasing any underlying file locks or active database connections (You can also use a “using” construct that will call Dispose for you - there are many examples of “using” in the Pro snippets and samples).
The implicit control pattern that uses Finalize should be considered a fail-safe to allow unmanaged resources to still be freed even if the developer forgets to call Dispose or chooses not to. However, the unmanaged resources will not be freed until the object is garbage collected (at some future point in time) and so any file locks or database connections held by the object will remain in use until it is finalized. Depending on the implicit release of unmanaged resources can lead to unexpected behavior, such as resources still being locked, or connections consumed even though the object that acquired them has since gone out of scope.
Although the usage of Dispose may appear obvious, developers should be careful about code that unintentionally acquires Core.Data instances. For example, consider code that chains together sequences of .NET calls to navigate across the geodatabase hierarchy. The following two statements that appear to be convenient and compact are actually problematic:
Geodatabase gdb = featureLayer.GetFeatureClass().GetDatastore() as Geodatabase;
and
var id = row.GetTable().GetID();
In the first case, a feature class instance is acquired and in the second case, a table instance is acquired and neither of them is disposed. The developer is probably unaware that they indirectly instantiated a feature class and/or table instance that will hold on to their unmanaged resources until such a time as they are garbage collected. Instead, the correct way to code these statements is to explicitly acquire the Core.Data instances as variables and Dispose of them after their use. A using statement is a convenient way of accomplishing this:
using (FeatureClass featureClass = featureLayer.GetFeatureClass())
using (Geodatabase gdb = featureClass.GetDatastore() as Geodatabase)
{
// etc.
}
using (Table table = row.GetTable())
{
var id = table.GetID();
// etc.
}
Another benefit to the using statement is that it will ensure Dispose is called even if your code, executing within the scope of the using, throws an exception.
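For illustration, the using statement over a table is roughly equivalent to the following try/finally pattern:

```csharp
Table table = row.GetTable();
try
{
  var id = table.GetID();
  // etc.
}
finally
{
  // Runs even if the code above throws, releasing the unmanaged resources.
  table.Dispose();
}
```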
Another use case to consider is when an API method returns a list of Core.Data objects. For example, consider Table.GetControllerDatasets(), which returns an IReadOnlyList&lt;Dataset&gt;. In this case, a using statement is not sufficient, and each of the list items must be individually disposed.
IReadOnlyList<Dataset> controllerDatasets = table.GetControllerDatasets();
// Do something with the list
foreach (Dataset dataset in controllerDatasets)
{
dataset.Dispose();
}
Once Dispose has been called on an instance, calling any of its methods or properties that access the unmanaged resource(s) (which is just about all of them) will result in an ArcGIS.Core.ObjectDisconnectedException.
Most of the classes in the geodatabase API reside in the ArcGIS.Core.Data namespace. There are some cases where the objects in ArcGIS.Core.Data need to integrate with ArcGIS Pro at a higher level in the architectural stack. In these cases, C# extension methods are provided. To use these extension methods:
- Add a reference to ArcGIS.Desktop.Extensions to your solution
- Add using ArcGIS.Core.Data.Extensions; to the top of your source files
These steps allow these extension methods to appear as if they were regular methods on geodatabase classes.
These extension methods are intended for use in ArcGIS Pro Add-ins, and cannot be used with CoreHost applications.
Within Pro, geodatabase content can be browsed or searched on disk either via folder connections, file paths, or the browse dialog. In all cases, browsing or searching for geodatabase content results in ArcGIS.Desktop.Core.Items being retrieved. To access the underlying geodatabase dataset from an Item, please refer to ProConcepts Content and Items.
A datastore is a container of spatial and non-spatial datasets, such as feature classes, raster datasets, and tables.
In the ArcGIS.Core.Data API, Datastore is an abstract class that represents any object that serves as a container for datasets. For example, Geodatabase inherits from Datastore and supports the file geodatabase, mobile geodatabase, enterprise geodatabase, and feature service geodatabase Datastore types. It is important to note that not all Datastore types support all types of attribute fields or storage type properties. The GetDatastoreProperties() method returns the DatastoreProperties class that specifies which properties are supported by which Datastore type. Similarly, some Datastore types may not support datastore properties at all. You can check whether they are supported by calling Datastore.AreDatastorePropertiesSupported().
// Check if a datastore supports datastore properties
bool areDatastorePropertiesSupported = geodatabase.AreDatastorePropertiesSupported();
if (areDatastorePropertiesSupported)
{
DatastoreProperties datastoreProperties = geodatabase.GetDatastoreProperties();
// Supports 64-bit integer field
bool supportsBigInteger = datastoreProperties.SupportsBigInteger;
// Supports pagination
bool supportsQueryPagination = datastoreProperties.SupportsQueryPagination;
// Supports datastore edit
bool canEdit = datastoreProperties.CanEdit;
// Supports 64-bit Object ID
bool supportsBigObjectId = datastoreProperties.SupportsBigObjectID;
// Supports DateOnly field
bool supportsDateOnly = datastoreProperties.SupportsDateOnly;
// Supports TimeOnly field
bool supportsTimeOnly = datastoreProperties.SupportsTimeOnly;
// Supports TimestampOffset field
bool supportsTimestampOffset = datastoreProperties.SupportsTimestampOffset;
}
Note: Datastore is an abstraction that is conceptually equivalent to Workspace in the ArcObjects API.
Conceptually, an ArcGIS geodatabase is a collection of geographic datasets of various types held in a common file system folder, administered by a REST service, or stored in a multiuser relational DBMS (such as Oracle, Microsoft SQL Server, PostgreSQL, SAP HANA, or IBM DB2). Geodatabases come in many sizes, have varying numbers of users, and can scale from small, single-user databases built on files up to larger workgroup, department, and enterprise geodatabases accessed by many users.
In the ArcGIS.Core.Data API, the Geodatabase class represents the native data structure for ArcGIS and is the primary data format used for editing and data management. While ArcGIS works with geographic information in numerous geographic information system (GIS) file formats, it is designed to work with and leverage the capabilities of the geodatabase.
File geodatabases, mobile geodatabases, enterprise geodatabases and feature service geodatabases can be opened using the Geodatabase class, which exposes an overloaded list of constructors to support different types of geodatabases.
To open a file geodatabase, an instance of FileGeodatabaseConnectionPath should be passed to the Geodatabase constructor as follows:
Geodatabase fileGeodatabase = new Geodatabase(new FileGeodatabaseConnectionPath(new Uri(@"path\to\the\file\geodatabase")));
To open a mobile geodatabase, an instance of MobileGeodatabaseConnectionPath should be passed to the Geodatabase constructor as follows:
Geodatabase mobileGeodatabase = new Geodatabase(new MobileGeodatabaseConnectionPath(new Uri(@"path\to\the\mobile\geodatabase")));
There are two ways to open an enterprise geodatabase, via an SDE connection file or a set of connection properties:
Geodatabase enterpriseGeodatabaseViaConnectionFile = new Geodatabase(new DatabaseConnectionFile(new Uri(@"path\to\the\sde\file")));
DatabaseConnectionProperties connectionProperties = new DatabaseConnectionProperties(EnterpriseDatabaseType.SQLServer)
{
AuthenticationMode = AuthenticationMode.DBMS,
Instance = "machineName\\instanceName",
Database = "databaseName",
User = "username",
Password = "Not1234",
Version = "dbo.DEFAULT"
};
Geodatabase enterpriseGeodatabaseViaConnectionProperties = new Geodatabase(connectionProperties);
As an aside, if you have a connection file and wish to extract the connection properties, this can be accomplished with the static method DatabaseClient.GetDatabaseConnectionProperties(). This can be useful if you wish to replace the User and Password properties inside a connection file. You cannot open a connection to an enterprise geodatabase with a connection file that is missing user name and password information.
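As a sketch of that workflow (the .sde path and credentials are placeholders), extracting the properties and supplying credentials before opening might look like this:

```csharp
DatabaseConnectionFile connectionFile =
  new DatabaseConnectionFile(new Uri(@"path\to\the\sde\file")); // hypothetical path

// Extract the connection properties stored in the file.
DatabaseConnectionProperties connectionProperties =
  DatabaseClient.GetDatabaseConnectionProperties(connectionFile);

// Supply the credentials that the file does not contain (placeholders).
connectionProperties.User = "username";
connectionProperties.Password = "password";

using (Geodatabase geodatabase = new Geodatabase(connectionProperties))
{
  // Work with the geodatabase...
}
```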
Another way of obtaining the Geodatabase object is through ArcGIS.Core.Data.Dataset.GetDatastore(). This returns the ArcGIS.Core.Data.Datastore reference. If the underlying datastore is a geodatabase, you can cast it to a Geodatabase to access the Geodatabase interface.
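A minimal sketch, assuming featureLayer is an existing FeatureLayer and following the Dispose guidance above:

```csharp
using (Table table = featureLayer.GetTable())
using (Datastore datastore = table.GetDatastore())
{
  if (datastore is Geodatabase geodatabase)
  {
    // Geodatabase-specific functionality is available here.
  }
}
```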
QueryDef is a construct that allows for querying the Geodatabase Datastore to obtain a cursor. QueryDefs can be used to generate a cursor from a single table based on a query with a where clause, prefix clause, postfix clause, subfields, and the table on which the query should be executed. They can also be used to create a join between two or more tables within the geodatabase.
The QueryDef class provides the properties to specify the query details. The Evaluate method on the Geodatabase class takes a QueryDef object and returns the RowCursor providing access to the rows that satisfy the query. The Evaluate method also has a Boolean parameter to specify whether recycling should be used while returning the successive rows from the cursor.
QueryDef queryDef = new QueryDef
{
Tables = "Highways",
WhereClause = "TYPE = 'Paved Undivided'",
};
using (RowCursor rowCursor = geodatabase.Evaluate(queryDef, false))
{
while (rowCursor.MoveNext())
{
using (Row row = rowCursor.Current)
{
Feature feature = row as Feature;
Geometry shape = feature.GetShape();
string type = Convert.ToString(row["TYPE"]); // Will be "Paved Undivided" for each row.
try
{
Table table = row.GetTable(); // Will always throw exception because rows retrieved from QueryDef do not have a parent table.
}
catch (NotSupportedException exception)
{
// Handle not supported exception.
}
}
}
}
The following are some things to consider when performing a QueryDef evaluation:
- There cannot be more than one column of the same name in the subfields in the QueryDef for enterprise geodatabases.
- Field Aliases are not supported in the Subfields property on the QueryDef.
- When there is a Shape field specified in the subfields, the objects returned from the RowCursor.Current are Feature objects.
- When there is a join involved, only the Shape field of the left side table in the join is supported for QueryDef evaluation.
Note: Since geometries are immutable, they are managed by the .NET Garbage Collector. They are not recyclable in the same sense as Row, which implements IDisposable (recycling expects that the processing for the object is completed before the memory is reclaimed, and this becomes impossible in a multithreaded environment since it can be used concurrently). In future releases, support may be added for recycling geometries.
Query tables are virtual tables that represent queries involving one or more tables from the same geodatabase. When the query table is created, it is returned as a read-only table or feature class depending on whether or not a shape column is involved. Some of the intended primary uses for the query table are adding the query table to the map as a layer, using it as any other read-only table or feature class to perform a search, and using the table to work with geoprocessing analysis tools. Selections against QueryTables are not supported on enterprise geodatabases. When added to the map, query tables are persisted within the project when saved. If changes are made to the tables involved in the query, they are reflected in the query table since it is a virtual table.
To create a query table, the first step is to create a QueryDef. The QueryDef is then used to create a QueryTableDescription. Other properties that can be set include the name of the query table, and a comma-delimited list of key fields to manufacture the ObjectID. When the MakeCopy boolean property is set to true, and the key fields are not set, a local client-side copy of the data is used to generate the query table. Finally, the QueryTableDescription is used to obtain an ArcGIS.Core.Data.Table by invoking Geodatabase.OpenQueryTable(QueryTableDescription).
using (Geodatabase geodatabase = new Geodatabase(new FileGeodatabaseConnectionPath(new Uri("path\\to\\gdb"))))
{
QueryDef queryDef = new QueryDef
{
Tables = "CommunityAddress JOIN MunicipalBoundary on CommunityAddress.Municipality = MunicipalBoundary.Name",
SubFields = "CommunityAddress.OBJECTID, CommunityAddress.Shape, CommunityAddress.SITEADDID, CommunityAddress.ADDRNUM, CommunityAddress.FULLNAME, CommunityAddress.FULLADDR, CommunityAddress.MUNICIPALITY, MunicipalBoundary.Name, MunicipalBoundary.MUNITYP, MunicipalBoundary.LOCALFIPS",
};
QueryTableDescription queryTableDescription = new QueryTableDescription(queryDef)
{
Name = "CommunityAddrJoinMunicipalBoundary",
PrimaryKeys = geodatabase.GetSQLSyntax().QualifyColumnName("CommunityAddress", "OBJECTID")
};
using (Table queryTable = geodatabase.OpenQueryTable(queryTableDescription))
{
// Use the query table...
}
}
The Database Datastore represents an enterprise database or a SQLite database. It provides access to database capabilities such as opening tables and feature classes, obtaining definitions, listing tables, and so on, but it does not support geodatabase functionality such as versioning, archiving, validation, relationship classes, and so on. While an enterprise geodatabase could be opened through the Database Datastore, this approach is not recommended because it would only provide access to the capabilities associated with the database.
An instance of Database Datastore can be opened using one of the three constructors that take the following type of Connector -- DatabaseConnectionProperties, DatabaseConnectionFile or SQLiteConnectionPath.
DatabaseConnectionProperties specifies the enterprise database platform (specified by the EnterpriseDatabaseType enum), and the connection details specific to that platform can be specified using the properties on DatabaseConnectionProperties.
// SQL Server
DatabaseConnectionProperties databaseConnectionProperties = new DatabaseConnectionProperties(EnterpriseDatabaseType.SQLServer)
{
AuthenticationMode = AuthenticationMode.DBMS,
Instance = "machine\\instance",
Database = "database",
User = "username",
Password = "password"
};
Database database = new Database(databaseConnectionProperties);
DatabaseConnectionFile accepts a Uri to the file location of a .sde connection file.
Database database = new Database(new DatabaseConnectionFile(new Uri("path\\to\\.sde\\file")));
SQLiteConnectionPath accepts a Uri to the SQLite database file location.
Database database = new Database(new SQLiteConnectionPath(new Uri("path\\to\\sqlite\\file")));
A query layer allows the results of a SQL query to be accessed as a read-only table or feature class. The following steps are used to create a query layer:
- Create a QueryDescription using Database.GetQueryDescription. QueryDescription is described in more detail later in this section.
- Modify the QueryDescription (may not be required, depending on the query) with details such as object ID columns, shape type, SR ID, and spatial reference.
- Pass the QueryDescription to Database.OpenTable to get a table or feature class representing a query layer.
One requirement of a query layer is that it must have a unique ID. This can be a natural unique ID (an existing field used as the Object ID in ArcGIS) or a mapped unique ID (a virtual field created by mapping the values of one or more existing fields to an integer). The results of a query must have one of the following:
- A non-nullable integer field (a natural unique ID)
- A combination of one or more integer, string, or GUID fields that result in a unique tuple (a mapped unique ID)
The following table shows a query result in which a string field and an integer field are not individually unique but form unique tuples (and contain no null values), so together they can be used for ID mapping:
County | State | ESRI_OID |
---|---|---|
Adams | Colorado | 0 |
Adams | Ohio | 1 |
Addison | Vermont | 2 |
The following requirements and restrictions apply:
- Uniqueness—If a single field is used (natural or mapped), all values must be unique. If multiple fields are used, the tuples in each row must be unique. If non-unique values are used for mapping, no error will be raised, but unexpected behavior can occur and analysis results may be incorrect.
- No null values—If a mapped ID is used, none of the fields used to create the ID can contain a null value. If a null value is used, an error will occur during mapping.
- Negative natural values—If a natural unique ID is used, the field should not contain negative values. Rows with a natural unique ID less than 0 are ignored.
- Geometry fields—A geometry field is not required; a maximum of one geometry field can be in the result set.
One special case to keep in mind is multiple geometry types in the same shape field. This is generally not valid in ArcGIS Pro, but a query layer can be created by specifying that only rows of one type should be used. For example, if there are both point and polygon geometries in the shape column, the QueryDescription.SetShapeType(GeometryType) method can be used to specify which geometry type should be considered for this query layer.
A QueryDescription is an intermediate object used in the creation of a table or feature class representing a query layer. As explained previously, a QueryDescription is obtained from the Database.GetQueryDescription method. This method auto-detects the properties of the query description where possible. For example, if a non-nullable integer field is found in the query result, QueryDescription.IsObjectIDMappedColumnRequired() will be false and QueryDescription.GetObjectIDFields() returns the integer field selected. When ID mapping is required and an appropriate mapping field cannot be determined automatically, QueryDescription.IsObjectIDMappedColumnRequired() will be true and QueryDescription.SetObjectIDFields() must be called to specify the mapped unique ID before the QueryDescription can be used to create a query class.
The GetQueryDescription method has the following three overloads:
- GetQueryDescription(string tableName)—Creates a QueryDescription object associated with the single (i.e., standalone) table specified by the tableName argument. In other words, the resulting object will cause a query layer to be created with all the fields of a database table with default inferences for parameters such as ShapeType, SpatialReference, and so on.
- GetQueryDescription(ArcGIS.Core.Data.Table)—Given a reference to an ArcGIS.Core.Data.Table, this method can be used to discover the parameters of the corresponding query layer represented by the Table reference. This can also be useful to customize the query description before creating another table or feature class representing another query layer.
- GetQueryDescription(string queryStatement, string queryLayerName)—When a query layer with custom fields or joins between multiple tables is needed, the first parameter to this method is the string that specifies the required query and the second parameter is the name of the resulting query layer.
The QueryDescription object defines several getters that can be used to determine what the query result will look like; for example, the QueryDescription.GetFields() method returns a read-only list for the query result. QueryDescription also defines the following getters/setters that can be used to define how the query layer will be created:
- GetSpatialReference/SetSpatialReference—The spatial reference that will be applied to the feature class when it is created; to define a spatial reference, the SetSpatialReference method should be used.
- GetShapeType/SetShapeType—The type of geometry found in the shape field. If the shape field contains multiple geometry types, setting this property specifies which subset of rows will be retrieved from the query class (only rows matching a single geometry type can be used).
- GetObjectIDFields/SetObjectIDFields—If a natural ID column is found, GetObjectIDFields will return its name. If mapping is required, the SetObjectIDFields method should be used to specify the field(s) to use.
- GetSRID/SetSRID—The SRID corresponding to the spatial reference that will be applied to the feature class when it is created; to specify an SRID, SetSRID should be used.
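The steps above can be sketched as follows. The SQL statement, layer name, and field names are hypothetical; the pattern follows the GetQueryDescription/OpenTable flow described in this section.

```csharp
using (Database database = new Database(
  new DatabaseConnectionFile(new Uri(@"path\to\connection.sde")))) // hypothetical path
{
  // Build a QueryDescription from a custom SQL statement (hypothetical query).
  QueryDescription queryDescription = database.GetQueryDescription(
    "SELECT County, State, Population FROM Census", "CensusQueryLayer");

  // If no natural unique ID was detected, supply a mapped unique ID
  // (hypothetical field names forming a unique tuple).
  if (queryDescription.IsObjectIDMappedColumnRequired())
  {
    queryDescription.SetObjectIDFields("County,State");
  }

  // Open a read-only table (or feature class) representing the query layer.
  using (Table queryLayerTable = database.OpenTable(queryDescription))
  {
    // Search, add to a map, and so on...
  }
}
```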
The Geodatabase class can represent a feature service (i.e., web geodatabase) and provide access to the datasets contained in that service. The Geodatabase class has an overloaded constructor which takes a ServiceConnectionProperties object, which specifies the URL to the feature service and, optionally, the credentials to connect to the feature service. Whether the credentials are required depends on the feature service deployment type.
The three kinds of environments in which feature services can be hosted are as follows:
- ArcGIS Online
- ArcGIS Server sites federated with an ArcGIS Enterprise portal
- Stand-alone ArcGIS Server sites (not federated with a portal)
For feature services hosted on ArcGIS Online, the feature service connection uses the ArcGIS Online credentials with which the user signed in on ArcGIS Pro. In the case of feature services on ArcGIS Server sites federated with an Enterprise portal, the credentials must be specified by the user when adding the portal connection in ArcGIS Pro. For additional details, see Add a portal connection.
ServiceConnectionProperties serviceConnectionProperties = new ServiceConnectionProperties(new Uri("http://federated-server.com/server/rest/services/Hosted/CampusEditing/FeatureServer"));
Geodatabase geodatabase = new Geodatabase(serviceConnectionProperties);
For feature services hosted on a stand-alone ArcGIS server site (which is not federated with an Enterprise portal), the credentials must be passed in by assigning them to the User and Password properties on the ServiceConnectionProperties object.
ServiceConnectionProperties serviceConnectionProperties = new ServiceConnectionProperties(new Uri("https://non-federated-server.com/arcgis/rest/services/UnitedStatesWest/FeatureServer"))
{
User = "username",
Password = "password",
};
Geodatabase geodatabase = new Geodatabase(serviceConnectionProperties);
A shapefile is a simple, non-topological format for storing the geometric location and attribute information of geographic features. Geographic features in a shapefile can be represented by points, lines, or polygons (areas). The datastore containing shapefiles can also contain dBASE tables, which can store additional attributes that can be joined to a shapefile's features.
The only dataset types supported by shapefile datastores via the FileSystemDatastore are tables and feature classes. For example, neither relationship classes nor feature datasets can be opened in shapefile datastores, and, by extension, neither can datasets that require feature datasets (such as topologies and utility networks).
The following include some limitations you should take into consideration:
- Alias and model names are not supported (this includes datasets as well as fields).
- Length and area fields are not maintained for shapefiles.
- Annotation feature classes and dimension feature classes are not supported.
- QueryDefs are not supported.
- Domains and rules are not supported.
- Enabling z-values on a Shapefile also requires enabling m-values.
- DateTime field values in shapefiles are not true DateTime values; rather, they are only Date values. If a DateTime value is stored in a shapefile DateTime field, the date portion of the value will be maintained correctly, but the time value will not. To model DateTime values in shapefiles, create one or more additional fields to store the time portion of the value. For example, one field could be used to store the number of seconds since midnight, or three fields could be used to store hours, minutes, and seconds separately.
- Null values are not supported by shapefiles. One approach to working around this is to represent nulls using values that would not typically occur in the data. For example, in a shapefile containing cities, a value of –9999 could be used to represent a null (unknown) population.
- Index names are not maintained on shapefiles. They will have the same name as the fields on which they were created.
- The Validate method on the Table/FeatureClass is not supported.
- The Differences method on the Table/FeatureClass is not supported.
A shapefile can be opened using the FileSystemDatastore constructor, which takes a FileSystemConnectionPath whose constructor is specified as FileSystemDatastoreType.Shapefile:
FileSystemConnectionPath connectionPath = new FileSystemConnectionPath(new Uri("path\\to\\shapefiles\\directory"), FileSystemDatastoreType.Shapefile);
FileSystemDatastore shapefile = new FileSystemDatastore(connectionPath);
The OpenDataset&lt;T&gt; generic method can be used to open a Table or FeatureClass. T must be Table or FeatureClass; if T is any other type, an InvalidOperationException will be thrown. The OpenDataset method can be called on a shapefile with or without the file extension. If the OpenDataset method is called on a .shp file, the object returned is a FeatureClass and can be cast to a FeatureClass reference. If the OpenDataset method is called on a .dbf file (in the absence of a corresponding .shp file), the object returned is a Table object.
To obtain metadata about the shapefile FeatureClass or Table, the GetDefinition&lt;T&gt; generic method can be used to obtain the Definition, which provides details such as the fields, indexes, feature class name, object ID field name, and so on. T must be TableDefinition or FeatureClassDefinition; if T is any other type, an InvalidOperationException will be thrown. The methods on the Definition class obtained from a shapefile feature class that are not supported are as follows:
- GetAliasName
- GetModelName
- GetCreatedAtField
- GetCreatorField
- GetDefaultSubtypeCode
- GetSubtypeField
- GetSubtypes
- GetEditedAtField
- GetEditorField
- GetGlobalIDField
- HasGlobalID
- IsEditorTrackingEnabled
- IsTimeInUTC
- GetAreaField
- GetLengthField
FileSystemConnectionPath connectionPath = new FileSystemConnectionPath(new Uri("path\\to\\shapefiles\\directory"), FileSystemDatastoreType.Shapefile);
FileSystemDatastore shapefile = new FileSystemDatastore(connectionPath);
FeatureClass footPrints = shapefile.OpenDataset<FeatureClass>("BuildingFootprints");
FeatureClass easements = shapefile.OpenDataset<FeatureClass>("Easements.shp"); // The name can be provided with or without the extension.
TableDefinition easementsDefinition = shapefile.GetDefinition<TableDefinition>("Easements");
Shapefiles always contain a Feature ID (FID) field that can generally be used in the same way as an ObjectID. For example, Selection objects created from feature classes obtained from shapefiles use FIDs in lieu of ObjectIDs. As with ObjectIDs, FIDs cannot be edited, and the field type of an FID field is FieldType.OID.
The major difference between ObjectIDs and FIDs is that whereas ObjectIDs are permanent identifiers of a feature or row in a geodatabase, FIDs simply represent the current (zero-based) position of a feature or a row in a shapefile. This means that a feature can have different FIDs from one session to the next, even if it has not been modified in any way.
Consider a shapefile with an integer field and three point features as shown in the following table:
Feature ID | Integer field |
---|---|
0 | 0 |
1 | 1 |
2 | 2 |
If the feature with an FID of 1 is deleted after edits are stored, the shapefile's attribute table will be changed as shown in the following table:
Feature ID | Integer field |
---|---|
0 | 0 |
1 | 2 |
When a workflow requires that features have a static identifier, use an integer field other than the FID field.
Shapefiles are restricted in the types of fields that can be used relative to a geodatabase. For example, the following field types are not supported:
- Globally unique identifier (GUID)
- GlobalID
- Binary large object (BLOB)
- Raster
Additionally, field names are restricted to 10 characters, and, as previously mentioned, alias and model names are not supported.
The Pro SDK also supports CAD file workspaces, created from a directory containing CAD files. CAD datasets are read-only, and are based on a single DXF, DWG or DGN file. CAD files may be related to ancillary files for spatial references (PRJ) or coordinate transformations (WLD).
A CAD file workspace can be opened using the FileSystemDatastore constructor, which takes a FileSystemConnectionPath whose constructor is specified as FileSystemDatastoreType.Cad:
FileSystemConnectionPath connectionPath = new FileSystemConnectionPath(new Uri("path\\to\\CAD\\directory"), FileSystemDatastoreType.Cad);
FileSystemDatastore cadDatastore = new FileSystemDatastore(connectionPath);
The OpenDataset&lt;T&gt; generic method can be used to open a particular CAD file as a CadDataset. T must be CadDataset; if T is any other type, an InvalidOperationException will be thrown.
CAD datasets are interpreted by default into POINTS, POLYLINES, POLYGONS, MULTIPATCH, and ANNOTATION feature classes based on their geometric type. Feature attributes are assembled from select CAD entity properties into memory as a virtual table. The source geometry may be included in more than one of the default feature classes. For example, all of the features in the POLYGONS feature class are also represented in the POLYLINES feature class. In the case of AutoCAD files, the DWG file might contain explicit ArcGIS feature classes that were created with ArcGIS for AutoCAD or from data exported from ArcGIS Desktop. In this case feature attributes are encoded from the original GIS feature attribute data, or GIS data added from ArcGIS for AutoCAD.
Metadata about the CAD dataset, including the list of available feature classes, can be obtained by calling GetDefinition, which returns a CadDatasetDefinition object. The individual feature classes within a CAD file can be opened with the OpenDataset method on the CadDataset.
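The pattern above can be sketched as follows. The CAD file name "Parcels.dwg" and the feature class name "Polyline" are hypothetical placeholders, and the code assumes it runs inside QueuedTask.Run:

```csharp
// Open a CAD file from the datastore, inspect its definition, and open one
// of its interpreted feature classes. Names are placeholders for this sketch.
using (CadDataset cadDataset = cadDatastore.OpenDataset<CadDataset>("Parcels.dwg"))
{
    CadDatasetDefinition cadDefinition = cadDataset.GetDefinition();
    using (FeatureClass cadPolylines = cadDataset.OpenDataset<FeatureClass>("Polyline"))
    {
        // Read-only access to the CAD-derived features.
    }
}
```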
The Dataset abstract class represents any entity in ArcGIS Pro that is a meaningful and coherent collection of data. For example, ArcGIS.Core.Data.Table inherits from Dataset and represents a collection of data that conforms to a specified schema (for example, each row has the same fields).
The classes that represent Dataset entities and inherit from Dataset are described below.
Tables are a type of dataset containing zero or more rows with one or more columns (or fields). All rows in a table have the same columns and a single value (or no value) associated with each column.
In the geodatabase, attributes are managed in tables based on the following simple, yet essential, relational data concepts:
- Tables contain rows.
- All rows in a table have the same columns.
- Each column has a data type, such as integer, decimal number, character, or date.
- A series of relational functions and operators (such as SQL) is available to operate on the tables and their data elements.
In the ArcGIS.Core.Data API, Table objects can be obtained in the following ways:
- From the Geodatabase object:
Table table = geodatabase.OpenDataset<Table>("TableName");
In the case of a Geodatabase object representing a feature service, OpenDataset can be called with the string representation of the table's ID:
Table table = geodatabase.OpenDataset<Table>("2");
- From the FeatureLayer object:
ArcGIS.Desktop.Mapping.Layer selectedLayer = MapView.Active.GetSelectedLayers()[0];
if (selectedLayer is ArcGIS.Desktop.Mapping.FeatureLayer)
{
ArcGIS.Core.Data.Table table = (selectedLayer as FeatureLayer).GetTable();
}
- From the Row object:
ArcGIS.Desktop.Editing.Events.RowChangedEvent.Subscribe(args =>
{
ArcGIS.Core.Data.Row row = args.Row;
ArcGIS.Core.Data.Table table = row.GetTable();
});
Feature classes are homogeneous collections of common features, each having the same spatial representation—such as points, lines, or polygons—and a common set of attribute columns—for example, a line feature class for representing a road centerline. In ArcGIS Pro, the currently supported feature classes that are commonly used in the geodatabase are points, lines, and polygons.
In the following images, points, lines, and polygons are used to represent three datasets for the same area: park and recreation area locations as points, roads as lines, and facility sites as polygons.
In the ArcGIS.Core.Data API, the FeatureClass class inherits from the Table class. Obtaining FeatureClass objects is similar to obtaining Table objects as shown in the following examples:
- From the Geodatabase object:
FeatureClass featureClass = geodatabase.OpenDataset<FeatureClass>("FeatureClassName");
In the case of a Geodatabase object representing a feature service, OpenDataset can be called with the string representation of the feature class's ID:
FeatureClass featureClass = geodatabase.OpenDataset<FeatureClass>("2");
- From the FeatureLayer object:
ArcGIS.Desktop.Mapping.Layer selectedLayer = MapView.Active.GetSelectedLayers()[0];
if (selectedLayer is ArcGIS.Desktop.Mapping.FeatureLayer)
{
Table table = (selectedLayer as FeatureLayer).GetTable();
if(table is FeatureClass)
{
FeatureClass featureClass = table as FeatureClass;
}
}
- From the Feature object:
ArcGIS.Desktop.Editing.Events.RowChangedEvent.Subscribe(args =>
{
Row row = args.Row;
if(row is Feature)
{
FeatureClass featureClass = row.GetTable() as FeatureClass;
}
});
You can open a FeatureClass object using the following code:
Table table = geodatabase.OpenDataset<Table>("FeatureClassName");
The table is an ArcGIS.Core.Data.Table reference, but in reality, it's an ArcGIS.Core.Data.FeatureClass object. You could cast the table reference as a FeatureClass, and the cast would work as expected.
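A minimal sketch of this cast, assuming geodatabase is an open Geodatabase, "FeatureClassName" names a feature class, and the code runs inside QueuedTask.Run:

```csharp
using (Table table = geodatabase.OpenDataset<Table>("FeatureClassName"))
{
    // The pattern-match cast succeeds because the underlying dataset
    // is actually a FeatureClass.
    if (table is FeatureClass featureClass)
    {
        // featureClass and table reference the same underlying dataset.
    }
}
```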
A feature dataset is a collection of related feature classes that share a common coordinate system. Its primary purpose is to organize related feature classes into a common dataset for building a topology, a network dataset, or a terrain dataset. Rather than storing data itself, a feature dataset acts as a container for other datasets and maintains the extent of its contained datasets and a common spatial reference.
The ArcGIS.Core.Data.FeatureDataset object can only be obtained from the Geodatabase object.
FeatureDataset featureDataset = geodatabase.OpenDataset<FeatureDataset>("FeatureDatasetName");
A GIS integrates information about various types of geographic and non-geographic entities, many of which can be related.
- Geographic entities can relate to other geographic entities. For example, a building can be associated with a parcel.
- Geographic entities can relate to non-geographic entities. For example, a parcel of land can be associated with an owner.
- Non-geographic entities can relate to other non-geographic entities. For example, a parcel owner can be assigned a tax code.
Relationship classes in the geodatabase manage the associations between objects in one dataset (feature class or table) and objects in another. Objects at either end of the relationship can be features with geometry or records in a table.
Relationship classes help validate, and in some cases ensure, referential integrity. For example, the deletion or modification of one feature could delete or alter a related feature. Furthermore, a relationship class is stored in the geodatabase, which makes it accessible to anyone who uses the geodatabase. Relationship classes support the following cardinalities:
- One-to-one
- One-to-many
- Many-to-many
Note: Many-to-many relationship classes are considered attributed relationship classes. They can still be opened as relationship classes, but they can also be opened as attributed relationship classes.
The ArcGIS.Core.Data.RelationshipClass object can be obtained from the Geodatabase object. If the name of the relationship class is known, it can be opened using the Geodatabase.OpenDataset<RelationshipClass> method:
RelationshipClass relationshipClass = geodatabase.OpenDataset<RelationshipClass>("RelClassName");
Alternatively, you can fetch all of the relationship classes between two tables or feature classes by using the OpenRelationshipClasses method. The first parameter is the origin table name or object and the second parameter is the destination table name or object. The return value is an IReadOnlyList of RelationshipClass objects, because there could be more than one relationship between the specified tables. Note that the order of the parameters affects which relationship classes are returned.
IReadOnlyList<RelationshipClass> relationshipClasses = geodatabase.OpenRelationshipClasses("SourceTableName", "DestinationTableName");
IReadOnlyList<RelationshipClass> relationshipClassesAlternate = geodatabase.OpenRelationshipClasses(sourceTableObject, destinationTableObject);
If using a feature service geodatabase, the table names should be layer ID strings or the names of the feature classes as they are exposed via the feature service.
IReadOnlyList<RelationshipClass> srcToDestRelClasses = geodatabase.OpenRelationshipClasses("0", "2");
IReadOnlyList<RelationshipClass> srcToDestRelClassesAlternate = geodatabase.OpenRelationshipClasses("L3States", "L2Counties");
For more information on working with relationship classes, see Working with relationship classes.
Any relationship class—whether simple or composite and of any particular cardinality—can have attributes. Relationship classes with attributes are stored in a table in the database. This table contains at least the foreign key to the origin feature class or table and the foreign key to the destination feature class or table.
An attributed relationship can also contain other attributes. For example, if you consider an attributed relationship class that relates a feature class that stores water laterals and a feature class that stores hydrants, water lateral objects have their own attributes, and hydrant objects have their own attributes. The relationship class describes which water laterals feed which hydrants. Because you want to store some kind of information about that relationship—such as the type of riser connecting the two—you can store this information as attributes in the relationship class.
In the ArcGIS.Core.Data API, the AttributedRelationshipClass class inherits from RelationshipClass. All RelationshipClasses that have an intermediate table are treated as attributed relationship classes, regardless of whether or not user-defined attributes exist.
The ArcGIS.Core.Data.AttributedRelationshipClass object can be obtained from the Geodatabase object. If the name of the attributed relationship class is known, it can be opened using the Geodatabase.OpenDataset<AttributedRelationshipClass> method:
AttributedRelationshipClass attrRelationshipClass = geodatabase.OpenDataset<AttributedRelationshipClass>("AttrRelClassName");
Alternatively, you can fetch all of the relationship classes between two tables or feature classes by using the OpenRelationshipClasses method. The first parameter is the origin table name or object and the second parameter is the destination table name or object. The return value is an IReadOnlyList of RelationshipClass objects, because there could be more than one relationship between the specified tables. Note that the order of the parameters affects which relationship classes are returned.
IReadOnlyList<RelationshipClass> relationshipClasses = geodatabase.OpenRelationshipClasses("SourceTableName", "DestinationTableName");
IReadOnlyList<RelationshipClass> relationshipClassesAlternate = geodatabase.OpenRelationshipClasses(sourceTableObject, destinationTableObject);
If using a feature service geodatabase, the table names should be layer ID strings or the names of the feature classes as they are exposed via the feature service.
IReadOnlyList<RelationshipClass> srcToDestRelClasses = geodatabase.OpenRelationshipClasses("0", "2");
IReadOnlyList<RelationshipClass> srcToDestRelClassesAlternate = geodatabase.OpenRelationshipClasses("L3States", "L2Counties");
For more information on working with relationship classes, see Working with relationship classes.
Realtime datasets and feature classes are covered in ProConcepts Stream Layers.
Definition is a concept that represents the metadata about the dataset. While the dataset provides a way to access the data that it encapsulates, the definition describes the dataset's schema, unique properties associated with it, and so on.
Note: Definition is an abstraction that is conceptually equivalent to DataElement in the ArcObjects API.
It should be noted that all the classes that inherit from the ArcGIS.Core.Data.Dataset abstract class have methods, which are unique behaviors expected from that dataset. However, all the metadata must be accessed via the Definition abstract class.
One of the reasons for separating the metadata into a Definition is that opening a Definition is a lightweight operation when compared to opening a dataset. For example, if you only need information regarding Polyline FeatureClasses, getting and filtering definitions is more efficient than opening all the FeatureClasses to do the same.
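For example, all polyline feature class definitions can be found without opening each dataset. This sketch assumes a hypothetical file geodatabase path and that the code runs inside QueuedTask.Run:

```csharp
using (Geodatabase geodatabase = new Geodatabase(new FileGeodatabaseConnectionPath(new Uri("path\\to\\gdb"))))
{
    // Fetch every feature class definition (lightweight; no datasets are opened).
    IReadOnlyList<FeatureClassDefinition> definitions = geodatabase.GetDefinitions<FeatureClassDefinition>();

    // Filter to polyline feature classes using LINQ.
    IEnumerable<FeatureClassDefinition> polylineDefinitions =
        definitions.Where(definition => definition.GetShapeType() == GeometryType.Polyline);
}
```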
The classes that inherit from the Definition abstract class are as follows:
- TableDefinition—Metadata for an ArcGIS.Core.Data.Table dataset. Some major objects accessible from TableDefinition are Fields, Subtypes, Indexes, names of the ObjectIDField, SubtypeField, GlobalIDField, and so on.
- FeatureClassDefinition—Inherits from TableDefinition and has additional access to metadata specific to FeatureClasses, especially methods to access ShapeField, ShapeType, SpatialReference, Area, Length, and so on.
- FeatureDatasetDefinition—Metadata that provides access to the SpatialReference and Extent of the FeatureDataset.
- RelationshipClassDefinition—Metadata for an ArcGIS.Core.Data.RelationshipClass providing access to information such as the cardinality, origin and destination datasets, Origin Foreign Key field, if it's composite, and if it represents the RelationshipClass relating the dataset and its Attachment Table.
- AttributedRelationshipClassDefinition—Inherits from RelationshipClassDefinition and provides additional access to the Destination Foreign Key field, fields representing the Attribute column names, and so on.
- UnknownDefinition—Represents an attempt to access a definition of a dataset that is not yet supported by the API. This includes geometric networks, cadastral fabrics, and topologies.
The two options for accessing Definitions are as follows:
- Opening a Definition from a Geodatabase—Typically this is used when it is not anticipated that the dataset will be opened.
FeatureDatasetDefinition definition = geodatabase.GetDefinition<FeatureDatasetDefinition>("FeatureDatasetName");
- Opening the Definition from the dataset—This is used when the dataset is already open and the reference is accessible.
ArcGIS.Desktop.Mapping.Layer selectedLayer = MapView.Active.GetSelectedLayers()[0];
if (selectedLayer is ArcGIS.Desktop.Mapping.FeatureLayer)
{
using (Table table = (selectedLayer as FeatureLayer).GetTable())
{
TableDefinition definition = table.GetDefinition();
}
}
Tables, feature classes, and attributed relationship classes are composed of fields, which are synonymous with columns. The fields represent the schema of the corresponding dataset.
In ArcGIS.Core.Data, the fields are always returned as a read-only list of ArcGIS.Core.Data.Field objects accessible by calling GetFields() on the following objects:
- TableDefinition
- FeatureClassDefinition
- AttributedRelationshipClassDefinition
- RowCursor
- RowBuffer
- Row
- Feature
- AttributedRelationship
For the same dataset, the collection of fields obtained at the same time from the following sets of objects should be equivalent:
- TableDefinition, RowCursor, RowBuffer, and Row
- FeatureClassDefinition, RowCursor, RowBuffer, and Feature
- AttributedRelationshipClassDefinition and AttributedRelationship
After obtaining the read-only collection, you can use LINQ to filter the list as shown below:
using (Geodatabase geodatabase = new Geodatabase(new FileGeodatabaseConnectionPath(new Uri("path\\to\\gdb"))))
using (FeatureClass enterpriseFeatureClass = geodatabase.OpenDataset<FeatureClass>("LocalGovernment.GDB.FacilitySite"))
{
FeatureClassDefinition facilitySiteDefinition = enterpriseFeatureClass.GetDefinition();
IReadOnlyList<Field> fields = facilitySiteDefinition.GetFields();
IEnumerable<Field> dateFields = fields.Where(field => field.FieldType == FieldType.Date);
}
Starting at 3.2, support was added for 64-bit field types: 64-bit ObjectID (OID) and BigInteger. The 64-bit OID and BigInteger (64-bit integer) fields allow developers to represent larger integer values.
These 64-bit field types are based on the IEEE 754 specification; the standard allocates 1 bit for the sign, 11 for the exponent, and 52 for the significand. Therefore, the maximum integer value that can be written to a geodatabase 64-bit field is 9007199254740991, or (2^53)-1, but values up to 18446744073709552000, or 2^64, can be read.
// Creating a 64-bit Object ID
FieldDescription oidFieldDescription_64 = new FieldDescription("ObjectID", FieldType.OID)
{
Length = 8
};
// Creating a 64-bit integer
FieldDescription bigIntegerFieldDescription = new FieldDescription("BigInt", FieldType.BigInteger);
The 32-bit ObjectID has a length of 4 bytes, and the 64-bit ObjectID has a length of 8 bytes.
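As a sketch of the write limit described above, assuming "BigInt" is the BigInteger field created earlier and rowBuffer comes from the table's CreateRowBuffer method inside QueuedTask.Run:

```csharp
// (2^53) - 1 is the largest value that can be safely written to a BigInteger field.
rowBuffer["BigInt"] = 9007199254740991L;
```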
The Pro SDK also supports DateOnly and TimeOnly fields to represent a specific date or time of day, respectively.
The DateOnly field represents a particular date without a time. This field is ideal for storing specific dates where the time of the event is not of interest, such as a date of birth.
The TimeOnly field represents a time without a date. This field is ideal for storing events where the date is irrelevant because the event occurs every day.
//Creating fields
// 9/28/2014 (DateOnly)
FieldDescription earthquakeDateOnlyFieldDescription = new FieldDescription("Earthquake_DateOnly", FieldType.DateOnly);
// 1:16:42 AM (TimeOnly)
FieldDescription earthquakeTimeOnlyFieldDescription = new FieldDescription("Earthquake_TimeOnly", FieldType.TimeOnly);
// Creating a row buffer
rowBuffer["Earthquake_DateOnly"] = new DateOnly(2014, 9, 28);
rowBuffer["Earthquake_TimeOnly"] = new TimeOnly(1, 16, 42);
Note: Oracle enterprise geodatabases don't support DateOnly and TimeOnly fields.
The TimeStampOffset field represents a Coordinated Universal Time (UTC) date and time value, with an offset value indicating the difference between the local time and UTC.
// Creating a time stamp offset field
// 9/28/2014 1:16:42.000 AM -09:00
FieldDescription earthquakeTimestampOffsetFieldDescription = new FieldDescription("Earthquake_TimestampOffset_Local", FieldType.TimestampOffset);
// creating a row buffer
rowBuffer["Earthquake_TimestampOffset_Local"] = new DateTimeOffset(2014, 9, 28, 1, 16, 42, 0, TimeSpan.FromHours(-9));
Diagram of the date and time data type components.
Note: PostgreSQL enterprise geodatabases don't support TimestampOffset fields.
See ArcGIS data types for a complete list and additional details about each data type that can be applied to a field using the Pro SDK.
The precision property of a field represents the maximum number of digits in a number. Scale represents the number of digits to the right of the decimal point in a number. For example, the number 4703338.13 has a scale of 2 and a precision of 9.
FieldDescription fieldDescription = new FieldDescription("LandArea", FieldType.Double)
{
Precision = 9,
Scale = 2
};
File and mobile geodatabases do not support precision or scale values. If you provide a value for precision or scale, it will be ignored.
For date fields, precision refers to their capacity to record millisecond values. A standard date field, FieldType.Date, with a precision value of 1 can store time to the millisecond.
//Precision applies to the Date field type
FieldDescription highPrecisionDate = new FieldDescription("EarthQuake_Time", FieldType.Date);
highPrecisionDate.Precision = 1;
highPrecisionDate.SetDefaultValue(new DateTime(2023,07,14,14,54,03,105));
In a geodatabase, you can use rules to enforce different types of constraints on the tables and feature classes. These include rules on attributes and rules on relationships. Rules on attributes are the most common, and they're created using domains. Domains are used to specify permissible values that can be assigned to a field.
The following are the different types of domains in a geodatabase:
- Coded value domains—Specify a list of valid values, each with a string representation.
- Range domains—Specify a range of valid values through minimum and maximum numeric values.
In the ArcGIS.Core.Data API, an ArcGIS.Core.Data.Domain is a first-class entity modeled as an abstract class. There are two classes, RangeDomain and CodedValueDomain, that inherit from the Domain abstract class. The Domain provides access to the Name, FieldType, and Description of the Domain. RangeDomain provides additional access to the Min and Max values. The CodedValueDomain provides access to the code-value pairs returned as a SortedList<object, string>, where the key is the code.
Domains can be accessed in two ways. The first is the Field object's GetDomain method, which returns a Domain reference that must be safely cast to the underlying Domain type: CodedValueDomain or RangeDomain. The GetDomain method has an optional Subtype parameter, which can be used to obtain the domain assigned to the field for the subtype value represented by the Subtype object passed in as the argument. If there is no domain assigned for the specified subtype, a null value is returned. If no subtype is passed in, the default domain for the field is returned. If no domain is assigned to the field, a null value is returned.
Another way to access Domains is via the Geodatabase object's GetDomains method. Calling this method will return a read-only list of Domain instances whose derived types are CodedValueDomain and RangeDomain.
using (Geodatabase geodatabase = new Geodatabase(new FileGeodatabaseConnectionPath(new Uri("path\\to\\gdb"))))
using (FeatureClass enterpriseFeatureClass = geodatabase.OpenDataset<FeatureClass>("LocalGovernment.GDB.FacilitySite"))
{
FeatureClassDefinition facilitySiteDefinition = enterpriseFeatureClass.GetDefinition();
IReadOnlyList<Field> fields = facilitySiteDefinition.GetFields();
Field doubleField = fields.First(field => field.FieldType == FieldType.Double);
Domain domain = doubleField.GetDomain();
if (domain is RangeDomain)
{
RangeDomain rangeDomain = domain as RangeDomain;
// We can convert directly to double because the domain was obtained from a double field.
double minValue = Convert.ToDouble(rangeDomain.GetMinValue());
double maxValue = Convert.ToDouble(rangeDomain.GetMaxValue());
}
if (domain is CodedValueDomain)
{
CodedValueDomain codedValueDomain = domain as CodedValueDomain;
SortedList<object, string> codedValuePairs = codedValueDomain.GetCodedValuePairs();
IEnumerable<KeyValuePair<object, string>> filteredPairs = codedValuePairs.Where(pair => Convert.ToDouble(pair.Key) > 20.0d);
double code = Convert.ToDouble(codedValueDomain.GetCodedValue("sampleValue"));
string value = codedValueDomain.GetName(21.45d);
}
}
Subtypes are a way to partition objects in a single table into groups with similar rules and behavior. Although subtypes share a common set of fields, each group can have its own attribute and relationship rules, as well as different default values at creation time. An example of this is parcels of land that are often divided into residential, commercial, and industrial subtypes. Subtypes are defined for a single table or feature class and cannot be shared.
Subtypes for the table or feature class can be obtained via TableDefinition and FeatureClassDefinition. Using the GetSubtypes method returns a read-only list of subtype objects. Each subtype object provides access to the subtype code and the subtype name.
using (Geodatabase geodatabase = new Geodatabase(new FileGeodatabaseConnectionPath(new Uri("path\\to\\gdb"))))
using (FeatureClass enterpriseFeatureClass = geodatabase.OpenDataset<FeatureClass>("Fittings"))
{
FeatureClassDefinition featureClassDefinition = enterpriseFeatureClass.GetDefinition();
Subtype bendSubtype = featureClassDefinition.GetSubtypes().First(subtype => subtype.GetName().Equals("BEND"));
Subtype sleeveSubtype = featureClassDefinition.GetSubtypes().First(subtype => subtype.GetName().Equals("SLEEVE"));
IReadOnlyList<Field> listOfFields = featureClassDefinition.GetFields();
int typeCodeDescFieldIndex = featureClassDefinition.FindField("TYPE_CODE_DESC");
Domain domainForBend = listOfFields[typeCodeDescFieldIndex].GetDomain(bendSubtype);
string bendDomainName = domainForBend.GetName();
Domain domainForSleeve = listOfFields[typeCodeDescFieldIndex].GetDomain(sleeveSubtype);
string sleeveDomainName = domainForSleeve.GetName();
Domain domainForNoSubtype = listOfFields[typeCodeDescFieldIndex].GetDomain();
string masterDomainName = domainForNoSubtype.GetName();
}
The following are the two ways to query data using this API:
- Selection—Access either the entire list of object IDs or the list of object IDs for rows/features matching criteria. Selection is typically used for obtaining object IDs that satisfy criteria so that you can, for example, highlight the features on the map. The advantage of using selection is that it's a lightweight operation compared to getting the entire row. Selection is also helpful for combining multiple disparate selections from the same Table/FeatureClass into a single selection.
- Search—Used to access not only the object ID, but also values for all or a subset of fields on the Table/FeatureClass. Search is also required when any editing of data has to be performed.
A query filter is used to restrict the records retrieved from the database during a query (with the QueryFilter.WhereClause property), specify which fields of the query result are populated (with the QueryFilter.SubFields property), order the returned result in a certain order (with the QueryFilter.PostfixClause property), and possibly get only unique/distinct results (using the QueryFilter.PrefixClause property). Query filters are used to create cursors and selections and are common throughout the Geodatabase API.
ArcGIS.Core.Data.QueryFilter represents the class for creating a query filter. It is one of the very few objects in the ArcGIS.Core.Data API that is thread-agnostic (that is, you can create a QueryFilter on a non-MCT thread).
QueryFilter queryFilter = new QueryFilter();
string whereClause = "OWNER_NAME = 'ADA IAN'";
queryFilter.WhereClause = whereClause;
string prefixClause = "DISTINCT";
queryFilter.PrefixClause = prefixClause;
string postfixClause = "ORDER BY OWNER_NAME";
queryFilter.PostfixClause = postfixClause;
string subFields = "OBJECTID, OWNER_NAME";
queryFilter.SubFields = subFields;
(Note that keywords such as DISTINCT in prefix and postfix clauses only work on databases that support them.)
ArcGIS.Core.Data.SpatialQueryFilter inherits from ArcGIS.Core.Data.QueryFilter and has additional properties to specify the filter geometry and spatial relationship type (using the ArcGIS.Core.Data.SpatialRelationship enum) to use for the filter geometry to filter the results.
MapPoint minPoint = MapPointBuilderEx.CreateMapPoint(-123, 41);
MapPoint maxPoint = MapPointBuilderEx.CreateMapPoint(-121, 43);
SpatialQueryFilter spatialFilter = new SpatialQueryFilter
{
WhereClause = "OWNER_NAME = 'ADA IAN'",
FilterGeometry = EnvelopeBuilderEx.CreateEnvelope(minPoint, maxPoint),
SpatialRelationship = SpatialRelationship.Intersects,
};
A Where clause is a component of a Structured Query Language (SQL) statement that defines attribute constraints on a query. Where clauses are used as properties by QueryFilter and SpatialQueryFilter. A Where clause can consist of one or more expressions, linked by the logical connectors AND and OR. The following is the valid format for a Where clause expression:
operand1 predicate operand2
Operands can consist of a field name from a table, a numeric constant, a character string, a simple arithmetic expression, a function call, or a subquery.
Predicate | Meaning | Example |
---|---|---|
= | Is equal to | TYPE = 3 |
<> | Is not equal to | PROVINCE <> 'NS' |
>= | Is greater than or equal to | POPULATION >= 10000 |
<= | Is less than or equal to | AVG_TEMP <= 25 |
> | Is greater than | HEIGHT > 10 |
< | Is less than | SPD_LIMIT < 65 |
[NOT] BETWEEN | Is between a minimum and maximum value | DIAMETER BETWEEN 5 AND 10 |
[NOT] IN | Is in a list or the results of a subquery | TYPE IN ('City', 'Town') |
[NOT] EXISTS | Checks a subquery for results | EXISTS (SELECT * FROM PARCELS WHERE TYPE='RES') |
[NOT] LIKE | Matches a string pattern | CITY_NAME LIKE 'Montr_al' |
IS [NOT] NULL | Is value NULL | WEBSITE IS NULL |
The following are the two types of wildcards that can be used with the LIKE predicate:
- Single-character wildcard (underscore, _)
- Multiple-character wildcard (percent sign, %)
Strings containing single-character wildcards evaluate to true if the wildcard corresponds to a single character in the comparison string, whereas strings containing multiple-character wildcards evaluate to true if the wildcard corresponds with zero-to-many characters in the comparison string.
The following are example statements:
- 'Belmont' LIKE '%mont' (evaluates to true)
- 'Belmont' LIKE '%elmont%' (evaluates to true)
- 'Belmont' LIKE 'Belmont_' (evaluates to false)
- 'Belmont' LIKE 'Bel_ont' (evaluates to true)
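The wildcard patterns above can be used in a query filter's where clause. In this sketch, the field name CITY_NAME is hypothetical:

```csharp
// Match any city name beginning with "Bel" (e.g., 'Belmont').
QueryFilter likeFilter = new QueryFilter
{
    WhereClause = "CITY_NAME LIKE 'Bel%'"
};
```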
When writing a where clause that tests for a Guid field, you should always format the Guid as follows:
'{" + guid.ToString().ToUpper() + "}'
The call to ToUpper is required because some databases store Guids as a native data type, and some use a capitalized string.
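Putting the format above together in a query filter (a sketch; the field name ASSET_GUID is hypothetical):

```csharp
Guid guid = Guid.NewGuid();

// Braces and upper-casing follow the Guid formatting rule described above.
QueryFilter guidFilter = new QueryFilter
{
    WhereClause = "ASSET_GUID = '{" + guid.ToString().ToUpper() + "}'"
};
```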
The ObjectIDs property on the QueryFilter class provides a shortcut for fetching a set of rows that correspond to a set of ObjectIDs. Notably, some underlying database systems limit the number of values that can be specified within an SQL IN clause. This property works around the limitation by automatically breaking up the request into multiple requests as needed. The ObjectIDs property can also be combined with a Where clause.
public RowCursor SearchingATable(Table table, IReadOnlyList<long> objectIDs)
{
QueryFilter queryFilter = new QueryFilter()
{
ObjectIDs = objectIDs
};
return table.Search(queryFilter);
}
In the ArcGIS.Core.Data API, to obtain a Selection, the Select method must be called on the Table/FeatureClass or on a Selection object. The Select method takes the following parameters:
- QueryFilter—Can be either an ArcGIS.Core.Data.QueryFilter for Table/FeatureClass or an ArcGIS.Core.Data.SpatialQueryFilter for FeatureClass. The default value is null, which means that the Selection will not filter any object IDs.
- SelectionOption—Takes an enum value that can be SelectionOption.Normal, SelectionOption.None, or SelectionOption.OnlyOne. This is a default parameter with a default value of SelectionOption.Normal.
- SelectionOption.Normal—Gives all the records that match the filter.
- SelectionOption.OnlyOne—Gives the first record matching the filter.
- SelectionOption.None—Gives an empty selection. This is intended to be used for combining multiple selections.
using (Geodatabase geodatabase = new Geodatabase(new DatabaseConnectionFile(new Uri("path\\to\\sde\\file\\sdefile.sde"))))
using (Table enterpriseTable = geodatabase.OpenDataset<Table>("LocalGovernment.GDB.piCIPCost"))
{
QueryFilter anotherQueryFilter = new QueryFilter { WhereClause = "FLOOR = 1 AND WING = 'E'" };
// Use SelectionOption.Normal to select all matching entries.
using (Selection anotherSelection = enterpriseTable.Select(anotherQueryFilter, SelectionType.ObjectID, SelectionOption.Normal))
{
}
// This can be used to get one record that matches the criteria. No assumptions can be made about which record satisfying the criteria is selected.
using (Selection onlyOneSelection = enterpriseTable.Select(anotherQueryFilter, SelectionType.ObjectID, SelectionOption.OnlyOne))
{
// This will always be one.
int singleRecordCount = onlyOneSelection.GetCount();
// This will always have one record.
IReadOnlyList<long> listWithOneValue = onlyOneSelection.GetObjectIDs();
}
// This can be used to obtain an empty selection, which can be used as a container to combine results from different selections.
using (Selection emptySelection = enterpriseTable.Select(anotherQueryFilter, SelectionType.ObjectID, SelectionOption.None))
{
}
}
Searching a Table or FeatureClass is necessary for reading and modifying the data in rows matching the query specified by ArcGIS.Core.Data.QueryFilter or ArcGIS.Core.Data.SpatialQueryFilter.
The result of the search is an ArcGIS.Core.Data.RowCursor, which has an interface similar to the System.Collections.IEnumerator interface (with the exception of the Reset method). RowCursor.MoveNext() returns a Boolean representing whether the last record in the result has been returned. If the result is empty, the first call to RowCursor.MoveNext() will return false. Typically, to access all the rows in RowCursor, MoveNext is used in conjunction with a while loop. If RowCursor.MoveNext() is true, RowCursor.Current gives the ArcGIS.Core.Data.Row reference. The underlying object is ArcGIS.Core.Data.Row or ArcGIS.Core.Data.Feature for a Table or a FeatureClass, respectively. If the object is really a Feature object, it can be cast to a Feature to access the Shape.
To Search a Table or FeatureClass, you can use the Search method, which has the following two parameters:
- QueryFilter—Specifies the filter for fetching the rows matching the filter. The default value is null, and a null value fetches all the rows in the table.
- Boolean for Recycling—Specifies whether successive rows obtained from the row cursor will use a new memory location for each row or use the same memory location as MoveNext is called. Recycling is explained in more detail in later sections. The default is true.
If you intend to modify a row, do not use SubFields in a QueryFilter.
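Conversely, for read-only searches, restricting the fetched fields can improve performance. A minimal sketch, assuming illustrative field names:

```csharp
// Read-only search: fetch only the fields needed.
// Do not set SubFields when you intend to modify the rows returned.
QueryFilter readOnlyFilter = new QueryFilter
{
    WhereClause = "FLOOR = 1",
    SubFields = "OBJECTID, NAME"
};
```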
using (Geodatabase geodatabase = new Geodatabase(new DatabaseConnectionFile(new Uri("path\\to\\sde\\file\\sdefile.sde"))))
using (FeatureClass schoolBoundaryFeatureClass = geodatabase.OpenDataset<FeatureClass>("LocalGovernment.GDB.SchoolBoundary"))
{
// Use a spatial query filter to find all features that have a certain district name and lie within a given polygon.
SpatialQueryFilter spatialQueryFilter = new SpatialQueryFilter
{
WhereClause = "DISTRCTNAME = 'Indian Prairie School District 204'",
FilterGeometry = new PolygonBuilder(new List<Coordinate2D>
{
new Coordinate2D(1021880, 1867396),
new Coordinate2D(1028223, 1870705),
new Coordinate2D(1031165, 1866844)
}).ToGeometry(),
SpatialRelationship = SpatialRelationship.Within
};
// Without Recycling
using (RowCursor indianPrairieCursor = schoolBoundaryFeatureClass.Search(spatialQueryFilter, false))
{
while (indianPrairieCursor.MoveNext())
{
using (Feature feature = (Feature)indianPrairieCursor.Current)
{
int nameFieldIndex = feature.FindField("NAME");
string districtName = Convert.ToString(feature["DISTRCTNAME"]);
double area = Convert.ToDouble(feature["SCHOOLAREA"]);
string name = Convert.ToString(feature[nameFieldIndex]);
Geometry geometry = feature.GetShape();
}
}
}
}
Note that if RowCursor.MoveNext() returns true, a Row (or Feature) object is created. Even if this row (RowCursor.Current) is not otherwise needed, it should be properly disposed through a call to System.IDisposable.Dispose or with the use of a using statement.
The QueryFilter supports pagination, dividing a large query result into smaller discrete results using the Offset and RowCount properties. Offset specifies how many rows will be skipped from the result of the query. RowCount specifies how many rows will be returned by the query.
QueryFilter queryFilter = new QueryFilter()
{
ObjectIDs = objectIDs,
// Number of rows to return (return 100 rows)
RowCount = 100,
// Positional offset to skip number of rows (skip 20 rows)
Offset = 20,
PostfixClause = "ORDER BY OBJECTID"
};
To prevent arbitrary results in some datastores, the PostfixClause property should be used in conjunction with the Offset or RowCount property.
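The properties above can be combined to page through a large table. A hedged sketch, assuming table is an open ArcGIS.Core.Data.Table and the code runs on the MCT inside QueuedTask.Run:

```csharp
int pageSize = 100;
int offset = 0;
bool morePages = true;
while (morePages)
{
    QueryFilter pageFilter = new QueryFilter
    {
        RowCount = pageSize,
        Offset = offset,
        // A stable ordering keeps the pages consistent between queries.
        PostfixClause = "ORDER BY OBJECTID"
    };
    morePages = false;
    using (RowCursor rowCursor = table.Search(pageFilter, false))
    {
        while (rowCursor.MoveNext())
        {
            morePages = true;
            using (Row row = rowCursor.Current)
            {
                // Process the row...
            }
        }
    }
    offset += pageSize;
}
```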
ArcGIS.Core.Data.Feature inherits from ArcGIS.Core.Data.Row and adds the methods for getting and setting the shape on the feature.
On Row (and Feature), there is an integer-based indexer and a string-based indexer for accessing the attributes of the row. If the index of the field is known, the integer-based indexer is used, and the string-based indexer is used when the field name is known. The return value is always System.Object and must be cast into the corresponding type based on the field type inferred from the corresponding Field object.
To work with geometries, GetShape() and SetShape(Geometry) must be used. Something to keep in mind while modifying shapes is that since Geometry is an immutable object, when you set the Shape to a new geometry, the corresponding Builder needs to be used to build a new geometry (possibly using the existing geometry obtained from Feature.GetShape()).
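As a hedged sketch of that pattern, the following shifts a point feature with MapPointBuilderEx; the 10-unit offset is illustrative, and Store is typically invoked inside an edit operation:

```csharp
// Geometry is immutable, so build a modified copy with a builder.
// Assumes 'feature' is a point Feature from a non-recycling cursor; run on the MCT.
MapPoint existingPoint = (MapPoint)feature.GetShape();
MapPointBuilderEx pointBuilder = new MapPointBuilderEx(existingPoint);
pointBuilder.X = existingPoint.X + 10;  // illustrative 10-unit shift east
feature.SetShape(pointBuilder.ToGeometry());
feature.Store();  // typically called inside an edit operation
```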
Recycling can be used for better performance while accessing complex and large datasets and is typically used to render the shapes on the map or populate a table in the UI.
To enable recycling, the second parameter to Search is set to true (which is the default parameter value). With recycling, RowCursor.Current always points to the same underlying row object, which is repopulated on each call to MoveNext. Row references obtained in previous iterations therefore point to the values of the current row, which means that once MoveNext is called, the previous row is no longer available.
using (Geodatabase geodatabase = new Geodatabase(new DatabaseConnectionFile(new Uri("path\\to\\sde\\file\\sdefile.sde"))))
using (FeatureClass schoolBoundaryFeatureClass = geodatabase.OpenDataset<FeatureClass>("LocalGovernment.GDB.SchoolBoundary"))
{
QueryFilter queryFilter = new QueryFilter
{
WhereClause = "DISTRCTNAME = 'Indian Prairie School District 204'"
};
using (RowCursor indianPrairieCursor = schoolBoundaryFeatureClass.Search(queryFilter, true))
{
Row row1 = null;
Row row2 = null;
try
{
if (indianPrairieCursor.MoveNext())
{
row1 = indianPrairieCursor.Current;
}
if (indianPrairieCursor.MoveNext())
{
row2 = indianPrairieCursor.Current;
}
// If the code reaches this point, row1 and row2 are both referencing the second row object obtained
// when MoveNext was called the second time.
}
finally
{
row1?.Dispose();
row2?.Dispose();
}
}
}
While crafting queries, creating new fields, or creating new tables that have to be executed against multiple platforms (such as file geodatabases, SQL Server enterprise geodatabases, Oracle enterprise geodatabases, shapefiles, and so on), there are subtle differences to be taken into consideration.
Some of these considerations are as follows:
- The specific function name for the current flavor of database against which the query is executing
- The correct qualification for the table name or column name
- The owner name for the given fully-qualified table name
- Whether string comparison is case sensitive
- The timestamp, date, or time format accepted or returned
- Whether the table or field name can start with a certain character or contain a certain character
- Whether a given word is considered a keyword by the database
- What SQL clauses or predicates can be used in the query
These considerations are necessary to be able to develop applications that can work with multiple data platforms.
The SQLSyntax class, obtained from the Datastore, provides the capability to answer these questions and write code that will be successful in its execution. See Using SQLSyntax to form platform agnostic queries for an example.
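A minimal sketch of querying the SQLSyntax object, assuming geodatabase is an open Geodatabase and the name parts are illustrative:

```csharp
SQLSyntax sqlSyntax = geodatabase.GetSQLSyntax();
// Compose a fully-qualified table name in the form this platform expects.
string qualifiedName = sqlSyntax.QualifyTableName("database", "owner", "Parcels");
```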
Joins are used to combine fields from two tables into a single table representation. The tables that participate in a join can be from the same or different datastores. Joins are created from relationship classes. The relationship class can come from a geodatabase or be defined in memory as a virtual relationship class. The virtual relationship class (also used for Relate) is an important concept, as it is the means by which a join can span tables from different datastores; examples would be an enterprise feature class joined with a query layer, or a shapefile feature class joined with a file geodatabase feature class. The ability to join across datastores separates joins from QueryDefs. Joins do not support one-to-many cardinality. Joins are read-only; however, they reflect data changes that are made to the base tables. Creating a join requires two tables (or feature classes) and a relationship class.
To obtain a Join in ArcGIS.Core.Data API, the first step is to obtain the relationship class. This can be a RelationshipClass that is opened from the geodatabase, or you can create an in-memory RelationshipClass using the VirtualRelationshipClassDescription. To create a VirtualRelationshipClassDescription you need both the left and right tables. For example,
ArcGIS.Desktop.Mapping.Layer selectedLayer = MapView.Active.GetSelectedLayers()[0];
if (selectedLayer is ArcGIS.Desktop.Mapping.FeatureLayer)
{
using (Geodatabase geodatabase = new Geodatabase(new DatabaseConnectionFile(new Uri("path\\to\\sde\\file\\sdefile.sde"))))
using (FeatureClass leftFeatureClass = geodatabase.OpenDataset<FeatureClass>("LocalGovernment.GDB.SchoolBoundary"))
using (Table rightTable = (selectedLayer as FeatureLayer).GetTable())
{
FeatureClassDefinition leftFeatureClassDefinition = leftFeatureClass.GetDefinition();
TableDefinition rightTableDefinition = rightTable.GetDefinition();
// The origin primary key comes from the left (origin) table and the destination
// foreign key from the right (destination) table.
Field originPrimaryKey = leftFeatureClassDefinition.GetFields().FirstOrDefault(field => field.Name.Equals("primaryKeyField"));
Field destinationForeignKey = rightTableDefinition.GetFields().FirstOrDefault(field => field.Name.Equals("foreignKeyField"));
VirtualRelationshipClassDescription relationshipClassDescription = new VirtualRelationshipClassDescription(originPrimaryKey, destinationForeignKey, RelationshipCardinality.OneToMany);
RelationshipClass relationshipClass = leftFeatureClass.RelateTo(rightTable, relationshipClassDescription);
}
}
Once you have a relationship class, use the JoinDescription to specify the attributes of the Join such as the following:
- RelationshipClass: This is a mandatory argument for the join.
- JoinType: Whether the join is an inner join or a left outer join.
- JoinDirection: Whether the join goes from the left table to the right table (Forward) or right table to the left table (Backward).
- TargetFields: The fields from the left table that must be included. (Note that if the Join is Backward, the target fields are from the right table.)
- QueryFilter: To filter the contents of the Join based on a query.
- Selection: The selection to be used in creating the join.
JoinDescription joinDescription = new JoinDescription(relationshipClass)
{
JoinType = JoinType.LeftOuterJoin,
JoinDirection = JoinDirection.Backward
};
using (Join join = new Join(joinDescription))
using (Table joinedTable = join.GetJoinedTable())
{
// The joinedTable is an ArcGIS.Core.Data.Table reference that can be either an ArcGIS.Core.Data.Table or an
// ArcGIS.Core.Data.FeatureClass (depending upon whether the join has a Shape column).
}
Some things to keep in mind about Joins:
- Definitions are not supported on Joins since they are temporary in nature.
- Selections on the Table obtained from the join are based on distinct values in the unique identifier column. So, if you have a Join using a relationship class that is a one-to-many relationship class and the same row on the left table matches multiple rows on the right table, the selection will return only one of the matches, but the search will return all of them.
- Every attempt will be made to perform the join on the server side (executing the sql query, which would include the join to return the results) to improve performance. In some cases, this might not be possible (for example, when the left and right tables are from different datastores), and the join will be performed on the client side.
- The JoinDescription.ErrorOnFailureToProcessJoinOnServerSide property can be set to force the join to take place on the server. If the join cannot be executed on the server, an error is returned.
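A hedged sketch of forcing a server-side join, assuming relationshipClass comes from the snippet above:

```csharp
JoinDescription serverOnlyJoinDescription = new JoinDescription(relationshipClass)
{
    JoinType = JoinType.InnerJoin,
    // Fail rather than silently fall back to a client-side join.
    ErrorOnFailureToProcessJoinOnServerSide = true
};
```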
The geodatabase also provides the ability to create row cursors in a sorted order. This is accomplished through the Table.Sort() method. Sort takes a TableSortDescription object as an argument, which combines a QueryFilter and a set of SortDescriptions.
The SortDescription class allows sorting on a particular field. Properties on this object allow the caller to specify whether the sort is ascending or descending, and case-sensitive or case-insensitive.
The following snippet sorts a World Cities table by country name, and then by city name:
public RowCursor SortWorldCities(FeatureClass worldCitiesTable)
{
using (FeatureClassDefinition featureClassDefinition = worldCitiesTable.GetDefinition())
{
Field countryField = featureClassDefinition.GetFields().First(x => x.Name.Equals("COUNTRY_NAME", StringComparison.OrdinalIgnoreCase));
Field cityNameField = featureClassDefinition.GetFields().First(x => x.Name.Equals("CITY_NAME", StringComparison.OrdinalIgnoreCase));
// Create SortDescription for Country field
SortDescription countrySortDescription = new SortDescription(countryField);
countrySortDescription.CaseSensitivity = CaseSensitivity.Insensitive;
countrySortDescription.SortOrder = SortOrder.Ascending;
// Create SortDescription for City field
SortDescription citySortDescription = new SortDescription(cityNameField);
citySortDescription.CaseSensitivity = CaseSensitivity.Insensitive;
citySortDescription.SortOrder = SortOrder.Ascending;
// Create our TableSortDescription
TableSortDescription tableSortDescription = new TableSortDescription(new List<SortDescription>() { countrySortDescription, citySortDescription });
return worldCitiesTable.Sort(tableSortDescription);
}
}
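A hedged usage sketch of the routine above; the caller is responsible for disposing the returned cursor:

```csharp
using (RowCursor sortedCursor = SortWorldCities(worldCitiesFeatureClass))
{
    while (sortedCursor.MoveNext())
    {
        using (Row row = sortedCursor.Current)
        {
            // Rows arrive ordered by country, then by city.
            string country = Convert.ToString(row["COUNTRY_NAME"]);
            string city = Convert.ToString(row["CITY_NAME"]);
        }
    }
}
```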
The geodatabase provides powerful tools to calculate statistics on table fields. These statistics are computed on the server if supported. Statistics are calculated by calling the CalculateStatistics() method on the Table class. CalculateStatistics() takes a TableStatisticsDescription object that specifies how the statistics should be calculated.
The QueryFilter property allows the table rows to be filtered before statistics calculation takes place. The optional GroupBy property will group the rows together, and the optional OrderBy property will sort those groups. If the underlying datastore doesn't support Group By and Order By (e.g., shapefiles), these properties are ignored.
Finally, the StatisticsDescriptions property contains a set of statistics to calculate.
The StatisticsDescription class allows the caller to specify a set of different functions to calculate on a given field. Together, these classes can be used to calculate multiple statistics on multiple fields in a single call to Table.CalculateStatistics().
The return value is a set of TableStatisticsResult objects. If no GroupBy was specified, or if the underlying datastore doesn't support GroupBy, one TableStatisticsResult is returned. Otherwise, one TableStatisticsResult instance is returned for each field grouping.
The GroupBy property on the TableStatisticsResult object describes the grouping that was returned. The key-value pair represents the Field and value of that field for the group. One key-value pair is returned for each field in the original GroupBy.
The StatisticsResults list contains a list of the statistics that were calculated.
In the StatisticsResults object, only the appropriate properties are filled-in, based on which statistics were requested for each field.
For example, consider a Country table that contains three fields of interest: Region, Population_1990, and Population_2000. There are four regions, "North", "South", "East", and "West". For each of these regions we want to compute the sum and average of the Population_1990 and Population_2000 fields. To perform this calculation, assemble a TableStatisticsDescription that groups on Region and requests the Sum and Average statistics on the two population fields; Table.CalculateStatistics() then returns one TableStatisticsResult object per region. The following code sample implements this example:
// Calculate the Sum and Average of the Population_1990 and Population_2000 fields, grouped and ordered by Region
public void CalculateStatistics(FeatureClass countryFeatureClass)
{
using (FeatureClassDefinition featureClassDefinition = countryFeatureClass.GetDefinition())
{
// Get fields
Field regionField = featureClassDefinition.GetFields().First(x => x.Name.Equals("Region"));
Field pop1990Field = featureClassDefinition.GetFields().First(x => x.Name.Equals("Population_1990"));
Field pop2000Field = featureClassDefinition.GetFields().First(x => x.Name.Equals("Population_2000"));
// Create StatisticsDescriptions
StatisticsDescription pop1990StatisticsDescription = new StatisticsDescription(pop1990Field, new List<StatisticsFunction>() { StatisticsFunction.Sum, StatisticsFunction.Average });
StatisticsDescription pop2000StatisticsDescription = new StatisticsDescription(pop2000Field, new List<StatisticsFunction>() { StatisticsFunction.Sum, StatisticsFunction.Average });
// Create TableStatisticsDescription
TableStatisticsDescription tableStatisticsDescription = new TableStatisticsDescription(new List<StatisticsDescription>() { pop1990StatisticsDescription, pop2000StatisticsDescription });
tableStatisticsDescription.GroupBy = new List<Field>() { regionField };
tableStatisticsDescription.OrderBy = new List<SortDescription>() { new SortDescription(regionField) };
// Calculate Statistics
IReadOnlyList<TableStatisticsResult> statisticsResults = countryFeatureClass.CalculateStatistics(tableStatisticsDescription);
// Code to process results goes here...
}
}
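As a hedged sketch of processing those results, one TableStatisticsResult per region; the member names (GroupBy, StatisticsResults, Sum, Average) follow the descriptions above, so check the API reference for exact signatures:

```csharp
foreach (TableStatisticsResult tableStatisticsResult in statisticsResults)
{
    // GroupBy identifies the group, e.g. Region = "North"; each entry pairs
    // a grouping Field with its value for this group.
    foreach (var groupEntry in tableStatisticsResult.GroupBy)
    {
        // Inspect groupEntry.Key (the Field) and groupEntry.Value...
    }
    foreach (StatisticsResult statisticsResult in tableStatisticsResult.StatisticsResults)
    {
        // Only the requested statistics are populated.
        double sum = statisticsResult.Sum;
        double average = statisticsResult.Average;
    }
}
```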
There are many different constructs for querying data, such as searching a table or feature class, QueryDefs, QueryTables, query layers, and joins. The following shows the advantages of certain constructs in specific scenarios:
- A simple query involving a single registered table in a geodatabase: Table.Search or FeatureClass.Search is the best suited for this purpose.
- A one-off query with a join within a geodatabase to obtain the data: QueryDef is the best suited since it returns a row cursor with the data matching the query.
- Using the same query joining two or more tables within the geodatabase multiple times: QueryTable may be better suited, since there can be performance advantages of doing so, compared to executing the QueryDef evaluation multiple times.
- Adding the result of a query involving a single table within a geodatabase to the map or using the result in a geoprocessing tool: QueryTable is the best suited construct for this purpose. In addition, QueryTable would also work with versioned data (when using a descendant version) or archived data (when using a historical moment).
- Adding the result of a query joining two or more tables within a geodatabase to the map or using the result in a geoprocessing tool: In this case, either a QueryTable or a Join is suitable, since both constructs allow recursive joins (creating a join from a joined table or creating a QueryTable from a QueryTable), and they work with versioned and archived data.
- Executing a simple query or a join within a database (without a geodatabase schema): Query Layer is the best suited construct.
- Executing a join that spans multiple datastores (such as a file geodatabase and a SQL Server database): Join is the only construct that allows joining data from multiple datastores.
Versioning allows multiple users to edit spatial and tabular data simultaneously in a long transactional environment. Users can directly modify the database without having to extract data or lock features in advance.
The following are the five primary capabilities provided by the API:
- List all the versions in a geodatabase, including their properties
- Connect to a specific version
- Create a new version
- List the differences between tables and feature classes from different versions
- Reconcile and post
The VersionManager object can be accessed from the Geodatabase object. VersionManager provides access to the currently connected version; a list of historical versions; and a list of all public versions, protected versions, and private versions owned by the currently connected user. VersionManager is only accessible if the IsVersioningSupported method on the Geodatabase object returns true. The type of versioning in use by a particular geodatabase, branch or traditional, can be determined by calling VersionManager.GetVersioningType().
The VersionBase class represents either a Version or a HistoricalVersion in a geodatabase. VersionManager.GetCurrentVersionBaseType() returns whether the current geodatabase is connected to a Version or a HistoricalVersion.
The versions returned are ArcGIS.Core.Data.Version objects, which provide access to the following:
- Attributes such as name, access type, and description
- Parent version object
- List of child version objects
- Whether the current user is the owner of the version
To connect to a version represented by a Version object obtained from VersionManager, you can call the Connect method on the Version object, which returns the corresponding Geodatabase object so that datasets in that version of the geodatabase can be obtained.
Another method on the VersionManager class, GetVersionNames(), provides a list of version names rather than the versions themselves. If the goal is to display a list of applicable versions, this routine is fast and lightweight. The name can later be used to get the Version by calling VersionManager.GetVersion(string).
To create a new version, a VersionDescription should be created with the appropriate properties. This version description is passed to CreateVersion on the VersionManager class. This creates a new version that is a child of Default. If using traditional (not branch) versioning, an overload allows versions to be created that are children of other versions.
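A hedged sketch of creating a child of Default, assuming an open VersionManager; the name, description, and access type are illustrative, and the VersionDescription constructor arguments should be confirmed against the API reference:

```csharp
VersionDescription versionDescription = new VersionDescription("QA", "Version for QA review", VersionAccessType.Private);
using (Version qaVersion = versionManager.CreateVersion(versionDescription))
using (Geodatabase qaGeodatabase = qaVersion.Connect())
{
    // Open and edit datasets in the new version...
}
```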
To find differences, one workflow could be as follows:
- Get the Geodatabase object using one of the methods described in Geodatabase.
- Use VersionManager to find the other Version object of interest.
- Get the Geodatabase object for the other version using Connect().
- Open the corresponding Table/FeatureClasses from both Geodatabase objects.
- Get the DifferenceCursor using the Differences method.
The type of difference can be specified as a parameter to the Differences method using the ArcGIS.Core.Data.DifferenceType enum.
On feature service workspaces, only branch versioning with ArcGIS Enterprise 10.7 or higher is supported. The source table must be from a child branch and the target table must be from DEFAULT. The difference type DeleteNoChange returns deletions made in the child version, Insert returns inserts made in the child version, and UpdateNoChange returns all updates in the child version. No other difference types are supported.
using (Geodatabase geodatabase = new Geodatabase(new DatabaseConnectionFile(new Uri("path\\to\\sde\\file\\sdefile.sde"))))
using (VersionManager versionManager = geodatabase.GetVersionManager())
{
IReadOnlyList<Version> versionList = versionManager.GetVersions();
IReadOnlyList<Version> childVersionList = null;
try
{
IEnumerable<Version> publicVersions = versionList.Where(version => version.GetAccessType() == VersionAccessType.Public);
// The default version will have a null Parent.
Version defaultVersion = versionList.First(version => version.GetParent() == null);
childVersionList = defaultVersion.GetChildren();
Version qaVersion = childVersionList.First(version => version.GetName().Contains("QA"));
using (Geodatabase qaVersionGeodatabase = qaVersion.Connect())
{
using (FeatureClass currentFeatureClass = geodatabase.OpenDataset<FeatureClass>("featureClassName"))
using (FeatureClass qaFeatureClass = qaVersionGeodatabase.OpenDataset<FeatureClass>("featureClassName"))
using (DifferenceCursor differenceCursor = currentFeatureClass.Differences(qaFeatureClass, DifferenceType.DeleteNoChange))
{
while (differenceCursor.MoveNext())
{
// The current row value refers to the row on the source Table/FeatureClass.
// Thus, for DeleteNoChange differences, the row will be null.
using (Row current = differenceCursor.Current)
{
}
long objectID = differenceCursor.ObjectID;
// Do something with the object ID.
}
}
}
}
finally
{
if (versionList != null)
{
foreach (Version version in versionList)
{
version.Dispose();
}
}
if (childVersionList != null)
{
foreach (Version version in childVersionList)
{
version.Dispose();
}
}
}
}
Reconciling and posting of versions are accomplished by two different methods on the Version class: Reconcile and Post.
The Post method takes a PostOptions object as an argument. The post options object specifies a list of selected features for partial post, the service synchronization type (whether or not Post should run synchronously), and the target version (use null to specify the default version).
The Reconcile method takes a ReconcileOptions object as an argument. The reconcile options object specifies the conflict resolution method (abort or continue if conflicts are found), the conflict detection type (by row or by column), and the target version (use null to specify the default version).
To post a version immediately after the reconcile in the same edit session, the overload Reconcile(ReconcileOptions, PostOptions) should be used.
Note: Traditional versioning doesn't support a standalone post, Version.Post(PostOptions), so Version.Reconcile(ReconcileOptions, PostOptions) must be used for posting. Branch versioning does support the standalone Version.Post(PostOptions), but Version.Reconcile() must be called beforehand.
When using traditional versioning, you can also specify whether to resolve conflicts in favor of the target or edit version by setting the conflict resolution type.
The owner of a branch version can be changed by calling the Version.ChangeOwner routine.
When you reconcile, the version you are editing gets updated with changes from the target version. It doesn't merge changes into the target version. After finishing the reconciliation and reviewing conflicts, you complete the merging process by posting your changes to the target version.
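A hedged sketch of a reconcile followed by a post in the same operation, assuming version is the currently connected Version and the code runs on the MCT; the option property names follow the descriptions above and should be confirmed against the API reference:

```csharp
ReconcileOptions reconcileOptions = new ReconcileOptions()  // null target => default version
{
    ConflictResolutionMethod = ConflictResolutionMethod.Continue,
    ConflictDetectionType = ConflictDetectionType.ByRow
};
PostOptions postOptions = new PostOptions();                // null target => default version
ReconcileResult reconcileResult = version.Reconcile(reconcileOptions, postOptions);
if (reconcileResult.HasConflicts)
{
    // Review the conflicts before relying on the posted result.
}
```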
Partial posting allows developers to post a subset of changes made in a version. One sample use case is an electric utility that uses a version to design the facilities in a new housing subdivision. At some point in the process, one block of new houses has been completed, while the rest of the subdivision remains unbuilt. Partial posting allows the user to post the completed work while leaving not-yet-constructed features in the version to be posted later.
The PostOptions.PartialPostSelections property is used to specify which features should be posted. The property holds a list of ObjectID-based Selections. For updates and inserts, creating these selections is relatively straightforward. For example, you may use a ConstructedStatus field on your feature classes to indicate that a feature is ready to be posted:
QueryFilter constructedFilter = new QueryFilter()
{
WhereClause = "ConstructedStatus = 'True'"
};
Selection constructedSupportStructures = supportStructureFeatureClass.Select(constructedFilter, SelectionType.ObjectID, SelectionOption.Normal);
Specifying which feature deletions you wish to post is slightly trickier, since you cannot issue a query to fetch a set of deleted features. Instead, you must manually build a list of ObjectIDs (generating this list is up to the developer, of course, but one possible implementation would be to log feature deletes using an attribute rule). Once this list is constructed, you can build a Selection as follows:
Selection deletedSupportStructures = supportStructureFeatureClass.Select(null, SelectionType.ObjectID, SelectionOption.Empty);
deletedSupportStructures.Add(deletedSupportStructureObjectIDs);
In both cases, the partial post will fail if the selections contain features that have not been changed in the version. Table.Differences can be used to verify changed features, as described in the section above.
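A hedged sketch of wiring the selections from the snippets above into a partial post, assuming version is a connected branch version that has already been reconciled:

```csharp
PostOptions partialPostOptions = new PostOptions()
{
    PartialPostSelections = new List<Selection>() { constructedSupportStructures, deletedSupportStructures }
};
version.Post(partialPostOptions);
```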
Partial posting is supported with branch versioning on ArcGIS Enterprise 10.9 or later, and is not currently supported with parcel fabrics or topology.
The Utility Network supports partial posting since ArcGIS Enterprise 11.3 and Pro SDK 3.3.
During a reconcile operation, the changes made in the child version are compared with the target version into which you want to merge. If conflicts exist, the Version.GetConflicts() method returns a list of Conflict objects with the target version. The Conflict class contains information about the differences between the two versions by a row's ObjectID, including conflict types and before- and after-row values.
Currently, the Version.GetConflicts() method supports conflicts from a branch-versioned datastore.
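A hedged sketch, assuming version is a connected branch version that has just been reconciled:

```csharp
IReadOnlyList<Conflict> conflicts = version.GetConflicts();
foreach (Conflict conflict in conflicts)
{
    // Each Conflict identifies the row by ObjectID and exposes the conflict type
    // plus before- and after-row values for review.
}
```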
Archiving provides the functionality to record and access changes made to all or a subset of data in a geodatabase. Archiving is the mechanism for capturing, managing, and analyzing data change over time.
Organizations need to preserve the changes made to their data in order to answer common questions such as the following:
- What was the value for a specific attribute at a certain moment?
- How has a particular feature or row changed through time?
- How has a spatial area evolved over time?
Archiving assists you in answering these types of questions by preserving data changes. It is important to understand that archiving maintains change on a dataset from the moment archiving is enabled until archiving is disabled.
Enabling and disabling archiving on datasets is accomplished through Python or geoprocessing tools. Two routines in the geodatabase API provide access to archiving:
- Table.IsArchiveEnabled() returns true if archiving is enabled on the table.
- If so, Table.GetArchiveTable() can be used to return the actual archive table.
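A minimal sketch combining the two routines, assuming an open Geodatabase; the feature class name is illustrative:

```csharp
using (FeatureClass parcelsFeatureClass = geodatabase.OpenDataset<FeatureClass>("Parcels"))
{
    if (parcelsFeatureClass.IsArchiveEnabled())
    {
        using (Table archiveTable = parcelsFeatureClass.GetArchiveTable())
        {
            // The archive table records every historical state of each row.
        }
    }
}
```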
Although you can view individual archive tables as shown above, it is often easier to connect to a historical version using routines on the VersionManager class. A historical version provides a read-only representation of what all archived datasets in that geodatabase looked like at a specific time. Historical versions can be accessed with a date and time, or with a named historical marker. A historical marker is a particular date and time that has been assigned a name. For example, a historical marker called "Twin Pines Mall safety incident" may reference the date and time of 1:15 AM on October 26, 1985. Historical markers can be created using the CreateHistoricalVersion method.
HistoricalVersion historicalVersion = versionManager.CreateHistoricalVersion(new HistoricalVersionDescription("Historical marker snapshot at the moment", DateTime.Now));
The timestamp must be unique for each historical version. The historical versions are cached with the timestamp as a key.
The GetCurrentHistoricalVersion method returns the currently selected historical version, and GetHistoricalVersion obtains a historical version object from either a date and time or a historical marker name. To get a list of all of the historical markers on a geodatabase, the GetHistoricalVersions method can be used. Once you have obtained the HistoricalVersion object, the Connect method returns a geodatabase connection at that particular point in time.
using (Geodatabase geodatabase = new Geodatabase(new DatabaseConnectionFile(new Uri("path\\to\\sde\\file\\sdefile.sde"))))
using (VersionManager versionManager = geodatabase.GetVersionManager())
{
if (versionManager.GetCurrentVersionBaseType() == VersionBaseType.HistoricalVersion)
{
HistoricalVersion historicalVersion = versionManager.GetCurrentHistoricalVersion();
using (Geodatabase historicalGeodatabase = historicalVersion.Connect())
{
//
}
}
}
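A hedged sketch of connecting by marker name instead, assuming an open VersionManager and that the named marker exists:

```csharp
HistoricalVersion namedMarker = versionManager.GetHistoricalVersion("Twin Pines Mall safety incident");
using (Geodatabase snapshotGeodatabase = namedMarker.Connect())
{
    // Read-only view of the archived data at the marker's moment in time...
}
```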
The following operations can be performed on RelationshipClass:
- Create a relationship—To create a relationship, you need to pass in the references to the origin class row and the destination class row to the CreateRelationship method on the RelationshipClass object. Keep in mind that the Origin Class Primary Key value (and the Destination Class Primary Key value for attributed relationship classes) must be set for CreateRelationship to function as expected. See Creating a Relationship for an example. For attributed relationship classes, the attributes can be added by passing in a RowBuffer as the third argument (which is an optional parameter). The RowBuffer can be obtained by calling CreateRowBuffer on the AttributedRelationshipClass object.
- Delete a relationship—Similar to creating a relationship, when deleting a relationship, the references to the origin row and the destination row must be passed into the DeleteRelationship method on RelationshipClass. See Deleting a Relationship for an example.
- Find rows related by a given RelationshipClass—Given a list of ObjectIDs from the origin class or the destination class, GetRowsRelatedToOriginRows or GetRowsRelatedToDestinationRows, respectively, return the corresponding row objects that are related to the ObjectIDs passed in. See Getting Rows related by RelationshipClass for an example.
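A hedged sketch of the last operation, assuming relationshipClass is an open RelationshipClass and the ObjectIDs are illustrative:

```csharp
IReadOnlyList<Row> relatedDestinationRows = relationshipClass.GetRowsRelatedToOriginRows(new List<long>() { 10, 96, 12 });
foreach (Row relatedRow in relatedDestinationRows)
{
    using (relatedRow)
    {
        // Read attributes from the related destination row...
    }
}
```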
Attachments allow you to add files to individual features and can be images, PDFs, text documents, or any other type of file. For example, if you have a feature representing a building, you could use attachments to add multiple photographs of the building taken from several angles, along with PDF files containing the building's deed and tax information.
To add attachments, you first need to enable attachments on the feature class or table. When you enable attachments, a new table is created to contain the attachment files, and a new relationship class is created to relate the features to the attached files.
Since the ArcGIS.Core.Data API is primarily a DML API, one option for enabling attachments programmatically is to use the Geoprocessing API.
IReadOnlyList<string> parameters = Geoprocessing.MakeValueArray("path\\to\\sde\\connection.sde\\FeatureClassName");
IGPResult gpResult = await Geoprocessing.ExecuteToolAsync("management.EnableAttachments", parameters);
Determining whether a table or feature class has attachments enabled can be done using the IsAttachmentEnabled property on the Table and FeatureClass objects.
To add an attachment, a new attachment object can be created using the Attachment constructor. The Name, ContentType, and Data can be obtained and set using the methods on the new attachment. Once all the information has been set, the AddAttachment method on the Row can be called with the newly created Attachment object to add the attachment. An example of adding attachments can be found at Adding attachments.
To set the data on the attachment, a memory stream must be created from the file on disk. One way of accomplishing this is described in Obtaining a MemoryStream from a file.
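A minimal sketch of the add-attachment workflow, assuming a hypothetical file path and an already-fetched row (the Attachment constructor takes the name, content type, and a MemoryStream):

```csharp
// Read the file into a MemoryStream (file path is illustrative).
byte[] fileBytes = System.IO.File.ReadAllBytes(@"C:\Data\BuildingPhoto.jpg");
using (var memoryStream = new System.IO.MemoryStream(fileBytes))
{
  // Name, content type, and data for the new attachment.
  Attachment attachment = new Attachment("BuildingPhoto.jpg", "image/jpeg", memoryStream);

  // AddAttachment returns the ID of the newly added attachment.
  long attachmentId = row.AddAttachment(attachment);
}
```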
To get attachments, the GetAttachments method on the Row can be used. If there are no arguments passed in, all the attachments for that row are returned. If there is a list of attachment IDs passed in as the first argument, the attachments matching the IDs are returned. The second argument is a Boolean, which is used to specify whether or not the data of the attachment should be returned. For an example of accessing attachments, see Updating attachments.
After getting the attachments from the row or feature, the getters and setters on the attachments can be used to set the Name, Content Type, and Data. Then, the updated attachment objects can be used to call UpdateAttachment on the Row or Feature. For an example of accessing attachments, see Updating attachments.
To delete an attachment, you need to get the attachment from the row first, and use DeleteAttachments on the Row or Feature by passing in a list of attachment IDs to be deleted. The return value is a dictionary of attachment IDs to the exception thrown when the deletion of the corresponding attachment failed. If the dictionary is empty, the requested attachments were deleted successfully. If no arguments are passed to DeleteAttachments, all the attachments for that row are deleted. For an example of deleting attachments, see Deleting attachments.
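Taken together, fetching, updating, and deleting attachments might look like the following sketch (the row variable is assumed; see the linked snippets for complete examples):

```csharp
// Fetch all attachments for the row, including their data.
IReadOnlyList<Attachment> attachments = row.GetAttachments(null, true);
List<long> attachmentIds = new List<long>();
foreach (Attachment attachment in attachments)
{
  attachmentIds.Add(attachment.GetAttachmentID());

  // Rename the attachment and push the change back to the row.
  attachment.SetName("archived_" + attachment.GetName());
  row.UpdateAttachment(attachment);
}

// Delete by ID; an empty dictionary means every deletion succeeded.
IReadOnlyDictionary<long, Exception> failedDeletes = row.DeleteAttachments(attachmentIds);
```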
Contingent values are a data design feature that allows the values of one field to depend on the values of another field. A field with a domain forces an editor to select from a valid list of values, but with contingent values, you can go a step further to limit the list of values based on other fields. Several fields are linked together in a field group and combinations of contingent values are defined within the field group. When editing, instead of seeing the entire list of domain values, you only see the contingent values. Thus, contingent values help enforce data integrity by applying additional constraints to reduce the number of valid inputs.
Contingent values can be accessed from the Contingency class using the GetContingentValues() method, which returns a dictionary of ContingentValue objects keyed by field name within the FieldGroup. The ContingentValue is a base class that defines a possible value for a field participating in a field group.
A contingent value can be one of the following:
- ContingentCodedValue — A value from a coded value domain.
- ContingentRangeValue — Defines maximum and minimum range domain values.
- ContingentAnyValue — Indicates that any value in the field's domain is considered valid (as well as null).
- ContingentNullValue — Indicates that a null value is valid for the field.
A FieldGroup includes the names of the fields that support contingencies and their restrictions. The GetContingencies() method on the TableDefinition or FeatureClassDefinition class returns the list of Contingency objects applied to the dataset.
using (TableDefinition tableDefinition = table.GetDefinition())
{
IReadOnlyList<Contingency> contingencies = tableDefinition.GetContingencies();
foreach (Contingency contingency in contingencies)
{
// Field group
FieldGroup fieldGroup = contingency.FieldGroup;
string fieldGroupName = fieldGroup.Name;
IReadOnlyList<string> fieldsInFieldGroup = fieldGroup.FieldNames;
bool isEditRestriction = fieldGroup.IsRestrictive;
int contingencyId = contingency.ID;
Subtype subtype = contingency.Subtype;
bool isContingencyRetired = contingency.IsRetired;
// Contingent values
IReadOnlyDictionary<string, ContingentValue> contingentValuesByFieldName = contingency.GetContingentValues();
foreach (KeyValuePair<string, ContingentValue> contingentValueKeyValuePair in contingentValuesByFieldName)
{
string attributeFieldName = contingentValueKeyValuePair.Key;
// Contingent value type associated with the attribute field
ContingentValue contingentValue = contingentValueKeyValuePair.Value;
switch (contingentValue)
{
case ContingentCodedValue contingentCodedValue:
string codedValueDomainName = contingentCodedValue.Name;
object codedValueDomainValue = contingentCodedValue.CodedValue;
break;
case ContingentRangeValue contingentRangeValue:
object rangeDomainMaxValue = contingentRangeValue.Max;
object rangeDomainMinValue = contingentRangeValue.Min;
break;
case ContingentAnyValue contingentAnyValue:
// Any value type
break;
case ContingentNullValue contingentNullValue:
// Null value
break;
}
}
}
}
Additionally, the Table.GetContingentValues method retrieves a list of potential contingent values for a given RowBuffer and attribute field.
using (RowBuffer rowBuffer = parcels.CreateRowBuffer())
{
IReadOnlyDictionary<FieldGroup, IReadOnlyList<ContingentValue>> possibleZonings = parcels.GetContingentValues(rowBuffer, "Zone");
IEnumerable<FieldGroup> possibleFieldGroups = possibleZonings.Keys;
foreach (FieldGroup possibleFieldGroup in possibleFieldGroups)
{
IReadOnlyList<ContingentValue> possibleZoningValues = possibleZonings[possibleFieldGroup];
}
}
The Table.ValidateContingencies method validates a row against contingent values and returns a ContingencyValidationResult, which provides information about matched (valid) and violated (invalid) contingency constraints.
using (RowBuffer rowBuffer = parcels.CreateRowBuffer())
{
// Insert values in a row buffer
rowBuffer["Zone"] = "Business";
rowBuffer["TaxCode"] = "TaxB";
// Validate contingency values of the parcels' row
ContingencyValidationResult contingencyValidationResult = parcels.ValidateContingencies(rowBuffer);
// Valid contingencies
IReadOnlyList<Contingency> matchedContingencies = contingencyValidationResult.Matches;
if (matchedContingencies.Count > 0)
{
// Create a row with valid contingency values
parcels.CreateRow(rowBuffer);
}
// Invalid contingencies
IReadOnlyList<ContingencyViolation> violatedContingencies = contingencyValidationResult.Violations;
foreach (ContingencyViolation contingencyViolation in violatedContingencies)
{
ContingencyViolationType violationType = contingencyViolation.Type;
Contingency violatedContingency = contingencyViolation.Contingency;
}
}
A Contingency can be retired. When a contingency is retired, it remains in existing data but is no longer available for editing.
Attribute rules enhance the editing experience and improve data integrity for geodatabase datasets. They are user-defined rules that can be used to automatically populate attributes, restrict invalid edits during edit operations, and perform quality assurance checks on existing features. Geodatabases support calculation rules to automatically populate attributes, constraint rules to ensure quality data, and validation rules to review existing features.
To determine whether a geodatabase supports attribute rules, use the IsAttributeRuleSupported()
method on the Geodatabase
class.
Attribute rules are created as a property of feature classes or tables in the geodatabase. As such, the attribute rules for a dataset can be obtained using the GetAttributeRules method on a TableDefinition
. This returns a read only list of AttributeRuleDefinition objects. You can choose to obtain all the attribute rules for a dataset, or you can retrieve the attribute rules of a particular rule type (calculation, constraint or validation). Once you have the attribute rules you can interrogate their properties.
Attribute rules have a name and description obtained using the GetName and GetDescription methods. Use GetAttributeRuleType to determine the AttributeRuleType. The details of the rule are defined with the script expression which can be retrieved via GetScriptExpression and you can determine which events trigger the rule by examining the GetTriggeringEvents. Depending upon when the rule was authored it may be useful to determine the minimum Arcade version required to execute or evaluate the rule. Use GetMinimumArcadeVersion to obtain this information.
An error number, error message and severity can be defined for an attribute rule when it is created. The GetErrorNumber, GetErrorMessage and GetSeverity methods can be used to retrieve these. Status properties regarding evaluation can be retrieved using GetIsBatch, GetIsEnabled or GetExcludeFromClientEvaluation.
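As a sketch, interrogating the attribute rules of a dataset might look like the following; the AttributeRuleType enum value and the table variable are assumptions based on the method names described above:

```csharp
using (TableDefinition tableDefinition = table.GetDefinition())
{
  // Retrieve only the constraint rules; an overload without a rule type returns all rules.
  IReadOnlyList<AttributeRuleDefinition> constraintRules =
    tableDefinition.GetAttributeRules(AttributeRuleType.Constraint);

  foreach (AttributeRuleDefinition rule in constraintRules)
  {
    string name = rule.GetName();
    string description = rule.GetDescription();
    string arcadeExpression = rule.GetScriptExpression();
    string errorMessage = rule.GetErrorMessage();
    bool isEnabled = rule.GetIsEnabled();
  }
}
```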
The Pro SDK also provides routines to execute validation rules and batch calculation rules. The classes for executing these attribute rules are shown in the diagram below. More details are included in the document sections that follow.
The central class for dealing with attribute rules in the Pro SDK is the AttributeRuleManager
class. This object can be obtained by calling the GetAttributeRuleManager()
method on the Geodatabase
class. The IsAttributeRuleSupported()
method described above should be first checked to ensure that the geodatabase supports attribute rules.
There are several different types of attribute rules. Immediate calculation and constraint rules are supported on multiple geodatabase types, and fire automatically when creating, updating, or deleting rows.
Some types of geodatabases support batch calculation and validation rules. These rules are executed by calling the EvaluateInEditOperation()
extension method on the AttributeRuleManager
class. To determine if a database supports evaluation, the IsEvaluationSupported()
method should be called before proceeding.
The EvaluateInEditOperation()
method is used to execute attribute rules. The parameters passed in to a call to EvaluateInEditOperation()
are specified by the AttributeRuleEvaluationDescription
class. Supported combinations are enforced through the use of a limited number of constructors.
Property | Description |
---|---|
AttributeRuleType | The AttributeRuleType parameter can be used to run batch calculation rules, validation rules, or both. |
Extent | The extent parameter is used to control which features are evaluated. If missing, the entire extent is evaluated. |
Selections | A list of one or more selections, specifying the features to evaluate. The default value is an empty list. |
VersionEvaluationScope | The VersionEvaluationScope parameter specifies whether to evaluate all of the rows in the version, or only those rows that have been changed within that version. |
The EvaluateInEditOperation()
method returns an AttributeRuleEvaluationResult
object, which is used to determine the number of errors found.
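Putting the pieces above together, an evaluation call might be sketched as follows. The constructor arguments shown and the extent variable are illustrative, not an exhaustive list of the supported combinations:

```csharp
if (geodatabase.IsAttributeRuleSupported())
{
  AttributeRuleManager attributeRuleManager = geodatabase.GetAttributeRuleManager();
  if (attributeRuleManager.IsEvaluationSupported())
  {
    // Evaluate validation rules over a given extent (extent variable assumed).
    var evaluationDescription =
      new AttributeRuleEvaluationDescription(AttributeRuleType.Validation, extent);

    AttributeRuleEvaluationResult evaluationResult =
      attributeRuleManager.EvaluateInEditOperation(evaluationDescription);
  }
}
```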
When called on a feature service, the EvaluateInEditOperation()
method makes a call to a REST endpoint, which runs synchronously. This routine has low processing overhead. Unfortunately, it is also susceptible to server timeouts if processing takes too long. To work around this, the Evaluate REST endpoint provides an asynchronous version of this routine. Processing is offloaded from the Validation Server to the Geoprocessing Server. While this does cause a performance hit, the Geoprocessing Server is designed to handle long-running jobs and is not susceptible to timeouts. With this service, ArcGIS Pro will poll the service until the operation completes. Because many evaluation calls run against a large number of features, the asynchronous version is the default.
The ServiceSynchronizationType
parameter on AttributeRuleEvaluationDescription
allows control over which version of the service is called. This uses the ServiceSynchronizationType
enum, which has two values: Synchronous
and Asynchronous
. It's important to note that from a C# developer's perspective, this routine is still a synchronous C# method, regardless of this parameter. Like other methods in the API, it blocks the current thread until the operation completes.
If writing a CoreHost application, use the Evaluate() base method on the AttributeRuleManager class. This routine provides its own transaction management; therefore, it cannot be issued inside an edit operation created by Geodatabase.ApplyEdits().
If attribute rule validation fails, errors are written into a set of error tables. Likewise, if a problem occurs with running a batch calculation rule, an error is also written to these tables. There are four error tables that are used, based on the geometry of the source feature class or table. These error tables are visible in the Pro user interface, and can also be accessed in the Pro SDK with the GetErrorTable()
method on the AttributeRuleManager
. Once obtained, these tables can be read and queried like any other geodatabase table. Each row in these tables represents an error.
The CreateAttributeRuleError()
routine takes a row from one of the error tables and uses it to generate an AttributeRuleError
object. This object provides a more user-friendly view of the table. One key piece of functionality is that this object can be used to fetch the features that caused the error to be created. The OriginClass
property returns the table or feature class name, which can be used to open the table using Geodatabase.OpenDataset()
. The OriginGlobalID
or OriginObjectID
properties on the AttributeRuleError
class can be used to fetch the specific row that caused the error.
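A sketch of walking an error table and resolving the origin of each error; the attributeRuleManager variable is assumed to have been obtained as described above, and any GetErrorTable() parameterization (there are four tables) is elided:

```csharp
using (Table errorTable = attributeRuleManager.GetErrorTable())
using (RowCursor errorCursor = errorTable.Search(null, false))
{
  while (errorCursor.MoveNext())
  {
    using (Row errorRow = errorCursor.Current)
    {
      // Build a friendlier view of the error row.
      AttributeRuleError attributeRuleError =
        attributeRuleManager.CreateAttributeRuleError(errorRow);

      // Open the origin class and note the row that caused the error.
      using (Table originTable =
        geodatabase.OpenDataset<Table>(attributeRuleError.OriginClass))
      {
        long offendingObjectId = attributeRuleError.OriginObjectID;
      }
    }
  }
}
```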
The attribute rule error tables are, for the most part, read-only. New rows can only be created by running EvaluateInEditOperation()
on features that fail validation. Rows can only be deleted by fixing the underlying problem that caused the error and re-running EvaluateInEditOperation()
. The only update that can be made to these tables is to mark an error as an exception. This is used to indicate to the system that the error represents an exceptional case, and shouldn’t be treated as an error. Errors can be marked as exceptions by setting the IsException
property on the AttributeRuleError
class and then passing a list of these to the UpdateErrorsInEditOperation()
extension method on the AttributeRuleManager
.
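Marking an error as an exception might then be sketched as follows, assuming an AttributeRuleError has already been created from an error table row via CreateAttributeRuleError():

```csharp
// Flag the error as an exceptional case rather than a true error.
attributeRuleError.IsException = true;

// Persist the change via the extension method on AttributeRuleManager.
attributeRuleManager.UpdateErrorsInEditOperation(
  new List<AttributeRuleError> { attributeRuleError });
```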
If writing a CoreHost application, use the UpdateErrors() base method on the AttributeRuleManager class. This routine provides its own transaction management; therefore, it cannot be issued inside an edit operation created by Geodatabase.ApplyEdits().
The ArcGIS.Core.Data API provides support for performing multiple edits as a single transaction (also known as long transactions). The following are the two modes under which editing in a transaction can be accomplished:
- Editing in addin mode
- Editing in standalone mode
This is the normal mode of performing edits in a transaction while authoring add-ins for ArcGIS Pro. The EditOperation class provides the support for editing in a transaction in ArcGIS Pro. For more details, see Advanced Editing in the ArcGIS Pro ProConcepts Editing documentation.
When the ArcGIS.Core API is used outside ArcGIS Pro (for example, in a console application) using the ArcGIS.CoreHost API support, it is referred to as stand-alone mode. In this mode, when performing edits in a transaction, the ArcGIS.Core.Data.Geodatabase.ApplyEdits method should be used.
geodatabase.ApplyEdits(() =>
{
using (RowBuffer rowBuffer = citiesTable.CreateRowBuffer())
{
rowBuffer["State"] = "Colorado";
rowBuffer["Name"] = "Fort Collins";
rowBuffer["Population"] = 167830;
citiesTable.CreateRow(rowBuffer).Dispose();
rowBuffer["Name"] = "Denver";
rowBuffer["Population"] = 727211;
citiesTable.CreateRow(rowBuffer).Dispose();
// etc.
}
});
As shown in the above example, the ApplyEdits
method takes an Action delegate, which encapsulates all the edits that must be performed in a transaction. The transaction is committed only if no exceptions are thrown during the execution of the Action delegate. If an unhandled exception is thrown during execution, all the edits made up to that point are rolled back, and the transaction is aborted.
Enterprise geodatabases automatically default to using versioned edit sessions. If you wish to edit non-versioned data, you should use the ApplyEdits
overload that takes an additional Boolean parameter to indicate that a non-versioned edit session should be used.
// The non-versioned edit of States feature class that participates in a composite relationship
geodatabase.ApplyEdits(() =>
{
using (Table table = geodatabase.OpenDataset<FeatureClass>("States"))
{
using (RowCursor cursor = table.Search(queryFilter, false))
{
if (cursor.MoveNext())
{
using (Feature feature = (Feature)cursor.Current)
{
feature["Name"] = "CA";
feature.Store();
}
}
}
}
}, false);
Mixing edits against both versioned and non-versioned data in a single edit session is not recommended.
The following are the differences between how EditOperation.ExecuteAsync and Geodatabase.ApplyEdits work:
- The ApplyEdits method can only be used to make edits within a single geodatabase, while the ExecuteAsync on the EditOperation allows interleaving edits to multiple geodatabases.
- Unlike ExecuteAsync, ApplyEdits does not support Undo/Redo functionality.
- While ExecuteAsync requires a SaveEdits or DiscardEdits call to close the edit session, ApplyEdits implicitly closes both the edit session and edit operation.
- ApplyEdits does not support updating the map.
It should be noted that ApplyEdits is intended for editing outside ArcGIS Pro. For all editing in an add-in, EditOperation should be used.
If a stand-alone CoreHost application is inserting a large number of rows, use the InsertCursor class. This is guaranteed to perform at least as well as the technique shown above. Depending on the kind of feature class or table being used, it has the potential to be many times faster. Some examples of tables that do not perform faster are annotation and dimension feature classes, tables that participate in composite relationships, and tables with attachments.
geodatabase.ApplyEdits(() =>
{
using (InsertCursor insertCursor = citiesTable.CreateInsertCursor())
using (RowBuffer rowBuffer = citiesTable.CreateRowBuffer())
{
rowBuffer["State"] = "Colorado";
rowBuffer["Name"] = "Fort Collins";
rowBuffer["Population"] = 167830;
insertCursor.Insert(rowBuffer);
rowBuffer["Name"] = "Denver";
rowBuffer["Population"] = 727211;
insertCursor.Insert(rowBuffer);
// Insert more rows here
// A more realistic example would be reading source data from a file
insertCursor.Flush();
}
});
The UpdateCursor class is designed to update rows quickly. It is guaranteed to deliver performance at least as good as making repeated calls to the Row.Store() method. In many cases, its performance may be significantly better.
using (UpdateCursor updateCursor = citiesTable.CreateUpdateCursor(queryFilter, false))
{
if (updateCursor.MoveNext())
{
using (Row row = updateCursor.Current)
{
int stateIndex = row.FindField("State");
row[stateIndex] = "Colorado";
updateCursor.Update(row);
}
}
}
Update cursors should be used within an edit operation callback when used in an ArcGIS Pro add-in. For stand-alone CoreHost applications, update cursor usage should be enclosed within a call to Geodatabase.ApplyEdits
.
If a database supports a native programming language such as SQL, statements may be executed using the static ExecuteStatement
method on the DatabaseClient
class. The statement can be any DDL (data definition language) or DML (data manipulation language) statement but cannot return any result sets. The syntax for the SQL is as required by the underlying database.
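For example, a raw SQL statement might be issued as follows; the exact ExecuteStatement overload and the table/column names here are assumptions, and the SQL syntax must match the underlying DBMS:

```csharp
// Execute a DML statement against the underlying database.
// No result sets can be returned from this call.
DatabaseClient.ExecuteStatement(geodatabase,
  "UPDATE CITIES SET POPULATION = 0 WHERE POPULATION IS NULL");
```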
As explained previously, all methods in the ArcGIS.Core.Data API must be invoked on the Main CIM Thread (MCT). However, some exclusions exist. ArcGIS.Core.Data.SpatialQueryFilter and ArcGIS.Core.Data.QueryFilter can be constructed on any thread; however, Search and Select using the QueryFilter still have to be called on the MCT.
When in doubt, refer to the triple-slash documentation (the API reference) to determine whether a method can be invoked on any thread.
If you have previously developed code in ArcObjects that depends on the unique instancing characteristics of the Geodatabase API, you can use the Handle property on ArcGIS.Core.Data objects to implement the equivalent logic in the Pro SDK. The Handle value is suitable for use as a dictionary key and for equality checks equivalent to comparing pointers in ArcObjects.
As previously mentioned in the opening Architecture section, remember to explicitly Dispose unmanaged resources. For example, avoid code like the following:
if (row.GetTable().Handle == _handle)
{
//etc.
}
when evaluating an object's handle. This code instantiates a managed Core.Data table instance whose reference to unmanaged resources will not be released until it is garbage collected. This could have additional negative consequences if the code were called in a loop, creating millions of table instances that were never disposed and had to be garbage collected. Instead, an explicit reference to the table should be acquired each time and disposed after its Handle property has been evaluated.
using (Table table = row.GetTable())
{
if (table.Handle == _handle)
{
// etc.
}
}
Although the Row
and Feature
classes have a Handle
property, these values cannot be used for equivalency checks. This is because different Row
objects referring to the same underlying database row can be created using different RowCursor
objects, and a recycling RowCursor
object will use the same Row
to refer to multiple underlying database rows.
There are times when you will wish to use the Geoprocessing API with geodatabase objects. For example, if you wish to perform a DDL (data definition language) operation with a datastore or dataset that is not supported by the DDL API, you will need to use Geoprocessing. The Datastore.GetPath()
and Dataset.GetPath()
routines can be used to provide a path to these objects.
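A sketch of combining GetPath() with a geoprocessing call; the tool, field, and index names here are illustrative:

```csharp
// GetPath() returns a Uri; its local path can be fed to a geoprocessing tool.
Uri datasetUri = featureClass.GetPath();

IReadOnlyList<string> parameters =
  Geoprocessing.MakeValueArray(datasetUri.LocalPath, "Name", "NameIdx");

IGPResult gpResult =
  await Geoprocessing.ExecuteToolAsync("management.AddIndex", parameters);
```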