ProConcepts Scene Layers
Scene layers are a special type of service layer designed primarily for use with 3D content. The different types of scene layers, their purpose, and their usage relevant to the API are discussed.
Language: C#
Subject: Scene Layers
Contributor: ArcGIS Pro SDK Team <[email protected]>
Organization: Esri, http://www.esri.com
Date: 11/24/2020
ArcGIS Pro: 2.7
Visual Studio: 2017, 2019
In April 2015, Esri released the Indexed 3D Scene Layers (I3S) specification under Creative Commons licensing as an open specification, which has since been adopted by the OGC as the OGC Indexed 3D Scene Layers community standard. The goal of I3S is to provide a mechanism for cross-platform distribution of 3D content for visualization and analysis that satisfies field, web, and enterprise use cases. 3D content, by its very nature, is large, heterogeneous, and distributed. The I3S specification was therefore designed to provide rapid streaming of, and access to, large caches of heterogeneous and distributed 3D content. Within I3S, an individual I3S data set is referred to as a scene layer. A scene layer provides access to geometry, attributes, and optional textures for display and analysis across the ArcGIS platform.
I3S allows for the definition of multiple layer types that can be used to represent different categories of data. Currently, the I3S specification describes scene layer types for 3D objects, points, point clouds, building scene layers, and integrated meshes. From the I3S specification:
The following layer types have been specified and the standard validated via implementation and production deployments:
- 3D Objects (e.g. building exteriors, from GIS data as well as 3D models in various formats)
- Integrated Mesh (e.g. an integrated surface representing the skin of the earth, from satellite, aerial or drone imagery via dense matching photogrammetric software)
- Point (e.g. hospitals or schools, trees, street furniture, signs, from GIS data)
- Point cloud (e.g. large point data from LiDAR)
- Building Scene Layer (e.g. comprehensive building model including building components)
The complete I3S format specification can be found at https://github.com/Esri/i3s-spec.
Currently, within the public API, there are five available scene layer model objects that implement the underlying I3S scene layer types and their associated profile(s). Note that the I3S Building Scene Layer format implementation requires both a BuildingSceneLayer and a BuildingDisciplineSceneLayer model object (or "class") and that the I3S Point and 3D Object layer types are both implemented by the FeatureSceneLayer model.
Scene layer model | I3S layer type | Source content |
---|---|---|
BuildingSceneLayer | Building Scene Layer | Building information - Autodesk Revit data |
BuildingDisciplineSceneLayer | Building Scene Layer | BIM - individual Revit construction disciplines |
FeatureSceneLayer | Point | Point geometries w/ attributes |
FeatureSceneLayer | 3D Object | Multipatch geometries w/ attributes |
IntegratedMeshSceneLayer | Integrated Mesh | Integrated mesh, often derived from imagery using photogrammetric software |
PointCloudSceneLayer | Point cloud | Lidar or LAS datasets |
Given the broad and diverse nature of the I3S specification and its support for a variety of heterogeneous 3D data types, the Pro scene layer model objects derive from a variety of different base classes in the API. However, they are all layers, even though their individual characteristics and capabilities vary greatly depending on the underlying I3S layer type upon which they are based. Given the heterogeneous nature of scene layers and the single-inheritance model of .NET, the Pro API uses an interface, ISceneLayerInfo, rather than a common base class to identify the methods and properties common across all scene layer model types.
ISceneLayerInfo.SceneServiceLayerType returns an enum that can be used to identify the scene layer model type (without having to cast back to the underlying scene layer class) and ISceneLayerInfo.GetDataSourceType() returns another enum that identifies whether the scene layer is sourced from an I3S service or sourced directly from a local .slpk. This can have ramifications for the capabilities available to the various models. For example, editing, search, and selection via the API are not available to FeatureSceneLayers of SceneLayerDataSourceType.SLPK. Consult the detailed sections that follow covering each of the scene layer model types for more information.
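For example, a minimal sketch (the iteration and variable names are illustrative) that uses ISceneLayerInfo to identify slpk-sourced feature scene layers in the active map:

```c#
//Must be called within QueuedTask.Run()
foreach (var layer in MapView.Active.Map.GetLayersAsFlattenedList())
{
  if (layer is ISceneLayerInfo sceneInfo)
  {
    //Identify the scene layer model type without casting back
    //to the underlying scene layer class
    var layerType = sceneInfo.SceneServiceLayerType;
    System.Diagnostics.Debug.WriteLine($"{layer.Name}: {layerType}");
    //Sourced from an I3S service or direct-sourced from a local .slpk?
    if (layer is FeatureSceneLayer &&
        sceneInfo.GetDataSourceType() == SceneLayerDataSourceType.SLPK)
    {
      //Search, Select, and editing are not available on this layer
    }
  }
}
```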
I3S scene services can be created by publishing* a layer containing the relevant 3D data in one of three ways:
- The data is copied to the server ("by value")
- The data is registered with the appropriate (federated) server ("by ref")
- The data is shared via a scene layer package (.slpk). To share an slpk, add it as a content item to portal or online. Slpks can also be directly consumed from local storage.
Scene layers support both geographic and projected coordinate systems. Scene layers can also include multiple levels of detail (LoD) that apply to the whole layer and can be used to generalize or thin the layer at different scales.
*The minimum platform requirements for publishing web scene layers can be found at introduction-to-sharing-web-layers. The minimum platform requirements for publishing .slpks can be found at scene-layer-package
Layer creation for scene layers follows the same pattern(s) used for other service layers. Namely, specify the Uri to the scene service or local .slpk file as the Uri parameter to LayerFactory. If, in your code, you have a reference to an item in Catalog that likewise points to a scene service or .slpk, then use the item in place of the Uri. In the following example, a Uri to a point cloud is passed to LayerFactory CreateLayer<T>. The layer will be created with its visibility set to false.
```c#
//Use a service url
var pointcloud_url =
  @"https://server_url/server/rest/services/Hosted/Point_Cloud_Service/SceneServer";
//or use an item Url - note: the portal specified ~must be the active portal~ or you
//will get an exception
//var pointcloud_url =
//  @"https://my_portal/home/item.html?id=0123456789abcde123456789abcdef0";
//Use an slpk url
var fsc_multipatch_slpk = @"C:\Data\SceneLayers\Local_Buildings.slpk";
//Feature scene service url
var fsc_multipatch_url =
  @"https://server_url/server/rest/services/Hosted/Multipatch_Buildings/SceneServer";

if (MapView.Active.Map.MapType != MapType.Scene)
  return;//we add layers to scenes only

var urls = new List<string>() { pointcloud_url, fsc_multipatch_slpk, fsc_multipatch_url };
var info = await QueuedTask.Run(() => {
  var str = "";
  foreach (var url in urls) {
    var createparams = new LayerCreationParams(new Uri(url, UriKind.Absolute)) {
      IsVisible = false
    };
    var sceneLayer = LayerFactory.Instance.CreateLayer<Layer>(
      createparams, MapView.Active.Map);
    //or use... LayerFactory.Instance.CreateLayer(
    //  new Uri(url, UriKind.Absolute), MapView.Active.Map);
    str += GetSceneLayerInfo(sceneLayer);
  }
  return str;
});
System.Diagnostics.Debug.WriteLine(info);
System.Windows.MessageBox.Show(info);

...

///<summary>Format general information about scene layers</summary>
private string GetSceneLayerInfo(Layer layer, int level = 0) {
  string indent = " ".PadRight(level + 1);
  StringBuilder sb = new StringBuilder();
  sb.AppendLine($"{indent}Name: {layer.Name}");
  sb.AppendLine($"{indent}Layer Type: {layer.GetType().ToString()}");
  //CIM Definition type
  var def = layer.GetDefinition();
  sb.AppendLine($"{indent}CIM Definition: {def.GetType().ToString()}");
  var sceneInfo = layer as ISceneLayerInfo;
  sb.AppendLine(
    $"{indent}SceneServiceLayerType: {sceneInfo.SceneServiceLayerType.ToString()}");
  sb.AppendLine(
    $"{indent}SceneLayerDataSourceType: {sceneInfo.GetDataSourceType().ToString()}");
  if (layer is FeatureSceneLayer) {
    var fsc = layer as FeatureSceneLayer;
    var hasFeatureService = fsc.HasAssociatedFeatureService.ToString();
    sb.AppendLine($"{indent}HasAssociatedFeatureService: {hasFeatureService}");
    sb.AppendLine($"{indent}ShapeType: {fsc.ShapeType.ToString()}");
  }
  else if (layer is BuildingSceneLayer || layer is BuildingDisciplineSceneLayer) {
    sb.AppendLine($"{indent}Child count: {((CompositeLayer)layer).Layers.Count()}");
    foreach (var child in ((CompositeLayer)layer).Layers) {
      sb.AppendLine(GetSceneLayerInfo(child, level + 1));
    }
  }
  sb.AppendLine("");
  return sb.ToString();
}
```
Assuming similar service types in practice, output will be similar to:
```
Name: Point_Cloud_Service
Layer Type: ArcGIS.Desktop.Mapping.PointCloudSceneLayer
CIM Definition: ArcGIS.Core.CIM.CIMPointCloudLayer
SceneServiceLayerType: PointCloud
SceneLayerDataSourceType: Service

Name: Local_Buildings
Layer Type: ArcGIS.Desktop.Mapping.FeatureSceneLayer
CIM Definition: ArcGIS.Core.CIM.CIMSceneServiceLayer
SceneServiceLayerType: Object3D
SceneLayerDataSourceType: SLPK
HasAssociatedFeatureService: False
ShapeType: esriGeometryMultiPatch

Name: Multipatch_Buildings
Layer Type: ArcGIS.Desktop.Mapping.FeatureSceneLayer
CIM Definition: ArcGIS.Core.CIM.CIMSceneServiceLayer
SceneServiceLayerType: Object3D
SceneLayerDataSourceType: Service
HasAssociatedFeatureService: True
ShapeType: esriGeometryMultiPatch
```
In the next case, a Uri to a Building scene service is passed to LayerFactory:
```c#
//Building scene url but could also use .slpk
var bldg_scene_url =
  @"https://server_url/server/rest/services/Hosted/Full_Building_Combined/SceneServer";
var info = await QueuedTask.Run(() => {
  var sceneLayer = LayerFactory.Instance.CreateLayer(
    new Uri(bldg_scene_url, UriKind.Absolute), MapView.Active.Map);
  return GetSceneLayerInfo(sceneLayer);
});
//Display the "info" string...
...
```
Assuming a similar Building scene service in practice, output will be similar to:
```
Name: Full_Building_Combined
Layer Type: ArcGIS.Desktop.Mapping.BuildingSceneLayer
CIM Definition: ArcGIS.Core.CIM.CIMBuildingSceneLayer
SceneServiceLayerType: Building
SceneLayerDataSourceType: Service
Child count: 2

  Name: Overview
  Layer Type: ArcGIS.Desktop.Mapping.FeatureSceneLayer
  CIM Definition: ArcGIS.Core.CIM.CIMSceneServiceLayer
  SceneServiceLayerType: Object3D
  SceneLayerDataSourceType: Service
  HasAssociatedFeatureService: False
  ShapeType: esriGeometryMultiPatch

  Name: Full Model
  Layer Type: ArcGIS.Desktop.Mapping.BuildingDisciplineSceneLayer
  CIM Definition: ArcGIS.Core.CIM.CIMBuildingDisciplineSceneLayer
  SceneServiceLayerType: BuildingDiscipline
  SceneLayerDataSourceType: Unknown
  Child count: 2

    Name: Architectural
    Layer Type: ArcGIS.Desktop.Mapping.BuildingDisciplineSceneLayer
    ...
    Child count: 8

      Name: CurtainWallMullions
      Layer Type: ArcGIS.Desktop.Mapping.FeatureSceneLayer
      ...

      Name: CurtainWallPanels
      Layer Type: ArcGIS.Desktop.Mapping.FeatureSceneLayer
      ...

    Name: Structural
    Layer Type: ArcGIS.Desktop.Mapping.BuildingDisciplineSceneLayer
    ...
```
BuildingSceneLayers represent the 3D model aspect of Building Information Modeling (BIM) and are derived from BIM data extracted from Autodesk Revit data files. BIM processes typically produce 3D virtual representations of real-world assets that are commonly used in building design, construction, documentation, and evaluation. Autodesk Revit software is commonly used for BIM modeling and provides different disciplines for architectural design, MEP (mechanical, electrical, plumbing), structural engineering, and construction processes.
The BuildingSceneLayer can be considered as the entire building (as modeled in the source Revit file) whereas the BuildingDisciplineSceneLayers are sublayers (of the BuildingSceneLayer) representing each of the individual disciplines (architectural, structural, MEP, etc.) contained within the building. The layer-sublayer hierarchy of a typical BuildingSceneLayer follows the pattern shown in the output listing above.
Notice the two sublayers "Overview" and "Full Model" contained within the BuildingSceneLayer. For display performance optimization, the BuildingSceneLayer contains a FeatureSceneLayer sublayer called "Overview" which renders the building exterior. The individual disciplines that provide the details of the building interior are contained within a BuildingDisciplineSceneLayer sublayer called "Full Model". When the "Full Model" layer is expanded in the TOC, it reveals child BuildingDisciplineSceneLayers for each of the disciplines that were extracted from the source Revit data file. The display of the Full Model disciplines can be further refined via the use of filters. Filters are the topic of the next section.
Filters
Building scene layers allow you to isolate different aspects of their category layers through the concept of filters. A filter queries the underlying information to filter out all but the specified characteristics of the interior design (e.g. particular floors, rooms, and other structures across all category layers). Filter queries can use, as a filter type, any field that is defined for all category layers.
Using filters allows different aspects of a building's composition, properties, or location of structures to be displayed in a Pro scene view. A BuildingSceneLayer can have many filters defined, but only one filter is active; it can be accessed via BuildingSceneLayer.GetActiveFilter(). Each filter consists of up to two filter definition blocks. A block can either be a solid filter block or a wireframe filter block. The type of filter block dictates whether the features selected will be drawn as solids or as wireframes. The filter block type can be accessed via the FilterBlockDefinition.FilterBlockMode property, which returns an enum of Object3DRenderingMode.None for solid or Object3DRenderingMode.Wireframe for wireframe. The active filter can be set by constructing a filter definition with the desired blocks, adding it to the collection of filters for the given BuildingSceneLayer, and applying it via BuildingSceneLayer.SetActiveFilter(filterID). This is illustrated in the following example:
```c#
//Retrieve the available floors and categories from the filter
//Categories are defined by the i3s spec as building sublayers
//https://github.com/Esri/i3s-spec/blob/master/docs/1.6/subLayerModelName.md
//Get the types on the layer
//var bsl = ...;//However we get the building scene layer
await QueuedTask.Run(() => {
  //Retrieve the complete set of types and values for the building scene
  var dict = bsl.QueryAvailableFieldsAndValues();
  //available floors
  var floors = dict.SingleOrDefault(kvp => kvp.Key == "BldgLevel").Value;
  //available building entities or "categories"
  var categories = dict.SingleOrDefault(kvp => kvp.Key == "Category").Value;
  //we will use the first floor listed which may or may not be the bottom floor
  var filtervals = new Dictionary<string, List<string>>();
  filtervals.Add("BldgLevel", new List<string>() { floors[0] });
  //Filter with stairs and walls also, if present
  var category_vals = categories.Where(v => v == "Walls" || v == "Stairs").ToList();
  if (category_vals.Count > 0) {
    filtervals.Add("Category", category_vals);
  }
  //Create a solid block (other option is "Wireframe")
  var fdef = new FilterBlockDefinition() {
    FilterBlockMode = Object3DRenderingMode.None,
    Title = "Solid Filter",
    SelectedValues = filtervals//Floor and Category
  };
  //Construct a filter definition to hold the block
  var fd = new FilterDefinition() {
    Name = "Floor and Category Filter"
  };
  //Apply the block
  fd.FilterBlockDefinitions = new List<FilterBlockDefinition>() { fdef };
  //Add the filter definition to the layer
  bsl.SetFilter(fd);
  //Set it active. The ID is auto-generated
  bsl.SetActiveFilter(fd.ID);
});
```
The given collection of filters for a BuildingSceneLayer can be accessed and manipulated through Get, Set, and Remove filter methods. Each filter definition, within the collection of filters for the layer, has its own unique filter ID. The filter ID is assigned when the filter is first created. Getting, setting, and removing filters on a BuildingSceneLayer via the API are illustrated below:
```c#
//These methods must be called on QueuedTask...
//Assuming we have a building scene layer...
var bsl = ...;
//Get all the filters
var filters = bsl.GetFilters();
//Get all the filters that contain wireframe blocks (can be empty in this example)
var wire_frame_filters = bsl.GetFilters().Where(
  f => f.FilterBlockDefinitions.Any(
    fb => fb.FilterBlockMode == Object3DRenderingMode.Wireframe));
//Get the active filter
var activeFilter = bsl.GetActiveFilter();
if (activeFilter != null) {
  //TODO something with the active filter
  //Clear the active filter
  bsl.ClearActiveFilter();
}
//Get a specific filter
var id = ...;
var filter = bsl.GetFilter(id);
//or via Linq
//var filter = bsl.GetFilters().FirstOrDefault(f => f.ID == id);
//Remove a specific filter
if (bsl.HasFilter(id))
  bsl.RemoveFilter(id);
//Clear all filters
bsl.RemoveAllFilters();
```
FeatureSceneLayers currently support the I3S Point and 3D Object layer types. Point scene layers must be based on point data whereas 3D Object scene layers must be based on multipatch geometries. Point and 3D Object scene layers can only be added to the 3D category of a scene. The derived property FeatureSceneLayer.ShapeType can be tested to determine whether a FeatureSceneLayer contains 3D points, esriGeometryType.esriGeometryPoint, or 3D objects (multipatch), esriGeometryType.esriGeometryMultiPatch.
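A minimal sketch of testing ShapeType, assuming the active map contains at least one FeatureSceneLayer:

```c#
//Must be called within QueuedTask.Run()
var fsl = MapView.Active.Map.GetLayersAsFlattenedList()
            .OfType<FeatureSceneLayer>().FirstOrDefault();
if (fsl != null)
{
  if (fsl.ShapeType == esriGeometryType.esriGeometryPoint)
  {
    //I3S Point layer type - point features
  }
  else if (fsl.ShapeType == esriGeometryType.esriGeometryMultiPatch)
  {
    //I3S 3D Object layer type - multipatch features
  }
}
```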
Point scene layers are often used to visualize large amounts of 3D data like trees and points of interest, and contain both the point feature geometry and its attributes. Basically, any feature that can be represented by a point can be used in a point scene layer. Point scene layers are optimized for the display of large amounts of 3D point data in ArcGIS Pro, web scenes, and runtime clients. Point scene layers are automatically thinned to improve performance and visibility at greater distances. Automatic thinning means that, as the camera position moves closer or further away, correspondingly more or fewer features are drawn. All the features are drawn at the highest level of detail and the fewest features are drawn at the smallest level of detail. Levels of detail, or LODs, define which features are drawn at which distance between camera and features and are built in to the I3S specification (level of detail).
Point scene layers can be visualized with either 3D symbols or billboarded 2D symbols (i.e. picture markers). You can retrieve attribute information about a selected point feature in a scene via popups and, in some cases, via the standard API Search and Select methods, the same as with feature layers*. You can also snap to a point feature layer when snapping is enabled. Point feature attributes allow definition queries to specify symbology and other properties in lieu of setting properties for each object individually.
*Refer to Cached Attributes for more details on Search and Select.
A 3D object scene layer is typically used to represent assets including building shells, subsurface geological meshes, and schematic shapes representing 3D volumes and objects that may be derived from analytical operations. 3D object scene layers can be visualized with textures embedded into the 3D features and are created from multipatch features in ArcGIS Pro. 3D object scene layers are automatically thinned to improve performance and visibility. As with point scene layers, 3D object feature attributes can be queried via the API and support the use of definition queries.
Feature scene services can be created from three publishing workflows:
- A 3D point or multipatch feature class is packaged as an .slpk and the .slpk is shared as content on portal or online (any scene layer can be published via use of an .slpk)
- A "by value" feature class with geometry type of (3D) point or multipatch is shared as a web layer. Data is copied to the server to create a hosted feature service associated with the scene service.
- A "by ref" feature class with geometry type of (3D) point or multipatch is shared as a web layer. Data is referenced on the server to create a feature service associated with the scene service.
Consult Publish from an ArcGIS Pro scene for more details. Consult registering data with arcgis server for more information on registering data.
Cached Attributes
When a scene layer is created, its attributes are added to an internal cache. Cached attributes are used by scene layers to support popups, renderers, filters, and query definitions. However, when a web scene layer is published from a feature dataset, the scene layer gets access not just to the cached attributes but also to the attributes (stored in the feature dataset) for search and select.
Access to the feature dataset attributes in the geodatabase is enabled via an underlying feature service that is created on the feature dataset when it is published. The feature service is associated with the feature scene layer. When an associated feature service is present on the feature scene layer, the scene layer can support complex attribute and spatial queries via the Search and Select API methods, both of which can be used to access row cursors to iterate over selected feature rows. Editing is also supported by the associated feature service assuming that editing was enabled when the web layer was published (via the web scene layer publishing pane "Configuration" tab).
Capability | Scene layer with cached attributes only | Scene layer with associated feature service |
---|---|---|
Renderer/Visual Renderer | Cached attributes are used | Cached attributes are used |
Filter | Cached attributes are used | Cached attributes are used |
Labels | Cached attributes are used | Cached attributes are used |
Popups | Cached attributes are used | Cached attributes are used |
Definition query | Cached attributes are used | Cached attributes are used |
Search | Not supported | Feature service is used |
Select | Not supported | Feature service is used |
After any edits on the scene service, the scene layer cache must be partially or fully rebuilt in order to propagate the edits from the feature service to the scene service cache. This is also true if changes are made to feature attributes and geometry via an enterprise geodatabase (referenced by the associated feature service and registered on the server) that need to be propagated to the cache. To rebuild the scene layer cache, use the "Manage Cache" option on the scene layer content item "Settings" tab in portal. Updating Scene Layers in ArcGIS Online provides more information on how to rebuild scene layers to pick up data changes.
More information on scene layer editing workflows can be found in edit a scene layer with an associated feature layer, including rebuilding the web scene layer cache. If edits are made to attributes used in a query definition applied on the layer, then the cache must be partially or fully rebuilt for the attribute edits to be reflected in the query definition.
The presence of an associated feature service can be checked using the featureSceneLayer.HasAssociatedFeatureService property. If the property returns true, then an associated feature service is present and Search, Select, and editing (if enabled) are supported. The following table lists the capabilities provided by a feature scene layer with and without the associated feature service:
Capability | With feature service | Without feature service |
---|---|---|
Definition queries | Yes | Yes |
Field definitions | Yes | Yes |
Renderer | Yes | Yes |
Select* | Yes | Object IDs only* |
Search | Yes | Not supported |
Editing** | Yes** | Not supported |
*Selection via mapView.SelectFeaturesEx and mapView.GetFeaturesEx (which use a geometry) is always possible whether there is an associated feature service or not. However, SelectFeaturesEx and GetFeaturesEx return lists of object IDs, not features. The lists of object IDs are returned from the scene layer cache.
**Assumes editing is enabled on the associated feature service.
Note: Attempting to execute Search or Select on a scene layer when HasAssociatedFeatureService=false will throw a System.InvalidOperationException. In the following examples, a selection is illustrated first using the MapView and SelectFeaturesEx() and/or GetFeaturesEx(). HasAssociatedFeatureService is irrelevant in this case. Second, the same selection is executed, but this time using Select() and/or Search(). In this case HasAssociatedFeatureService==true is required, otherwise a System.InvalidOperationException will ensue. Notice that after the selection, fully populated features can be retrieved from the row cursor the same as from any feature class or feature layer.
Select features using MapView.Active.SelectFeaturesEx. Retrieve object IDs:
```c#
//var featSceneLayer = ...;
var sname = featSceneLayer.Name;
await QueuedTask.Run(() => {
  //Select all features within the current map view
  var sz = MapView.Active.GetViewSize();
  var c_ll = new Coordinate2D(0, 0);
  var c_ur = new Coordinate2D(sz.Width, sz.Height);
  //Use screen coordinates for 3D selection on MapView
  var env = EnvelopeBuilder.CreateEnvelope(c_ll, c_ur);
  //HasAssociatedFeatureService does not matter for SelectFeaturesEx
  //or GetFeaturesEx
  var result = MapView.Active.SelectFeaturesEx(env);
  //...or....
  //var result = MapView.Active.GetFeaturesEx(env);
  //The list of object ids from SelectFeaturesEx
  var oids = result.Where(kvp => kvp.Key.Name == sname).First().Value;
  //TODO - use the object ids
  MapView.Active.Map.ClearSelection();
});
```
Select features using Select or Search. Retrieve features. Check HasAssociatedFeatureService:
```c#
//var featSceneLayer = ...;
await QueuedTask.Run(() => {
  if (!featSceneLayer.HasAssociatedFeatureService)
    return;//no search or select
  //Select all features within the current map view
  var sz = MapView.Active.GetViewSize();
  var map_pt1 = MapView.Active.ClientToMap(new System.Windows.Point(0, sz.Height));
  var map_pt2 = MapView.Active.ClientToMap(new System.Windows.Point(sz.Width, 0));
  //Convert to an envelope
  var temp_env = EnvelopeBuilder.CreateEnvelope(map_pt1, map_pt2,
    MapView.Active.Map.SpatialReference);
  //Project if needed to the layer spatial ref
  SpatialReference sr = null;
  using (var fc = featSceneLayer.GetFeatureClass())
  using (var fdef = fc.GetDefinition())
    sr = fdef.GetSpatialReference();
  var env = GeometryEngine.Instance.Project(temp_env, sr) as Envelope;
  //Set up a query filter
  var sf = new SpatialQueryFilter() {
    FilterGeometry = env,
    SpatialRelationship = SpatialRelationship.Intersects,
    SubFields = "*"
  };
  //Select against the feature service
  var select = featSceneLayer.Select(sf);
  //...or...using(var rc = featSceneLayer.Search(sf))...
  if (select.GetCount() > 0) {
    //enumerate over the selected features
    using (var rc = select.Search()) {
      while (rc.MoveNext()) {
        var feature = rc.Current as Feature;
        var oid = feature.GetObjectID();
        //etc.
      }
    }
  }
  MapView.Active.Map.ClearSelection();
});
```
The IntegratedMeshSceneLayer is designed for visualizing accurate representations of infrastructure and natural landscapes. Integrated mesh data is typically captured by an automated process for constructing 3D objects from large sets of overlapping imagery. The result integrates the original input image information as a textured mesh using a triangular interlaced structure. An integrated mesh can represent built and natural 3D features, such as building walls, trees, valleys, and cliffs, with textures.
Integrated mesh scene layers are typically focused on specific areas such as a building site (e.g. constructed from drone imagery) but can be created from citywide or global 3D mapping if required. Though integrated mesh scene layers provide standard layer capabilities such as visibility, snappability, show legend, name, and so forth, they do not have a specialized API and are not discussed further.
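As a minimal sketch, an integrated mesh layer is manipulated via the standard Layer members (plus ISceneLayerInfo), assuming one is present in the active map:

```c#
//Must be called within QueuedTask.Run()
var mesh = MapView.Active.Map.GetLayersAsFlattenedList()
             .OfType<IntegratedMeshSceneLayer>().FirstOrDefault();
if (mesh != null)
{
  //Standard layer capabilities apply - e.g. toggle visibility
  mesh.SetVisibility(!mesh.IsVisible);
  //ISceneLayerInfo members are likewise available
  var sourceType = ((ISceneLayerInfo)mesh).GetDataSourceType();
}
```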
Point cloud scene layers allow for fast display and consumption of large volumes of point cloud data. The primary source of point cloud data is usually LAS, LAZ, or zLAS lidar data. Lidar surveys for terrain, buildings, forest canopy, roads, bridges, overpasses, and more can make up the point cloud data used for a point cloud scene layer. Point cloud scene layers are scalable and can be rendered at an optimized point resolution for a given area. Point cloud color intensity, point density, and point size can also be varied. Refer to Extended Properties for more information.
PointCloudSceneLayers can be filtered based on the underlying classification code(s) and/or return value(s) stored with the points in the point cloud. PointCloudSceneLayer filters are defined using a PointCloudFilterDefinition that consists of three components: a value filter, a return value filter, and a flag filter (for a total of up to three individual filters on the layer).
- The value filter contains a list of classification codes that can be used to include features.
- The return value filter contains a list of return values that can be used to include features.
- The flag filter contains a list of classification flags that can be used to include or exclude features. (Classification flags can also be added to the filter with a status of "ignore" meaning that they play no role in the filtering even though they are present in the filter).
Classification codes, return values, and classification flags are detailed below:
Classification codes
Classification codes are used to define the type of surface, or surfaces, that reflected the lidar pulse. Classification codes follow the American Society for Photogrammetry and Remote Sensing (ASPRS) specification* for LAS formats 1.1, 1.2, 1.3, and 1.4 and include codes for building, ground, water, vegetation, and so on. The classification codes present on a given point cloud layer can be retrieved via Dictionary<int, string> pointCloudSceneLayer.QueryAvailableClassCodesAndLabels(). Classification codes set in the list of classification codes on the filter are always included. A classification code that is not present in the list is assumed to be excluded by the filter. Classification codes are stored within the point cloud in its CLASS_CODE field. Internally, the list of classification codes is converted to a CIMPointCloudValueFilter.
*The complete set of available classification codes from ASPRS.
Return Values
When a lidar pulse is emitted from a sensor, it can have multiple return values depending on the nature of the surfaces the pulses encounter. The first returns will typically be associated with the highest objects encountered (e.g. tops of trees or buildings) and the last returns with the lowest objects encountered (e.g. the ground). The return value is stored within the point cloud in its RETURNS field.
The return values that can be specified are represented as PointCloudReturnType enums. The available values are:
- All (default)
- Last
- FirstOfMany
- LastOfMany
- Single
The list of returns can include any combination of PointCloudReturnTypes. Note: Specifying "All" is equivalent to specifying all available return value types individually.
In this example, a filter definition is created excluding unclassified and low noise points and limiting the return values to just "FirstOfMany":
```c#
//Must be called on the MCT
//var pcsl = ...;
//Retrieve the available classification codes
var dict = pcsl.QueryAvailableClassCodesAndLabels();
//Filter out low noise and unclassified (7 and 1 respectively)
//consult https://pro.arcgis.com/en/pro-app/help/data/las-dataset/storing-lidar-data.htm
var filterDef = new PointCloudFilterDefinition() {
  ClassCodes = dict.Keys.Where(c => c != 7 && c != 1).ToList(),
  ReturnValues = new List<PointCloudReturnType> { PointCloudReturnType.FirstOfMany }
};
//apply the filter
pcsl.SetFilters(filterDef.ToCIM());
```
Classification flags
In many cases, when a classification is carried out on lidar data, points can fall into more than one classification category. In these cases, classification flags are specified in the lidar data to provide a secondary description or classification for the points. Classification flag values include synthetic, key-point, withheld, and overlap (see below). The classification flags present on a given point cloud layer can be retrieved via Dictionary<int, string> pointCloudSceneLayer.QueryAvailableClassFlags(). When a classification flag is specified, it is associated with a ClassFlagOption which specifies:
- Include. The specified flag must be present on the data to be displayed
- Exclude. If the specified flag is present on the data then the data will not be displayed
- Ignore. The flag is ignored for the purposes of display filtering.
The complete set of flags and their descriptions is as follows:
Flag | Description | Notes |
---|---|---|
0 | Synthetic | The point was created by a technique other than LIDAR collection such as digitized from a photogrammetric stereo model or by traversing a waveform |
1 | Key-point | The point is considered to be a model key-point and thus generally should not be withheld in a thinning algorithm |
2 | Withheld | The point should not be included in processing (synonymous with Deleted) |
3 | Overlap | The point is within the overlap region of two or more swaths or takes. Setting this bit is not mandatory (unless, of course, it is mandated by a particular delivery specification) but allows Classification of overlap points to be preserved. |
4 | Not used | -- |
5 | Not used | -- |
6 | Scan Direction* | The Scan Direction Flag denotes the direction at which the scanner mirror was traveling at the time of the output pulse. A point with the scan direction bit value set is a positive scan direction, otherwise it is a negative scan direction. Positive scan direction is a scan moving from the left side of the in-track direction to the right side. Negative scan direction is the opposite. |
7 | Edge of Flight Line* | The Edge of Flight Line data bit has a value of 1 only when the point is at the end of a scan. It is the last point on a given scan line before it changes direction |
*Some vendors include the scan direction and edge of flight line flags in their data but it is uncommon.
In this example, an existing filter is modified to exclude any readings with the flag "edge of flight line" #7.
```c#
//Must be called on the MCT
//var pcsl = ...;
var filters = pcsl.GetFilters();
PointCloudFilterDefinition filterDef = null;
if (filters.Count() == 0) {
  filterDef = new PointCloudFilterDefinition() {
    //7 is "edge of flight line"
    ClassFlags = new List<ClassFlag> { new ClassFlag(7, ClassFlagOption.Exclude) }
  };
}
else {
  filterDef = PointCloudFilterDefinition.FromCIM(filters);
  //keep any include or ignore class flags
  var keep = filterDef.ClassFlags.Where(
    cf => cf.ClassFlagOption != ClassFlagOption.Exclude).ToList();
  //exclude edge of flight line
  keep.Add(new ClassFlag(7, ClassFlagOption.Exclude));
  filterDef.ClassFlags = keep;
}
pcsl.SetFilters(filterDef.ToCIM());
```
There are four different types of renderer that can be applied to a PointCloudSceneLayer: RGB, Stretch, Classify, and Unique Values. Each renderer type requires that one or more particular fields be present on the underlying point cloud. The available fields for each renderer type can be queried using pcsl.QueryAvailablePointCloudRendererFields(pointCloudRendererType). If no fields are returned (i.e. an empty list), then there are no fields available on the point cloud scene layer to support that renderer type; see the guard sketch after the table below. Setting an invalid renderer on the point cloud will disable rendering on the layer (until the invalid renderer is replaced).
The list of renderer types and their associated fields is given in the table below.
Renderer | Supported Field(s) |
---|---|
StretchRenderer | ELEVATION, INTENSITY |
ClassBreaksRenderer | ELEVATION, INTENSITY |
RGBRenderer | RGB |
UniqueValueRenderer | CLASS_CODE, RETURNS |
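A minimal guard sketch (the pcsl variable is assumed to reference a point cloud scene layer) that checks field availability before attempting to create a renderer:

```c#
//Must be called within QueuedTask.Run()
//var pcsl = ...;
var fields = pcsl.QueryAvailablePointCloudRendererFields(
  PointCloudRendererType.RGBRenderer);
if (fields.Count == 0)
{
  //No RGB field on this point cloud - an RGB renderer
  //cannot be supported on this layer
  return;
}
```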
The general pattern for defining a renderer is to create a PointCloudRendererDefinition for the relevant type along with the particular field it will use for the rendering. Both the renderer type and the field name must be set:
```c#
//setting a definition for an RGB renderer
var rgbDef = new PointCloudRendererDefinition(
  PointCloudRendererType.RGBRenderer) {
  Field = "RGB"//assumes RGB field is present
};
```
Once a renderer definition has been defined, it is converted to a CIMPointCloudRenderer using pcsl.CreateRenderer(pointCloudRendererDef) and can be further modified and refined using the CIM. This is shown in a couple of examples below. Point cloud renderers also provide additional capabilities to vary point color intensity, size, and density. These options are covered in the "Extended Properties" section.
RGB CIMPointCloudRGBRenderer
During capture, a point cloud sourced from lidar can be attributed with RGB bands (red, green, and blue) in a field called RGB. If this field is present on the point cloud, and an RGB renderer is specified, the stored RGB color will be used in the point rendering to visualize the points with their original scan color.
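A minimal sketch applying an RGB renderer, assuming the RGB field is available on the layer:

```c#
//Must be called within QueuedTask.Run()
//var pcsl = ...;
var fields = pcsl.QueryAvailablePointCloudRendererFields(
  PointCloudRendererType.RGBRenderer);
if (fields.Count > 0)
{
  //make the definition first
  var rgbDef = new PointCloudRendererDefinition(
    PointCloudRendererType.RGBRenderer) {
    Field = fields[0]//"RGB"
  };
  //Create the CIM renderer and apply it
  var rgbRenderer = pcsl.CreateRenderer(rgbDef) as CIMPointCloudRGBRenderer;
  pcsl.SetRenderer(rgbRenderer);
}
```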
Stretch CIMPointCloudStretchRenderer
A stretch renderer associates a color ramp with a value range, read from either the ELEVATION (default) or INTENSITY field, to interpolate the color of each point. The range of colors on the color ramp is stretched between a specified cimPointCloudStretchRenderer.RangeMin and cimPointCloudStretchRenderer.RangeMax to "map" the relevant color to be used for each point.
In this example, a stretch renderer is created and applied to the point cloud:
```c#
var pcsl = MapView.Active.Map.GetLayersAsFlattenedList()
  .OfType<PointCloudSceneLayer>().First();
await QueuedTask.Run(() => {
  var fields = pcsl.QueryAvailablePointCloudRendererFields(
    PointCloudRendererType.StretchRenderer);
  //make the definition first
  var stretchDef = new PointCloudRendererDefinition(
    PointCloudRendererType.StretchRenderer) {
    //Will be either ELEVATION or INTENSITY
    Field = fields[0]
  };
  //Create the CIM Renderer
  var stretchRenderer = pcsl.CreateRenderer(stretchDef) as CIMPointCloudStretchRenderer;
  //Apply a color ramp
  var style = Project.Current.GetItems<StyleProjectItem>().First(
    s => s.Name == "ArcGIS Colors");
  var colorRamp = style.SearchColorRamps("").First();
  stretchRenderer.ColorRamp = colorRamp.ColorRamp;
  //Apply modulation
  stretchRenderer.ColorModulation = new CIMColorModulationInfo() {
    MinValue = 0,
    MaxValue = 100
  };
  //apply the renderer
  pcsl.SetRenderer(stretchRenderer);
});
```
Class break CIMPointCloudClassBreaksRenderer
Class breaks define value ranges that map to specific colors. The values come from either the ELEVATION (default) or INTENSITY field of each point. Each specific "break" is defined as a CIMColorClassBreak. Each CIMColorClassBreak specifies the upper bound of the break (the preceding break's upper bound is implicitly the lower bound) and the relevant color.
This example shows creating a class breaks renderer with 6 breaks:
```c#
var pcsl = MapView.Active.Map.GetLayersAsFlattenedList()
  .OfType<PointCloudSceneLayer>().First();
await QueuedTask.Run(() => {
  var fields = pcsl.QueryAvailablePointCloudRendererFields(
    PointCloudRendererType.ClassBreaksRenderer);
  //make the definition first
  var classBreakDef = new PointCloudRendererDefinition(
    PointCloudRendererType.ClassBreaksRenderer) {
    //ELEVATION or INTENSITY
    Field = fields[0]
  };
  //create the renderer
  var cbr = pcsl.CreateRenderer(classBreakDef) as CIMPointCloudClassBreaksRenderer;
  //Set up a color scheme to use
  var style = Project.Current.GetItems<StyleProjectItem>().First(
    s => s.Name == "ArcGIS Colors");
  var rampStyle = style.LookupItem(
    StyleItemType.ColorRamp, "Spectrum By Wavelength-Full Bright_Multi-hue_2")
      as ColorRampStyleItem;
  var colorScheme = rampStyle.ColorRamp;
  //Set up 6 manual class breaks
  var breaks = 6;
  var colors = ColorFactory.Instance.GenerateColorsFromColorRamp(
    colorScheme, breaks);
  var classBreaks = new List<CIMColorClassBreak>();
  var min = cbr.Breaks[0].UpperBound;
  var max = cbr.Breaks[cbr.Breaks.Count() - 1].UpperBound;
  var step = (max - min) / (double)breaks;
  //add in the class breaks
  double upper = min;
  for (int b = 1; b <= breaks; b++) {
    double lower = upper;
    upper = b == breaks ? max : min + (b * step);
    var cb = new CIMColorClassBreak() {
      UpperBound = upper,
      Label = string.Format("{0:#0.0#} - {1:#0.0#}", lower, upper),
      Color = colors[b - 1]
    };
    classBreaks.Add(cb);
  }
  cbr.Breaks = classBreaks.ToArray();
  pcsl.SetRenderer(cbr);
});
```
Unique value CIMPointCloudUniqueValueRenderer
Unique values apply color to the point cloud via classification of an existing attribute such as CLASS_CODE (default) or RETURNS (number of return values). Each unique value and its corresponding color is defined as a CIMColorUniqueValue within the renderer. This renderer is used to visualize points of the same type. It does not interpolate colors along a color ramp (e.g. as with the Stretch and ClassBreaks renderers).
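A minimal sketch applying a unique value renderer follows the same create-definition, create-renderer pattern as the other renderer types (it assumes, as with the class breaks example, that CreateRenderer pre-populates the renderer with default value/color pairs):

```c#
//Must be called within QueuedTask.Run()
//var pcsl = ...;
var fields = pcsl.QueryAvailablePointCloudRendererFields(
  PointCloudRendererType.UniqueValueRenderer);
//make the definition first - the field will be CLASS_CODE or RETURNS
var uvrDef = new PointCloudRendererDefinition(
  PointCloudRendererType.UniqueValueRenderer) {
  Field = fields[0]
};
//Create the CIM renderer. Its CIMColorUniqueValue value/color
//pairs can be further modified here if needed
var uvr = pcsl.CreateRenderer(uvrDef) as CIMPointCloudUniqueValueRenderer;
//apply the renderer
pcsl.SetRenderer(uvr);
```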
Extended Properties
Extended properties are used to control color intensity, point shape, size, and density across any point cloud renderer. Extended point cloud renderer properties must be set via the CIM. Access the CIM either via the standard layer GetDefinition() method to retrieve the entire CIM scene layer definition, or via the point cloud layer GetRenderer() method to retrieve just the CIM renderer. If the entire definition is retrieved, the renderer can be accessed via the CIM definition's Renderer property.
Color Modulation
Color modulation is used to display scanned surfaces of a point cloud in a more realistic way by adjusting the point color to the intensity value captured by the scanner. Color modulation increases (or decreases) the contrast between points to help distinguish different parts of the scanned surface. Points with lower intensity values become darker whereas points with higher intensity values become brighter. The minimum value specifies the intensity value at which points will be the darkest, and vice versa for the maximum intensity value. Color modulation is specified within the renderer by setting its ColorModulation property to a CIMColorModulationInfo.
Point Shape
Point shape is set on the renderer PointShape property. It can be set to a value of PointCloudShapeType.DiskFlat (default) or a value of PointCloudShapeType.DiskShaded. If DiskFlat is specified, 3D points are rendered flat, without shading. If DiskShaded is specified, 3D points are rendered with shading, giving an enhanced 3D appearance.
Point Size
Point size can be varied either by specifying a fixed size in map or pixel units or by increasing/decreasing the size of the points based on a percentage. Point size is adjusted using a CIMPointCloudAlgorithm which is applied to the renderer PointSizeAlgorithm property.
- To specify a fixed size, use a CIMPointCloudFixedSizeAlgorithm, setting the fixed point Size and UseRealWorldSymbolSizes = true to use real-world units for Size, or UseRealWorldSymbolSizes = false to use pixels.
- To specify a scale factor of 0 - 100% (set as a double varying from 0 to 1.0 for 100%), use a CIMPointCloudSplatAlgorithm. A minimum point size, MinSize, can also be specified, below which points will not be scaled.
Point Density
Although not technically properties of the point cloud scene layer renderer, manipulating the point density does affect the rendering of the point cloud and is included here for completeness. Point density can be manipulated via two CIMPointCloudLayer properties:
- CIMPointCloudLayer.PointsBudget: Controls the absolute maximum number of points that will be displayed. Default is 1,000,000. PointsBudget corresponds to Display Limit on the Appearance tab.
- CIMPointCloudLayer.PointsPerInch: Controls the maximum number of points that will be rendered per (display) inch. Default is 15. PointsPerInch corresponds to Density on the Appearance tab.
In the following example, the extended properties of the renderer are modified:
```c#
var pcsl = MapView.Active.Map.GetLayersAsFlattenedList()
  .OfType<PointCloudSceneLayer>().FirstOrDefault();
await QueuedTask.Run(() => {
  //Get the CIM Definition
  var def = pcsl.GetDefinition() as CIMPointCloudLayer;
  //Modulation
  var modulation = def.Renderer.ColorModulation;
  if (modulation == null)
    modulation = new CIMColorModulationInfo();
  //Set the minimum and maximum intensity as needed
  modulation.MinValue = 0;
  modulation.MaxValue = 100.0;
  //Assign the modulation back to the renderer
  def.Renderer.ColorModulation = modulation;
  //Set the point shape and sizing on the renderer to be
  //fixed, 8 pixels, shaded disc
  def.Renderer.PointShape = PointCloudShapeType.DiskShaded;
  def.Renderer.PointSizeAlgorithm = new CIMPointCloudFixedSizeAlgorithm() {
    UseRealWorldSymbolSizes = false,
    Size = 8
  };
  //Modify Density settings - these are not on the renderer fyi
  //PointsBudget - corresponds to Display Limit on the UI
  // - the absolute maximum # of points to display
  def.PointsBudget = 1000000;
  //PointsPerInch - corresponds to Density Min --- Max on the UI
  // - the max number of points per display inch to render
  def.PointsPerInch = 15;
  //Commit changes back to the CIM
  pcsl.SetDefinition(def);
});
```