How is hierarchy assigned to levels generated by Contrast Split Segmentation?
I am trying to understand the concept of using a relation to super-object class feature in a conditional expression within an assign class algorithm applied to one of many levels generated with contrast split segmentation.
For context, I'm trying to fully understand a chunk of a preexisting rule set (below) where, if I understand correctly, a user took a Canopy Height Model (CHM) and ran six separate contrast split segmentation processes on it in order to classify the raster into six height value ranges. The level representing the lowest height range is then copied, and the copy is specified to sit above the original level it was created from. Finally, the assign class algorithm is run on this copied (lowest height) level five separate times so that each of the other height range classes can be brought over to that copied level.
Some questions that came up in looking at this rule set:
1) What exactly does it mean to assign a class based on the condition of a super-object in another level? In my example, I hope I am correct in saying that when an object is classified via the contrast split segmentation algorithm (edge ratio contrast mode), the class assignment exists only within the new level generated at the same time. Thus, in order to end up with one level containing all six height value range classes, this rule set used five independent assign class processes based on the existence of super-objects, each of which brought its respective class over to the copy of the lowest height range level.
2) How is hierarchy determined among levels generated from the same CHM using the Contrast Split Segmentation algorithm? Is there one image object level hierarchy per project or can there be multiple? Can levels conceptually be in the same plane (i.e. on the same level)? I ask because I am trying to understand why in the assign class processes, the condition used is based on the existence of a super object instead of a sub object. Would I be correct in assuming the following Hierarchical Image Object Levels from the Contrast Split Segmentation and Copy Level processes?
Copy_Level_0.1_0.5_meter
Level_0.1_0.5_meter / Level_0.5-3.0_meter/ Level_3.0-4.5_meter / Level_4.5-6_meter / etc.
CHM (pixel Level)
3) A related question: how would one know what distance value to use in order to create a new 'Existence of' relation to super-objects? In this example, each assign class process runs with a modified version of the following condition: Existence of super objects 0.5 - 3.0 meter (1) = 1 (referencing a preexisting relational feature created via Feature View > Class features > Relations to super-objects > Existence of > Create new 'Existence of'). The subsequent assign class processes in the rule set use a distance value that increases by one at each step (e.g. the second assign class process uses Existence of super objects 3.0 - 4.5 meter (2) = 1, etc.). I'm confused as to why the hierarchical distances increase.
4) How is Value 2 evaluated in an assign class condition when Existence of super-object is used as Value 1? Is this a binary evaluation of either 1 (present) or 0 (not present), as in Existence of super objects 4.5 - 6.0 meter (3) = 1?
Thank you in advance for the help!
Thanks for your questions. They are not easy to answer, but I will give it my best:
First, your rule set generates six image object levels and classifies the objects that fit into the different height classes (e.g. GT 8.0 or 0.1 – 0.5 meter, etc.) on the respective levels. I do not know why the rule set developer used the 'contrast split segmentation' algorithm here; I believe a 'multi-threshold segmentation' would also do the job (more easily, and on a single image object level), but that is not so important here.
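To illustrate why a single multi-threshold pass could replace the six separate segmentations, here is a minimal pure-Python sketch. This is only a conceptual analogy (eCognition itself is configured through its GUI/CNL, not Python), and the exact bin boundaries beyond the ranges named in this thread are my assumption:

```python
# Hypothetical height thresholds (meters) mirroring the ranges in the rule
# set; the 6.0-8.0 m bin is assumed from the "GT 8.0" class mentioned above.
THRESHOLDS = [
    (0.1, 0.5, "0.1 - 0.5 meter"),
    (0.5, 3.0, "0.5 - 3.0 meter"),
    (3.0, 4.5, "3.0 - 4.5 meter"),
    (4.5, 6.0, "4.5 - 6.0 meter"),
    (6.0, 8.0, "6.0 - 8.0 meter"),
]

def classify_height(h):
    """One pass over the thresholds assigns the height class directly,
    instead of one segmentation process per class."""
    for lo, hi, name in THRESHOLDS:
        if lo <= h < hi:
            return name
    return "GT 8.0 meter" if h >= 8.0 else "unclassified"

print([classify_height(h) for h in (0.2, 1.7, 5.1, 9.3)])
```

The point is simply that one threshold table produces all six classes in a single step, which is what a multi-threshold segmentation does on one image object level.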
All of those levels, and the classifications inside them, are used in the 'Tree_Canopy_Classification' rule set group to assign the classes to the image objects on the level 'Copy_Level_0.1_0.5_meter'. If I understand the code correctly, the rule set does this to copy all of the results onto one image object level, so that you end up with a single, fully classified level. That is all.
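Conceptually, each of those five assign class steps asks "does a super-object of class X exist d levels above this object?", gets back a binary 1/0, and, if the answer is 1, copies class X down. Here is a toy model of that logic, assuming the feature checks the super-object exactly d levels up; this is not eCognition CNL, and all names are hypothetical:

```python
# Toy model of an image object hierarchy: each object may have one
# super-object on the next level up.
class ImageObject:
    def __init__(self, cls=None, super_object=None):
        self.cls = cls
        self.super_object = super_object

def existence_of_super_object(obj, class_name, distance):
    """Return 1 if the super-object `distance` levels up has class
    `class_name`, else 0 -- i.e. the feature evaluates to a binary 0/1."""
    current = obj
    for _ in range(distance):
        if current.super_object is None:
            return 0
        current = current.super_object
    return 1 if current.cls == class_name else 0

# Assumed stack: copy-level object -> "0.5 - 3.0 meter" level (distance 1)
#                -> "3.0 - 4.5 meter" level (distance 2)
top = ImageObject(cls="3.0 - 4.5 meter")
mid = ImageObject(cls="0.5 - 3.0 meter", super_object=top)
obj = ImageObject(super_object=mid)

# The assign class step: condition "Existence of super objects X (1) = 1"
if existence_of_super_object(obj, "0.5 - 3.0 meter", 1) == 1:
    obj.cls = "0.5 - 3.0 meter"
print(obj.cls)
```

In this model the distance increments naturally: each additional segmentation level sits one step farther up the stack from the copy level, so the next class has to be fetched from one level higher.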
But to answer your specific questions:
I feel your pain in trying to understand the 'code' of other rule set developers; it is not easy. eCognition offers so many possibilities for implementing ideas that transform data into information, but on the other hand this flexibility is one of eCognition's strengths. The good thing is that a rule set can be tested step by step, so that you can "see" what happened and check all of the parameters, etc.
I hope this helps. If not, no worries: please ask further questions.
Cheers,
Christian