Example: Anisotropic Diffusion
The code for this example is available here.
This tutorial shows how to implement a simplified version of the Perona-Malik anisotropic diffusion [1]. This technique smooths an image while preserving its edge information, which makes it a useful preprocessing step for edge detection and segmentation.
It is a non-linear filter: each pixel is smoothed based on the pixels surrounding it, and an exponential function determines how much smoothing is applied in each region.
The image below shows the equations of the simplified version of the Perona-Malik anisotropic diffusion.
![perona_malik_equations](https://raw.githubusercontent.com/wiki/codeplaysoftware/visioncpp/images/perona_malik_equations.png)
It is an iterative method, where I0(x,y) is the original image. There are two parameters:
- k: the factor in the exponential function, related to edge detection; the larger the value, the more regions are considered edges.
- Number of iterations: how many times we apply the filter to the image. The more iterations we apply, the smoother the image gets, while edges are still preserved.
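For reference, the update rule can also be written out explicitly. The formulation below is reconstructed from the functor further down rather than copied from the image above; N(x,y) denotes the 3x3 neighbourhood of a pixel and t the iteration index:

```latex
w_t(i,j) = e^{-k \,\lvert I_t(x,y) - I_t(i,j) \rvert}, \qquad
I_{t+1}(x,y) = \frac{\sum_{(i,j) \in N(x,y)} w_t(i,j)\, I_t(i,j)}
                    {\sum_{(i,j) \in N(x,y)} w_t(i,j)}
```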
To implement this algorithm in VisionCpp, we translate the equations into C++ code. The code below shows the functor which implements the simplified version of the anisotropic diffusion. We use the cl::sycl::float4 data structure since it provides operator overloading for the basic math operations.
```cpp
// functor which implements the simplified anisotropic diffusion
struct AniDiff {
  template <typename T>
  visioncpp::pixel::F32C3 operator()(T nbr) {
    // init output pixel
    cl::sycl::float4 out(0, 0, 0, 0);
    // init sum variable, which is used to normalize the weights
    cl::sycl::float4 sum_w(0, 0, 0, 0);
    // get centre pixel
    cl::sycl::float4 p1(nbr.at(nbr.I_c, nbr.I_r)[0],
                        nbr.at(nbr.I_c, nbr.I_r)[1],
                        nbr.at(nbr.I_c, nbr.I_r)[2], 0);
    // iterate over a 3x3 neighbourhood
    for (int i = -1; i <= 1; i++) {
      for (int j = -1; j <= 1; j++) {
        // get neighbour pixel
        cl::sycl::float4 p2(nbr.at(nbr.I_c + i, nbr.I_r + j)[0],
                            nbr.at(nbr.I_c + i, nbr.I_r + j)[1],
                            nbr.at(nbr.I_c + i, nbr.I_r + j)[2], 0);
        // compute the weight from the anisotropic diffusion equation
        cl::sycl::float4 w = cl::sycl::exp((-k) * cl::sycl::fabs(p1 - p2));
        // accumulate the weights for normalization
        sum_w += w;
        // accumulate the weighted neighbour into the output
        out += w * p2;
      }
    }
    // normalize output and return
    out = out / sum_w;
    return visioncpp::pixel::F32C3(out.x(), out.y(), out.z());
  }
};
```
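Note that `k` is not declared inside the functor, so it must be a constant visible at the point where `AniDiff` is compiled. A minimal sketch is shown below; the value is illustrative, not taken from the original example:

```cpp
// edge factor for the exponential weight; larger values cause the weights
// to fall off faster, so more regions are treated as edges
// (illustrative value, not from the original example)
constexpr float k = 30.0f;
```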
As the equation shows, this is an iterative method, so we can use VisionCpp's device-only memory. Device-only memory stores temporary results on the device, meaning that intermediate results are never transferred to the host.
After creating the functor for the anisotropic diffusion, we can split the creation of the expression tree into four steps:
- Creation of terminal nodes
- Convert input data to float
- Apply the Anisotropic Diffusion
- Copy device memory to host
As in the other VisionCpp tutorials, we first need to initialize our terminal nodes, which hold the input data, the output data and the device-only memory. The device-only memory will be used to store all temporary computations.
```cpp
// input terminal node
auto in_node =
    visioncpp::terminal<visioncpp::pixel::U8C3, COLS, ROWS,
                        visioncpp::memory_type::Buffer2D>(input.data);
// output terminal node
auto out_node =
    visioncpp::terminal<visioncpp::pixel::U8C3, COLS, ROWS,
                        visioncpp::memory_type::Buffer2D>(output.get());
// device-only memory
auto device_memory =
    visioncpp::terminal<visioncpp::pixel::F32C3, COLS, ROWS,
                        visioncpp::memory_type::Buffer2D>();
```
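The snippet above assumes that `input`, `output`, `COLS` and `ROWS` already exist. A minimal sketch of how they might be prepared, assuming OpenCV is used for image I/O as in the other VisionCpp tutorials (the file name and dimensions are illustrative):

```cpp
#include <memory>
#include <opencv2/opencv.hpp>

// COLS and ROWS must be compile-time constants matching the image size
constexpr size_t COLS = 640;
constexpr size_t ROWS = 480;

// load an 8-bit, 3-channel image with OpenCV
cv::Mat input = cv::imread("input.jpg");

// host buffer that will receive the final result
std::shared_ptr<unsigned char> output(new unsigned char[COLS * ROWS * 3],
                                      [](unsigned char *p) { delete[] p; });
```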
Then we need to convert the input data to float.
```cpp
// convert to float
auto frgb = visioncpp::point_operation<visioncpp::OP_U8C3ToF32C3>(in_node);
// assign to the temporary device memory
auto exec1 = visioncpp::assign(device_memory, frgb);
```
The next step is to create the nodes that apply the anisotropic diffusion. The neighbour_operation receives the functor as its first argument, followed by the top/left/right/bottom halo sizes. For this algorithm we only need a halo of one neighbour pixel in each direction.
```cpp
// apply anisotropic diffusion
auto anidiff =
    visioncpp::neighbour_operation<AniDiff, 1, 1, 1, 1>(device_memory);
// assign to the temporary device memory
auto exec2 = visioncpp::assign(device_memory, anidiff);
```
In the last step we convert the result back to unsigned char and assign the device data to the host.
```cpp
// convert to uchar
auto urgb =
    visioncpp::point_operation<visioncpp::OP_F32C3ToU8C3>(device_memory);
// assign to the host memory
auto exec3 = visioncpp::assign(out_node, urgb);
```
After creating the expression tree, we can execute it. The anisotropic diffusion step runs in a loop, and the number of iterations is the parameter that controls how smooth the image becomes. When we run exec1 and exec2, all the computation stays on the device; exec3 finally transfers the device data to the host, where it can be used to display the result.
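The execute calls below also need a device handle `dev`. A sketch of creating one, assuming the SYCL back-end running on the CPU as in the other VisionCpp tutorials:

```cpp
// select a device for executing the expression tree
auto dev = visioncpp::make_device<visioncpp::backend::sycl,
                                  visioncpp::device::cpu>();
```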
```cpp
// execution (convert to float)
visioncpp::execute<visioncpp::policy::Fuse, 32, 32, 16, 16>(exec1, dev);
// apply anisotropic diffusion several times
for (int i = 0; i < iters; i++) {
  visioncpp::execute<visioncpp::policy::Fuse, 32, 32, 16, 16>(exec2, dev);
}
// return image to host memory
visioncpp::execute<visioncpp::policy::Fuse, 32, 32, 16, 16>(exec3, dev);
```
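At this point the `output` buffer holds the final image on the host. A minimal sketch of displaying it, again assuming OpenCV and the illustrative `output` pointer created earlier:

```cpp
// wrap the host buffer in a cv::Mat header (no copy is made) and display it
cv::Mat result(ROWS, COLS, CV_8UC3, output.get());
cv::imshow("Anisotropic Diffusion", result);
cv::waitKey(0);
```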
The steps above compute a simplified version of the anisotropic diffusion. The images below show the results: the filter smooths the image while preserving the edge information.
*Reference image*

*Result of the anisotropic diffusion*
[1]: Perona, Pietro, and Jitendra Malik. "Scale-space and edge detection using anisotropic diffusion." IEEE Transactions on Pattern Analysis and Machine Intelligence 12.7 (1990): 629-639.