
A new hybrid algorithm for intelligent detection of sudden decline syndrome of date palm disease – Scientific Reports


This study proposes a new hybrid fuzzy fast multi-Otsu K-Means (FFMKO) algorithm that integrates date palm image enhancement, robust thresholding, and optimal clustering for reliable disease identification. The algorithm adopts a multi-operator image-resizing cost function based on image energy and the dominant color descriptor, an adaptive fuzzy noise filter, and Otsu image thresholding combined with K-Means clustering enhancements. In addition, we validate the process with histogram equalization and threshold transformation to improve color feature extraction from date palm images. Figure 2 depicts the proposed research methodology.

Algorithm FFMKO: Fuzzy Fast Multi-Otsu K-Means

Inputs: Problem set P, disease classes B, preprocessing modules C, FMO methods

Outputs: Final report R for each problem P

1. Define the P problems from the problem space S, where each \({{\varvec{S}}}_{j}\) = \((\text {s}_{1j}\), \(\text {s}_{2j}\), \(\text {s}_{3j}\)), where j \(\in\) {1, 2, …, N}.

2. Identify the disease classes B such that \({{\varvec{B}}}_{q}\) = \((\text {b}_{1}\), \(\text {b}_{2}\), …, \(\text {b}_{q}\)), where q \(\in\) {1, 2, …, N}, based on the disease types available for each problem domain.

3. Prepare/preprocess the C data modules \({{\varvec{C}}}_{p}\) = \((\text {C}_{1}\), \(\text {C}_{2}\), …, \(\text {C}_{p}\)), where p \(\in\) {1, 2, …, N}, covering data enhancement, normalization, smoothing, augmentation, filtration, etc.

4. For each problem P\(_{j}\), identify the FMO methods such that each P\(_{j}\) maps to one or more FMO methods.

5. For each FMO method, use random data sampling.

6. Validate each FMO method on P with the required performance parameters \({{\varvec{PP}}}_{a}\) = \((\text {PP}_{1}\), \(\text {PP}_{2}\), …, \(\text {PP}_{a}\)), where a \(\in\) {1, 2, …, N}.

7. Trace and pick the most significant FMO method based on the performance parameters PP.

8. If step 7 is satisfied and the disease types \({{\varvec{B}}}_{q}\) = \((\text {b}_{1}\), \(\text {b}_{2}\), …, \(\text {b}_{q}\)), where q \(\in\) {1, 2, …, N}, are authenticated, produce the final report \({{\varvec{R}}}_{j}\) = \((\text {r}_{1j}\), \(\text {r}_{2j}\), \(\text {r}_{3j}\)) for each \({{\varvec{P}}}_{j}\) = \((\text {p}_{1j}\), \(\text {p}_{2j}\), \(\text {p}_{3j}\)), where j \(\in\) {1, 2, …, N}; otherwise revisit step 4. A code sketch of this loop is given below.
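The following Python sketch illustrates how steps 1–8 could be organized as a driver loop. It is a structural illustration only: `preprocess`, `run_fmo_method`, and the acceptance threshold are hypothetical placeholders, not the authors' implementation.

```python
# Structural sketch of the FFMKO driver loop (steps 1-8).
import random

def preprocess(sample):
    # Placeholder for enhancement, normalization, smoothing, filtration, etc. (step 3)
    return sample

def run_fmo_method(method, samples):
    # Placeholder validation returning a performance score in [0, 1] (step 6).
    return random.random()

def ffmko(problems, disease_classes, fmo_methods, threshold=0.5):
    reports = []
    for problem in problems:                                   # step 1
        samples = [preprocess(s) for s in problem]             # step 3
        while True:
            scores = {}
            for method in fmo_methods:                         # step 4
                subset = random.sample(samples, k=max(1, len(samples) // 2))  # step 5
                scores[method] = run_fmo_method(method, subset)               # step 6
            best_method, best_score = max(scores.items(), key=lambda kv: kv[1])  # step 7
            if best_score >= threshold:                        # step 8: accept and report
                reports.append({"method": best_method,
                                "score": best_score,
                                "classes": disease_classes})
                break                                          # otherwise revisit step 4
    return reports

reports = ffmko(problems=[["leaf_1.jpg", "leaf_2.jpg", "leaf_3.jpg"]],
                disease_classes=["SDS stage 1", "SDS stage 2", "SDS stage 3"],
                fmo_methods=["FFMKO", "baseline"])
```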

Figure 2

Proposed research methodology.

Let S be the sample space containing the random data points and X be the random variable. We define the mapping \(\text {X}:\text {S} \rightarrow \text {R}\), where S is the set of all outcomes of the random experiment and X is the function that assigns a real number to each outcome in its domain S. X(t) denotes the outcome of an event in the random experiment, and the outcomes of the random experiment are as follows: \(\text {X}(\text {t}_{1})= \text {Disease detection at the first sample}\) for \(1 \le \text {t}_{1} \le \text {J}\), \(\text {X}(\text {t}_{2})= \text {Disease detection at the second sample}\) for \(\text {J}+1\le \text {t}_{2}\le \text {K}\), \(\text {X}(\text {t}_{3})= \text {Disease detection at the third sample}\) for \(\text {K}+1\le \text {t}_{3}\le \text {L}\), and \(\text {X}(\text {t}_{4})= \text {Disease detection at the fourth sample}\) for \(\text {L}+1\le \text {t}_{4}\le \text {M}\). Here \(\text {t}_{i}\) indexes the date palm leaves with detection at every stage of the random experiment, \(\text {t}_{i}\,\in \text {N}\), with J = 335, K = 660, L = 878, and M = 1220. Furthermore, simple random sampling is used for the selection of known date palms, with the mean given by Eq. (1), where ME is the margin of error, alpha estimates the confidence level, z is the standard score, N is the actual number of leaves, and \(\sigma\) is the variance of the leaf images. Using Eq. (1), 3293 leaf images are obtained for database development.

$$\begin{aligned} N=\left\{ z^2*\sigma ^2* \left[ \frac{N}{N-1} \right] \right\} \Bigg /\left\{ ME^2+\left[ Z^2*\frac{\sigma ^2}{N-1}\right] \right\} \end{aligned}$$

(1)
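For illustration, the sample-size computation of Eq. (1) can be coded directly. The parameter values used in the example call below (z-score for roughly 95% confidence, variance, margin of error, population size) are assumed for demonstration and are not taken from the paper.

```python
import math

def required_sample_size(z, sigma, margin_of_error, population):
    """Evaluate Eq. (1) for the number of leaf images to sample."""
    correction = population / (population - 1)
    numerator = z**2 * sigma**2 * correction
    denominator = margin_of_error**2 + (z**2 * sigma**2) / (population - 1)
    return math.ceil(numerator / denominator)

# Illustrative call with assumed inputs.
print(required_sample_size(z=1.96, sigma=0.5, margin_of_error=0.01, population=5000))
```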

We adopt the image resizing proposed by Dong30. With the selected algorithm we use an operating-cost method for image resizing built on multi-operator resizing techniques such as seam carving, image scaling, and cropping. Unwanted data in the input image, such as extra width and height outside the RoI, are eliminated using the lowest-cost operator. During resizing we formulate the operator cost function as a combination of the image energy and the dominant color descriptor and calculate the image parameters as defined in Eq. (2). The detailed resizing process is described in30, while Fig. 3 shows an original image along with its resized versions.

$$\begin{aligned} E(s)=\frac{1}{N_{s}}\sum _{i=1}^{N_{s}} e(S_{i}) + \max _{1\le i\le N_{s}} e(S_{i}) \end{aligned}$$

(2)

Where ‘\(\text {s}_{i}\)’ represents one pixel in the operation field s, \(\text {N}_{s}= ||\text {s}||\) is the total number of pixels in s.
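A minimal sketch of the operator cost of Eq. (2) is shown below, assuming the per-pixel energy e(s\(_i\)) is approximated by the gradient magnitude of a gray-scale region; the dominant color descriptor that the paper combines with the energy is omitted here.

```python
import numpy as np

def pixel_energy(gray):
    """Per-pixel energy e(s_i), approximated here by the gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def operator_cost(region):
    """Eq. (2): mean pixel energy of the region s plus its maximum pixel energy."""
    e = pixel_energy(region)
    return e.mean() + e.max()

# Example: cost of a candidate 3-pixel-wide vertical strip of an assumed
# 8-bit gray-scale image (e.g. a seam-carving candidate).
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(operator_cost(img[:, 10:13]))
```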

Figure 3

Original and resized images of a date palm leaf.

During the image acquisition process, we use different digital devices such as mobile phone cameras and DSLR cameras. Images captured with a mobile phone camera carry considerable noise, so we prefer a DSLR camera for image acquisition, which introduces comparatively little noise. Even though the captured images are less noisy, the remaining noise still produces granular effects and spots that are treated as part of the image data, whereas image segmentation and feature extraction require an essentially noise-free image. In this research we adopted a fuzzy filter to reduce the amount of noise in the input images; the detailed process of the adopted filter is discussed in31. The input image consists of an N × M matrix (a two-dimensional array) of elements, where each element represents the brightness level and color information. The brightness value of a pixel is replaced by a new value d (0–255). Noise appears at each pixel (i, j) with probability p. Let {x\(_{i,j}\)} be a distorted image. Then

$$\begin{aligned} x_{i,j} = \left\{ \begin{array}{ll} d, &{} \text {with probability } p\\ s_{i,j}, &{} \text {with probability } (1-p) \end{array}\right. \end{aligned}$$

(3)

Here \(\text {x}_{i,j}\) is the random variable, which can take different values according to the assigned probabilities: d is one possible value and \(\text {s}_{i,j}\), the original output brightness of pixel (i, j), is the other. The probability p associated with the value d lies between 0 and 1 and indicates the likelihood of \(\text {x}_{i,j}\) taking the value d, while the complementary probability (1 − p) is the likelihood of \(\text {x}_{i,j}\) taking the value \(\text {s}_{i,j}\); since there are only two possible outcomes, p and (1 − p) sum to 1. Hence, whether black or white brightness values are added depends on the value of d, i.e. d = 0 or d = 255.
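A minimal sketch of the impulse-noise model of Eq. (3) is given below: each pixel is replaced by the value d (0 or 255) with probability p and keeps its original brightness otherwise. Function and parameter names are illustrative.

```python
import numpy as np

def add_impulse_noise(image, p=0.05, rng=None):
    """Eq. (3): each pixel becomes d (0 or 255) with probability p and keeps
    its original brightness s_{i,j} with probability (1 - p)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    replace = rng.random(image.shape) < p                  # pixels hit by noise
    d = rng.choice(np.array([0, 255], dtype=image.dtype), size=image.shape)
    noisy[replace] = d[replace]
    return noisy

# Example on an assumed 8-bit gray-scale leaf image.
img = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)
noisy = add_impulse_noise(img, p=0.1)
```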

$$\begin{aligned} g(x,y)=f(x,y)+\eta (x,y) \end{aligned}$$

(4)

Where f(x, y) is the input image, g(x, y) is the noisy image, and \(\eta\)(x, y) is additive, independent noise with a Gaussian or other probability density function.

Moreover, we also build a dedicated class based on fuzzy logic filters to remove the remaining noise from the input image and make it effectively noise-free. The class calculates the average value of the central pixel with reference to the neighboring pixel values. This structure establishes a perception of the boundary around the key object and its color components, which therefore remain undistorted during filtering. The filter class calculates the 2-D distance between the color components of the input image, which enables it to differentiate between the color values of key objects and noise; this is its primary function. Figure 4a shows the original image of date palm leaves with noise and Fig. 4b shows the noise-free image.
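Since the filter class itself is not listed, the sketch below only approximates the described behavior: each pixel is replaced by an average of its 3 × 3 neighborhood, weighted by the 2-D color distance to the centre pixel so that strong edges and key-object colors are preserved. The class name and the exponential weighting are assumptions, not the fuzzy filter of31.

```python
import numpy as np

class ColorDistanceFilter:
    """Sketch of a neighborhood-averaging filter: every pixel is replaced by a
    weighted average of its 3 x 3 neighborhood, where neighbors far from the
    centre pixel in RGB space get small weights, so object boundaries and
    key-object colors stay largely undistorted."""

    def __init__(self, color_scale=30.0):
        self.color_scale = color_scale   # assumed softness of the fuzzy weighting

    def apply(self, image):
        img = image.astype(float)
        out = img.copy()
        height, width, channels = img.shape
        for i in range(1, height - 1):
            for j in range(1, width - 1):
                centre = img[i, j]
                window = img[i - 1:i + 2, j - 1:j + 2].reshape(-1, channels)
                dist = np.linalg.norm(window - centre, axis=1)   # 2-D color distance
                weights = np.exp(-dist / self.color_scale)       # fuzzy-style membership
                out[i, j] = (weights[:, None] * window).sum(axis=0) / weights.sum()
        return out.astype(image.dtype)

# Usage on an assumed RGB leaf image:
# filtered = ColorDistanceFilter().apply(noisy_rgb_image)
```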

Figure 4

Noisy and noise-free images.

After the necessary pre-processing operations, we calculate the image histogram to describe the color components of the input image. Using the Fuzzy Color Histogram, we count the occurrences of each color component in the input image. Given a color space, the color histogram is represented as \(\textit{H(I)} = [\textit{h}_{1},\,\textit{h}_{2},\, \textit{h}_{3},\,\ldots ,\textit{h}_{n}]\), where I is the input image and \(\textit{h}_{i} = \textit{N}_{i}/\textit{N}\) is the probability that a particular pixel of I falls in the \(\textit{i}\)th color bin. The total probability is defined as:

$$\begin{aligned} h_{i}=\sum _{j=1}^{N} P_{i|j}\,P_{j} =\frac{1}{N}\sum _{j=1}^{N} P_{i|j} \end{aligned}$$

(5)

The probability of the \(\textit{j}\)th pixel in image I is \(\textit{P}_{j}\) = 1/N, whereas \(\textit{P}_{i|j}\) is the conditional probability that the \(\textit{j}\)th pixel belongs to the \(\textit{i}\)th color bin. The graphs of the individual color components generated in this histogram step are stored in a database for later use. Figure 5 depicts the generated graphs of the individual color components.
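A crisp approximation of the color histogram H(I) with h\(_{i}\) = N\(_{i}\)/N is sketched below; a true fuzzy color histogram would spread each pixel's contribution P\(_{i|j}\) over several bins, which is omitted here. The bin count and function names are illustrative.

```python
import numpy as np

def color_histogram(image, bins_per_channel=8):
    """H(I) = [h_1, ..., h_n] with h_i = N_i / N (crisp bin assignment)."""
    pixels = image.reshape(-1, 3)
    n_pixels = pixels.shape[0]
    # Quantize every channel into bins_per_channel levels, then build one joint bin index.
    q = (pixels.astype(int) * bins_per_channel) // 256
    bin_index = (q[:, 0] * bins_per_channel + q[:, 1]) * bins_per_channel + q[:, 2]
    counts = np.bincount(bin_index, minlength=bins_per_channel**3)
    return counts / n_pixels          # probabilities; they sum to 1

# Example on an assumed 8-bit RGB leaf image.
img = np.random.randint(0, 256, size=(50, 50, 3), dtype=np.uint8)
h = color_histogram(img)
print(h.sum())   # 1.0
```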

Figure 5

Histogram representation of date palm leaf image.

In the next step, we use image thresholding to create an image without background for segmentation, so that the focus is on the key objective of the image, i.e. the RoI, while the unwanted background data are deleted. For image thresholding, we convert the input image into its equivalent gray-scale image and apply the Otsu thresholding algorithm, which automatically converts the gray-level image to binary by calculating the optimum threshold that separates the two classes32. The chosen threshold minimizes the intra-class variance (combined spread), or equivalently maximizes the inter-class variance; in other words, it is the threshold that best separates the two classes. Gray-scale images represent each pixel’s intensity as a single value ranging from 0 (black) to 255 (white); the conversion is performed to simplify thresholding and to work with a single channel instead of multiple color channels. The Otsu algorithm determines an optimal integer threshold T that lies within the gray-scale range 0 to K − 1, where K is the number of possible gray levels of image f (typically 256 for an 8-bit gray-scale image). Thresholding then works like a simple comparison: each pixel value in f is compared to the threshold T; if the pixel value is greater than or equal to T it is classified as foreground, otherwise it is considered part of the background, and the corresponding pixel of the output binary image is set from this comparison. The algorithmic steps used for image thresholding are:

1. Compute the histogram and the probabilities of each intensity level.

2. Set up the initial class weight \(\omega _{i}\) and class mean \(\mu _{i}\).

3. Step through all possible thresholds t = 1, …, maximum intensity:

    a. Update the weight \(\omega _{i}\) and the mean \(\mu _{i}\).

    b. Compute the between-class variance \(\sigma _b^2 (t)\).

4. The desired threshold corresponds to the maximum of \(\sigma _b^2 (t)\).

5. Compute the two maxima of \(\sigma _b^2 (t)\) and their two corresponding thresholds: \(\sigma _{b1}^2 (t)\) is the greater maximum and \(\sigma _{b2}^2 (t)\) is the greater or equal maximum; the desired threshold is then their average:

$$\begin{aligned} desired\,threshold=\frac{threshold_1 + threshold_2}{2} \end{aligned}$$

(6)
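A minimal NumPy sketch of steps 1–4 (single-threshold Otsu, maximizing the between-class variance \(\sigma _b^2 (t)\)) is given below; the two-maxima averaging of step 5 and Eq. (6) is not included. This is a generic Otsu implementation, not the authors' code.

```python
import numpy as np

def otsu_threshold(gray):
    """Steps 1-4: pick the threshold t that maximizes the between-class
    variance sigma_b^2(t) of an 8-bit gray-scale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()                       # step 1: intensity probabilities
    omega = np.cumsum(prob)                        # class weight w(t)
    mu = np.cumsum(prob * np.arange(256))          # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):   # steps 2-3: all thresholds
        sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)
    return int(np.argmax(sigma_b2))                # step 4: maximum sigma_b^2(t)

# Example: binarize an assumed 8-bit gray-scale leaf image.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
t = otsu_threshold(img)
binary = (img >= t).astype(np.uint8) * 255          # foreground where pixel >= T
```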

The feature extraction and image segmentation processes require a background-free image. Figure 6 shows an output image of the thresholding process, while further details about the gray-scale conversion and image thresholding are discussed in33.

Figure 6

Image background removal.

Right after background removal using the thresholding technique, the next step is color thresholding, in which the individual color components of the image (R, G, and B) are extracted separately. An SDS-infected leaf turns pale, yellowish, and brownish, so apart from the green (healthy) part of the leaf, the remaining area represents the infected region. Hence, to calculate the infected part of the leaf, the green-colored area is deleted from the image with the color thresholding technique while the other two color values are retained. Calculating the other two color components of the leaf defines the infected part along with the share of the leaf it occupies. Subsequently, to measure the disease stage, we extract the yellowish part of the image; this extraction of the yellow and dried parts of the leaf helps in calculating the infected percentage, and that information feeds the decision-making process regarding the treatment and cure of the disease. Figure 7 depicts this part of the process, which shows the division and removal of the color features.
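The exact color rules are not spelled out above, so the sketch below assumes a simple heuristic: a pixel whose green channel dominates its red and blue channels is treated as healthy, and the remaining leaf pixels are counted as infected. The function and the rule are illustrative only.

```python
import numpy as np

def infected_fraction(rgb, leaf_mask=None):
    """Delete the green (healthy) area and return the share of the leaf that
    remains, i.e. the yellowish/brownish infected part."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    healthy = (g > r) & (g > b)                       # assumed rule: green dominates
    leaf = np.ones(healthy.shape, dtype=bool) if leaf_mask is None else leaf_mask
    infected = leaf & ~healthy
    return infected.sum() / max(int(leaf.sum()), 1)

# Example on an assumed background-free RGB leaf image.
img = np.random.randint(0, 256, size=(80, 80, 3), dtype=np.uint8)
print(f"infected share: {infected_fraction(img):.2%}")
```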

Figure 7

Thresholded date palm leaf image.

In the feature extraction phase, we extract the color and texture features of the input image. During color feature extraction, mainly the color brightness and image intensity level are extracted; the values of the color features support decision-making regarding the disease stage. Color features are also extracted during the color thresholding process, but the extraction here is somewhat different: each pixel of the input image is analyzed and its RGB color values are extracted. In Table 1 each box represents a single pixel of the date palm leaf image consisting of an RGB color component, showing that the R, G, and B values of each component vary with the color complexion.

Table 1 Color component values of the disease-infected part of the leaf image.

Irrespective of the distinction between primary and secondary colors, we extract and calculate the individual color values of each component of the pixel. This step is applied to every pixel of the input image. Separate variables are created for each color component, as shown below, through which we calculate the RGB ratio.

$$\begin{aligned} \text {R}{:}\text {x}1,\,\text {G}{:}\text {x}2,\,\text {B}{:}\text {x}3 \end{aligned}$$

The value of each color component is aggregated using the average formula:

$$\begin{aligned} (x1+x2+x3) / n(x) \end{aligned}$$

(7)

Where x1, x2, and x3 represent the RED, GREEN, and BLUE components of a particular pixel, respectively, and n(x) is the total number of color components of the selected pixel. The average formula is applied to all the selected images of each stage, and the characteristic color ranges vary from stage to stage as shown below:

$$\begin{aligned} \text {Stage-1:}\,151.6863 – 157.2941 \\ \text {Stage-2:}\,158.3271 – 164.9572 \\ \text {Stage-3:}\,165.3928 – 171.3216 \end{aligned}$$
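A small sketch applying Eq. (7) per pixel, averaging over the image, and mapping the result onto the stage ranges above is shown below. The stage boundaries are copied from the text; everything else (function names, the example image) is illustrative.

```python
import numpy as np

# Stage boundaries reported above (averages of the per-pixel (R + G + B) / 3 values).
STAGE_RANGES = {
    "Stage-1": (151.6863, 157.2941),
    "Stage-2": (158.3271, 164.9572),
    "Stage-3": (165.3928, 171.3216),
}

def mean_color_value(rgb):
    """Eq. (7): (x1 + x2 + x3) / n(x) for every pixel, averaged over the image."""
    per_pixel = rgb.astype(float).mean(axis=2)     # (R + G + B) / 3
    return float(per_pixel.mean())

def classify_stage(rgb):
    value = mean_color_value(rgb)
    for stage, (low, high) in STAGE_RANGES.items():
        if low <= value <= high:
            return stage, value
    return "out of range", value

# Example on an assumed RGB image of a diseased leaf.
img = np.random.randint(140, 180, size=(60, 60, 3), dtype=np.uint8)
print(classify_stage(img))
```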

After the extraction of the color features, the generated values are stored in a database using variables. Texture feature extraction is performed next using the Local Binary Pattern (LBP) method. Several LBP variants, such as the Extended Local Binary Pattern (ELBP), Completed Local Binary Pattern (CLBP), Rotation-Invariant Local Binary Pattern (RILBP), and Circular Local Binary Pattern (CLBP), were applied to obtain the desired texture features but failed to do so, since these state-of-the-art variants of the traditional LBP suffer from drawbacks such as increased computational complexity, higher memory requirements, sensitivity to noise, and a lack of standardized parameters. Hence, based on the requirements, a simple, conventional local binary pattern algorithm is implemented in this research work. In texture feature extraction we capture the spatial variation of each pixel's intensity: the input image is first converted into its equivalent gray-scale image and the value of each pixel of the converted image is extracted. From the extracted pixel values, a three-by-three matrix of pixels of the selected image is formed, to which LBP is applied to extract the texture values.

(Figure a: 3 × 3 gray-level pixel matrix extracted from the image.)

The image texture values are calculated using the LBP formula defined below:

$$\begin{aligned} f(LBP) = \sum _{n=0}^{7} s(i_n - i_c)\,2^n \end{aligned}$$

(8)

Where,

$$\begin{aligned} \text {i}_c= & {} \text {Centre Pixel value }\\ \text {i}_n= & {} \text {Neighbor Pixel value} \end{aligned}$$

Accordingly:

(Figure b: the 3 × 3 pixel window labelled with the centre pixel i\(_c\) and its neighbors i\(_0\)–i\(_7\).)

The value of \(s(i_n - i_c)\,2^n\) is calculated using the step function \(s(z)\), where \(s(z) = 1\) for \(z \ge 0\) and \(s(z) = 0\) for \(z < 0\).

The values from i\(_0\) to i\(_7\) are calculated as follows.

For n = 0,

$$\begin{aligned} s(i_0 - i_c)\,2^0 = s(42-41)\,2^0 = s(1)\cdot 1 \end{aligned}$$

(9)

Proceeding through the remaining neighboring pixels i\(_1\), i\(_2\), …, i\(_7\), the values of \(s(i_n - i_c)\,2^n\) for each neighboring pixel are:

$$\begin{aligned} s(i_1-i_c)\,2^1= & {} s(40-41)\,2^1 = s(-1)\cdot 2 \\ s(i_2-i_c)\,2^2= & {} s(38-41)\,2^2 = s(-3)\cdot 4 \\ s(i_3-i_c)\,2^3= & {} s(42-41)\,2^3 = s(1)\cdot 8 \\ s(i_4-i_c)\,2^4= & {} s(41-41)\,2^4 = s(0)\cdot 16 \\ s(i_5-i_c)\,2^5= & {} s(41-41)\,2^5 = s(0)\cdot 32 \\ s(i_6-i_c)\,2^6= & {} s(39-41)\,2^6 = s(-2)\cdot 64 \\ s(i_7-i_c)\,2^7= & {} s(40-41)\,2^7 = s(-1)\cdot 128 \end{aligned}$$

and as a result, the equivalent conversion matrix is:

(Figure c: equivalent conversion matrix of the \(s(i_n - i_c)\,2^n\) values.)

By applying:

$$\begin{aligned} s(z) = \left\{ \begin{array}{ll} 1, &{} z \ge 0\\ 0, &{} z < 0 \end{array}\right. \end{aligned}$$

(10)

We get an equivalent binary matrix for all the neighboring pixels (except the central pixel value i.e. i\(_c\)).

(Figure d: equivalent binary matrix of the neighboring pixels.)

The decimal number obtained from the generated binary matrix becomes the new central pixel value of the equivalent matrix.

(Figure e: equivalent matrix with the decimal value generated for the central pixel.)

In the next step, the generated binary numbers are converted into a decimal number using the standard binary-to-decimal conversion:

(Figure f: binary pattern used for the binary-to-decimal conversion.)

$$\begin{aligned} & = 0\cdot 2^0 + 0\cdot 2^1 + 1\cdot 2^2 + 0\cdot 2^3 + 1\cdot 2^4 + 0\cdot 2^5 + 0\cdot 2^6 + 1\cdot 2^7 \\ & = 4 + 16 + 128 \\ & = 148\,\text {(LBP generated code)} \end{aligned}$$

The generated value is then placed at the central pixel of the output matrix:

(Figure g: output matrix with the generated LBP code at the central pixel.)
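To connect the worked example with Eq. (8), a minimal implementation of the conventional 3 × 3 LBP is sketched below. The clockwise neighbor ordering is an assumption (the actual ordering used in the figures above may differ), and the code is illustrative rather than the authors' implementation.

```python
import numpy as np

# Neighbor offsets for i_0 ... i_7. The exact ordering used in the worked
# example above is shown only in the figures, so this clockwise order is an assumption.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(window):
    """Eq. (8): f(LBP) = sum_n s(i_n - i_c) * 2^n for one 3 x 3 window."""
    ic = int(window[1, 1])
    code = 0
    for n, (di, dj) in enumerate(OFFSETS):
        if int(window[1 + di, 1 + dj]) >= ic:      # s(z) = 1 if z >= 0 else 0
            code += 2 ** n
    return code

def lbp_image(gray):
    """LBP code for every interior pixel of a gray-scale image."""
    out = np.zeros_like(gray, dtype=np.uint8)
    for i in range(1, gray.shape[0] - 1):
        for j in range(1, gray.shape[1] - 1):
            out[i, j] = lbp_code(gray[i - 1:i + 2, j - 1:j + 2])
    return out

# Example on an assumed 8-bit gray-scale leaf image.
img = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
texture = lbp_image(img)
```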


