

Development of Sand Volume Estimator for Under-Struck Excavator Bucket Using Single Camera
International Journal of Fuzzy Logic and Intelligent Systems 2018;18(4):254-262
Published online December 31, 2018
© 2018 Korean Institute of Intelligent Systems.

In-Hwan Kim, Dong-Woo Lim, and Jin-Woo Jung

Department of Computer Science and Engineering, Dongguk University, Seoul, Korea
Correspondence to: Jin-Woo Jung
Received December 20, 2017; Revised November 21, 2018; Accepted December 15, 2018.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

To support intelligent construction environments, it is important to measure the workload of an excavator in real time. However, previous approaches are either expensive to implement or cannot be processed in real time. In this paper, an image-based method is proposed to estimate the workload of the excavator bucket, specifically for the under-struck state, by modeling the shape of the bucket and the shape of the sand inside it geometrically. By analyzing the relation between a single camera image and the actual bucket geometry, the volume of sand, which is proportional to the excavator workload, is estimated. The experimental results show 93.5% accuracy even when part of the sand region is not visible.

Keywords : Sand volume estimator, Excavator bucket modeling, Single camera
1. Introduction

It is not easy to measure in real time the amount of sand that an excavator loads in a working environment [1, 2]. The previous method for measuring this workload in a real environment was to measure the change in the weight of the dump truck. To measure this weight change, the truck must be stopped in a designated area. However, this method is inefficient in terms of time because stopping the truck decreases the working efficiency. This paper proposes a new method that can automatically estimate the volume of sand in a bucket by capturing an image from a single camera on the excavator and analyzing the image, focusing specifically on the under-struck state.

2. Background

Depending on the amount of sand piled up in the bucket of an excavator, the states can be divided as in Figure 1. The struck state is when the volume of the bucket and that of the sand are equal. The under-struck state is when the volume of sand is less than in the struck state, and the heaped state is when it is larger. This paper focuses on the under-struck state. The purpose of this paper is to estimate the amount of sand from the bucket image based on single-camera image processing [3, 4].

This paper makes assumptions A1–A4 to estimate the volume in the under-struck state.

  1. A1: The shape of the bucket is modeled as a combination of a half cylinder and a right triangular prism.

  2. A2: The bucket diameter 2a, the bucket width b, and the bucket teeth length c are given by the excavator company.

  3. A3: When the image is captured, the upper side of the bucket is horizontal to the ground.

  4. A4: In Figure 2, the pixel locations of the points (P1 to P8) and of the highest line of the sand region (U) can be obtained by image processing [5, 6].

3. Single Camera-based Sand Volume Estimation Algorithm for Under-Struck State

The under-struck state means that the highest line of sand, U, is below the bucket line AB¯ (Figure 3). This state is divided into three cases in order to reduce the complexity of the formulas (Figure 4). The conditions dividing the under-struck state are as follows:

  1. L1: line segment AB¯

  2. L2: line segment parallel to AB¯ passing through O

  3. L3: line segment parallel to AB¯ passing through D

  4. If (U < L3) state ← Under-Struck-Case1

  5. Else if (U < L2) state ← Under-Struck-Case2

  6. Else if (U < L1) state ← Under-Struck-Case3
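The state selection above can be sketched as a small helper. The coordinate convention is an assumption: U, L1, L2, and L3 are taken as heights along a common vertical axis (increasing upward, so L1 > L2 > L3), since the text only specifies the comparisons.

```python
def classify_understruck(u, l1, l2, l3):
    """Classify the under-struck case from the height u of the sand line U.

    l1, l2, l3 are the heights of the reference lines L1 (through A and B),
    L2 (through O), and L3 (through D), with l1 > l2 > l3.
    """
    if u < l3:
        return "Under-Struck-Case1"
    elif u < l2:
        return "Under-Struck-Case2"
    elif u < l1:
        return "Under-Struck-Case3"
    return "not under-struck"
```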


3.1 Determination of Under-Struck State Parameters

Specific parameters are required to estimate the volume in the under-struck state. Some parameters need to be known by default, such as lw, lw′, lh, and lh′. These basic parameters can be obtained from the points (P1–P8, U) through image processing.

In Figure 5,


The parameters lw, lw′, lh, and lh′ are measured in pixels. To calculate the volume, a metric unit such as mm is needed instead of a pixel count. This conversion is done by multiplying by the actual pixel size of the image sensor.
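The pixel-to-millimetre conversion is a single multiplication. The sketch below assumes the sensor described in Section 4 (21.12 mm wide) has a horizontal resolution of 1920 pixels; the resolution is an assumption, not stated in the text.

```python
SENSOR_WIDTH_MM = 21.12   # sensor width given in Section 4
SENSOR_WIDTH_PX = 1920    # assumed horizontal resolution

# Physical size of one pixel on the sensor (0.011 mm here).
PIXEL_PITCH_MM = SENSOR_WIDTH_MM / SENSOR_WIDTH_PX

def px_to_mm(length_px):
    """Convert a length counted in image pixels to millimetres on the sensor."""
    return length_px * PIXEL_PITCH_MM
```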

3.2 Calculation of Angle between the Camera Centre Line ZF¯ and the Bucket Line AB¯

The picture cannot always be taken at the same angle, and this changes the measured parameter values of the bucket. To solve this problem, this paper uses the angle between the camera centre line and the bucket, denoted θC. This section describes the process of calculating θC.

In Figure 6, let us assume that


By the similarity between ΔEDD′ and ΔEC′F,

2a cos θC : r1 = d : (r1 + r2) = d : (1 + k)r1,   k = d/(2a cos θC) − 1.

In Figure 6,

L1 = 2a sin θC − k·r1 = 2a sin θC − (d/(2a cos θC) − 1)·r1.

By the similarity between ΔFZY and ΔFC′B,

d : f = L1 : l2,   l2 = (f/d)·[2a sin θC − (d/(2a cos θC) − 1)·r1],
d : f = (1 + k)r1 : l1 = d·r1/(2a cos θC) : l1,   l1 = f·r1/(2a cos θC).

lh is the height of the bucket in Figure 5.

lh = l1 + l2 = (f/d)(2a sin θC + r1),   r1 = d·lh/f − 2a sin θC,
l1 = d·lh/(2a cos θC) − f tan θC,   l2 = lh + f tan θC − d·lh/(2a cos θC).

l1 and l2 are determined by the centerline of the camera image. Let us assume that the ratio of l2 to l1 is m:

m = l2/l1.
By Eq. (15) and Eq. (19),

lh = 2a·f(m + 1) sin θC / ((m + 1)d − 2a cos θC).

By Figure 5 and Figure 6, the distance d satisfies d : f = b : lw, so

d = b·f/lw.
By Eq. (20) and Eq. (22),

lh = 2a·lw·f(m + 1) sin θC / ((m + 1)b·f − 2a·lw cos θC),

θC = cos⁻¹( [f(m + 1)·√(4a²lw²(lh² + (m + 1)²f²) − b²lh²(m + 1)²f²) − lh²(m + 1)b·f] / (2a·lw(lh² + f²(m + 1)²)) ).
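Since the closed-form inverse above is easy to mistranscribe, one robust alternative is to recover θC numerically from the forward relation for lh. The sketch below does this with plain bisection; all parameter values in the usage are illustrative assumptions, and the bracket is chosen where lh(θC) is monotonic.

```python
import math

def lh_from_theta(theta_c, a, b, f, lw, m):
    """Forward relation: image height lh of the bucket as a function of the
    camera angle theta_C (all lengths in mm, lw already converted to mm)."""
    num = 2 * a * lw * f * (m + 1) * math.sin(theta_c)
    den = (m + 1) * b * f - 2 * a * lw * math.cos(theta_c)
    return num / den

def theta_c_from_lh(lh, a, b, f, lw, m, lo=1e-6, hi=1.2):
    """Invert lh_from_theta by bisection on [lo, hi]; valid when lh(theta)
    is monotonic on the bracket and the target value lies inside it."""
    g = lambda t: lh_from_theta(t, a, b, f, lw, m) - lh
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```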

3.3 Calculation of Angle θB between AB¯ and BD¯

In Figure 3, θB is a fixed angle determined by the bucket geometry, so it can be obtained from Eq. (25).


3.4 Calculation of Height h between the Bucket line AB¯ and DH¯

θy can be obtained by using θC and θB.


In Figure 7, let us assume that


The value k′ can be obtained by using the similarity of ΔEB′H′ and ΔEC′F.

h cos θy : r1 = d : (r1 + r2) = d : (1 + k′)r1,   k′ = d/(h cos θy) − 1.

By the similarity between ΔFEC′ and ΔFYZ,

d : f = d·r1/(h cos θy) : l2,   l2 = f·r1/(h cos θy).

By the similarity between ΔFHC′ and ΔFXA,

d : f = L1 : l1,   L1 = k′·r1 − h sin θy = (d/(h cos θy) − 1)·r1 − h sin θy,
l1 = (f/d)·[(d/(h cos θy) − 1)·r1 − h sin θy].

In Figure 5,

lh = l2 − l1 = (f/d)(h sin θy + r1),   r1 = d·lh/f − h sin θy.

By Eq. (34) and Eq. (36),

l1 = d·lh/(h cos θy) − lh − f tan θy.

Let us again assume that the ratio of l2 to l1 is m:

m = l2/l1.
In Figure 5, the distance d again satisfies d : f = b : lw, so

d = b·f/lw.
By Eq. (37), Eq. (40), and Eq. (42),

lh = lw(m − 1)f·h sin θy / ((m − 1)b·f − lw·m·h cos θy),
h = lh(m − 1)b·f / (lw(m − 1)f sin θy + lh·lw·m cos θy),
hl = h − ho,   ho = a sin θB,
hl = lh(m − 1)b·f / (lw(m − 1)f sin θy + lh·lw·m cos θy) − a sin θB.
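These relations transcribe directly. Because the formula for h is an algebraic rearrangement of the formula for lh, generating lh from an assumed h and then inverting must return the same h; the parameter values used below (lw, b, f, m, θy, a, θB) are illustrative assumptions.

```python
import math

def lh_from_h(h, lw, b, f, m, theta_y):
    """Image height lh of the sand line as a function of the real height h."""
    num = lw * (m - 1) * f * h * math.sin(theta_y)
    den = (m - 1) * b * f - lw * m * h * math.cos(theta_y)
    return num / den

def h_from_lh(lh, lw, b, f, m, theta_y):
    """Height h between the bucket line AB and the sand line, from lh."""
    num = lh * (m - 1) * b * f
    den = lw * (m - 1) * f * math.sin(theta_y) + lh * lw * m * math.cos(theta_y)
    return num / den

def hl_from_h(h, a, theta_b):
    """hl = h - ho, with ho = a sin(theta_B)."""
    return h - a * math.sin(theta_b)
```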

3.5 Estimation of Under-Struck State Volume

3.5.1 Estimation of sand volume for Under-struck-case 1 condition

The volume can be estimated from sector OXY and ΔOYX. Here, V(x) denotes the volume made by extruding the area x along the bucket width b.

V = V(sector OXY) − V(ΔOYX),   θz = cos⁻¹(hl/a).

By Eq. (49)

sector OXY = a² cos⁻¹(hl/a),   XY¯ = 2·√(a² − hl²),   ΔOYX = hl·√(a² − hl²),
V = a²b cos⁻¹(hl/a) − b·hl·√(a² − hl²).
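This case-1 formula transcribes to one line. Two limiting values make a useful sanity check: at hl = 0 the segment is the full half circle, so V = πa²b/2, and at hl = a the segment vanishes.

```python
import math

def v_case1(a, b, hl):
    """Under-struck case 1: area of the circular segment below the sand
    line, extruded along the bucket width b."""
    return a * a * b * math.acos(hl / a) - b * hl * math.sqrt(a * a - hl * hl)
```

For example, with a = 1 and b = 2, v_case1(1, 2, 0) equals π.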
3.5.2 Estimation of sand volume for under-struck-case 2 condition

In under-struck case 2, the volume is obtained by subtracting sector BOY and ΔYOC from the semicircle BDO and adding ΔXDC. See Figure 9.

V=V(semicircle)-V(sector BOY)-V(ΔYOC)+V(ΔXDC).

First, to obtain sector BOY, we need to find u1 and u2.

u1 = θB,   u2 = π/2 − cos⁻¹(hl/a),
sector BOY = (a²/2)(θB + π/2 − cos⁻¹(hl/a)).

To find the area of ΔXDC, the base length and height are needed. In Figure 10,

2a cos θB : (ho + a sin θB) = v2 : (a sin θB − hl),
2a : (ho + a sin θB) = v3 : (a sin θB − hl).

By Eq. (46),

v2 = ((a sin θB − hl)/sin θB)·cos θB,   v3 = a − hl/sin θB,
ΔXDC = (1/2) tan θB·((a sin θB − hl)/sin θB)².

In Figure 10,


In Figure 9 and Figure 11,

ΔYOC = (1/2)·hl·(√(a² − hl²) + hl cos θB/sin θB).

The total volume is obtained by combining these areas, each extruded along the bucket width b.

V = πa²b/2 − (a²b/2)(θB + π/2 − cos⁻¹(hl/a)) − (b·hl/2)(√(a² − hl²) + hl cos θB/sin θB) + (b/2) tan θB·((a sin θB − hl)/sin θB)².
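A consistency check on the case-2 formula: at the case-1/case-2 boundary hl = a sin θB (where ΔXDC degenerates to zero area), it should return the same volume as the case-1 formula. The sketch below verifies this numerically for illustrative values of a, b, and θB.

```python
import math

def v_case1(a, b, hl):
    # Case 1 (Section 3.5.1): circular segment extruded along width b.
    return a * a * b * math.acos(hl / a) - b * hl * math.sqrt(a * a - hl * hl)

def v_case2(a, b, hl, theta_b):
    # Case 2 (Section 3.5.2): semicircle minus sector BOY and triangle YOC,
    # plus triangle XDC.
    semi = math.pi * a * a * b / 2
    sector = (a * a * b / 2) * (theta_b + math.pi / 2 - math.acos(hl / a))
    tri_yoc = (b * hl / 2) * (math.sqrt(a * a - hl * hl)
                              + hl * math.cos(theta_b) / math.sin(theta_b))
    tri_xdc = (b / 2) * math.tan(theta_b) * (
        (a * math.sin(theta_b) - hl) / math.sin(theta_b)) ** 2
    return semi - sector - tri_yoc + tri_xdc
```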
3.5.3 Estimation of sand volume for under-struck-case 3 condition

The volume in under-struck case 3 is obtained by subtracting sector BYO from the semicircle and adding ΔY′YO and ΔXY′D. See Figure 12.

V = V(semicircle) − V(sector BYO) + V(ΔY′YO) + V(ΔXY′D).

In Figure 13,

2a : 2a sin θB = (2a − v5) : (2a sin θB − h),
2a cos θB : 2a sin θB = v6 : (2a sin θB − h),
v5 = h/sin θB,   v6 = ((2a sin θB − h)/sin θB)·cos θB,
ΔXY′D = (1/2) tan θB·((2a sin θB − h)/sin θB)².

In Figure 14,

v8 = √((a − v5)² − (a sin θB − h)²),
v7 = √(a² − (a sin θB − h)²) − v8,
v7 + v8 = √(a² − (a sin θB − h)²).

In Figure 15,

θB − u3 = cos⁻¹(√(a² − (a sin θB − h)²)/a),   u3 = θB − cos⁻¹(√(a² − (a sin θB − h)²)/a),
sector BYO = (a²/2)(θB − cos⁻¹(√(a² − (a sin θB − h)²)/a)).

In Figure 16,

ΔY′YO = (1/2)(a sin θB − h)(√(a² − (a sin θB − h)²) − (a sin θB − h)/tan θB),

V = πa²b/2 + (b/2) tan θB·((2a sin θB − h)/sin θB)² − (a²b/2)(θB − cos⁻¹(√(a² − (a sin θB − h)²)/a)) + (b/2)(a sin θB − h)(√(a² − (a sin θB − h)²) − (a sin θB − h)/tan θB).
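Similarly, the case-2 and case-3 formulas should agree at their shared boundary h = a sin θB (that is, hl = 0); the check below confirms this for illustrative a, b, and θB, which also supports the plus sign on the ΔXY′D term.

```python
import math

def v_case2(a, b, hl, theta_b):
    # Case 2 (Section 3.5.2).
    semi = math.pi * a * a * b / 2
    sector = (a * a * b / 2) * (theta_b + math.pi / 2 - math.acos(hl / a))
    tri_yoc = (b * hl / 2) * (math.sqrt(a * a - hl * hl)
                              + hl * math.cos(theta_b) / math.sin(theta_b))
    tri_xdc = (b / 2) * math.tan(theta_b) * (
        (a * math.sin(theta_b) - hl) / math.sin(theta_b)) ** 2
    return semi - sector - tri_yoc + tri_xdc

def v_case3(a, b, h, theta_b):
    # Case 3 (Section 3.5.3): semicircle minus sector BYO, plus the
    # triangles Y'YO and XY'D.
    s = math.sin(theta_b)
    w = math.sqrt(a * a - (a * s - h) ** 2)
    semi = math.pi * a * a * b / 2
    tri_xyd = (b / 2) * math.tan(theta_b) * ((2 * a * s - h) / s) ** 2
    sector = (a * a * b / 2) * (theta_b - math.acos(w / a))
    tri_yyo = (b / 2) * (a * s - h) * (w - (a * s - h) / math.tan(theta_b))
    return semi + tri_xyd - sector + tri_yyo
```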
4. Experimental Results

Several experiments were performed to confirm the accuracy of the proposed algorithm. The volume expression contains the focal length as a parameter, which may or may not appear in the hardware specification of a commercial camera. Here, we assumed that the focal length is not given and estimated it experimentally using the pre-known lw. With the SPC-B900W camera, the estimated focal length was 19.0011 mm, as shown in Table 1. The size of the image sensor in this camera was 21.12 mm × 11.88 mm.

The focal length experiment was done by taking a picture of an object of pre-known size at a pre-known distance. The focal length is then found from the number of image pixels together with the pre-known size and distance of the object.
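This is the standard pinhole similar-triangles computation. The sketch assumes a 1920-pixel-wide sensor (21.12 mm / 1920 = 0.011 mm per pixel; the sensor width is given in the text, the resolution is an assumption), which reproduces the per-row focal lengths of Table 1.

```python
PIXEL_PITCH_MM = 21.12 / 1920  # assumed pixel size: 0.011 mm

def focal_length_mm(distance_mm, object_mm, image_px):
    """Pinhole model: f / image_size = distance / object_size."""
    image_mm = image_px * PIXEL_PITCH_MM
    return distance_mm * image_mm / object_mm
```

For the first row of Table 1, focal_length_mm(450, 113, 441) gives about 19.318 mm, matching the reported 19.3181 mm.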

The density experiment was done by measuring the volume and weight of a fixed number of 6 mm balls. The density is determined by dividing the weight by the measured volume.

The sand used in the experiment was modeled by balls of 6 mm diameter. The estimation procedure is as follows. The original weight is obtained with an electronic scale. The actual volume is measured by pouring water into a beaker and then pouring the balls into it. The estimated volume is obtained by the proposed algorithm. The estimated weight is obtained by multiplying the estimated volume by the average density of 1.6 g/mL obtained in Table 2.
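The bookkeeping from measurement to estimated weight is then a pair of one-liners (1.6 g/mL is the Table 2 average). Note that 22.0 mL × 1.6 g/mL = 35.2 g, while Table 3 reports 35.1 g, presumably due to intermediate rounding.

```python
def density_g_per_ml(weight_g, volume_ml):
    """Table 2 procedure: density of the balls = weight / measured volume."""
    return weight_g / volume_ml

def estimated_weight_g(estimated_volume_ml, density=1.6):
    """Estimated weight = estimated volume x average density (g/mL)."""
    return estimated_volume_ml * density
```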

The experimental results are shown in Table 3. The error rates were 10.6% and 6.1% for case 1, 9.8% and 0.5% for case 2, and 2.0% and 10.3% for case 3. Most of the errors likely come from the uneven surface of the sand region caused by the fixed ball size.

5. Conclusions

In this paper, we proposed a novel method to estimate the amount of sand in an excavator bucket from a single camera, using image processing and a mathematical model of the bucket. For each of the three under-struck cases, a closed-form mathematical solution for estimating the sand volume was derived and implemented. The experimental results show that the error rate is at most 10.6%, with a minimum of 0.5% in one under-struck-case 2 trial.


Acknowledgments

This research was partially supported by the MSIT (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW supervised by the IITP (Institute for Information & Communications Technology Promotion) (2016-0-00017), partially supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2015R1D1A1A09061368), and supported by the KIAT (Korea Institute for Advancement of Technology) grant funded by the Korea Government (MOTIE: Ministry of Trade, Industry and Energy) (No. N0001884, HRD Program for Embedded Software R&D).

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Fig. 1.

Three representative states of sands in the bucket. (a) under-struck state, (b) struck state, and (c) heaped state.

Fig. 2.

Bucket image.

Fig. 3.

Bucket modeling.

Fig. 4.

Three cases for under-struck state.

Fig. 5.

Default parameters for under-struck state.

Fig. 6.

Camera geometry for calculating θC.

Fig. 7.

Camera geometry for calculating h.

Fig. 8.


Fig. 9.


Fig. 10.


Fig. 11.


Fig. 12.


Fig. 13.


Fig. 14.

Sector BYO-1.

Fig. 15.

Sector BYO-2.

Fig. 16.


Fig. 17.

Under-struck-case1 image.

Fig. 18.

Under-struck-case2 image.

Fig. 19.

Under-struck-case3 image.


Algorithm 3.1


P1, …, P6 : Six trapezoidal points of the bucket
U : The uppermost edge point of the sand region in photographed image
V1 : Center point of P1P2¯
V2 : Center point of P3P6¯
V3 : Center point of P4P5¯
C : Intersection point of line segment V1V3¯ and the center line of the image
D1 : Intersection of P1P3¯ and P2P6¯
D2 : Intersection of P1P4¯ and P2P5¯

V : The volume of sand accumulated in the bucket

Volume_Estimator(P1, …, P6, U, V1, V2, V3, C, D1, D2)
{
 1  m ← V1C¯/V3C¯,  m′ ← V1C¯/UC¯,  lw ← P1P2¯,  lh ← V1V3¯,  lh′ ← V1U¯,  lw′ ← P7P5¯
 2  θC ← cos⁻¹( [f(m + 1)·√(4a²lw²(lh² + (m + 1)²f²) − b²lh²(m + 1)²f²) − lh²(m + 1)b·f] / (2a·lw(lh² + f²(m + 1)²)) )
 3  h ← lh′(m′ − 1)b·f / (lw(m′ − 1)f sin θy + lh′·lw·m′ cos θy)
 4  hl ← h − a sin θB
 5  IF (h ≥ h0 + a sin θB) THEN
 6   state ← Under-Struck-Case1
 7  ELSE IF (h0 + a sin θB > h ≥ h0) THEN
 8   state ← Under-Struck-Case2
 9  ELSE IF (h < h0) THEN // h0 = a sin θB
10   state ← Under-Struck-Case3
11  IF (state = Under-Struck-Case1) THEN
12   V ← a²b cos⁻¹(hl/a) − b·hl·√(a² − hl²)
13  ELSE IF (state = Under-Struck-Case2) THEN
14   V ← πa²b/2 − (a²b/2)(θB + π/2 − cos⁻¹(hl/a)) − (b·hl/2)(√(a² − hl²) + hl cos θB/sin θB) + (b/2) tan θB·((a sin θB − hl)/sin θB)²
15  ELSE IF (state = Under-Struck-Case3) THEN
16   V ← πa²b/2 + (b/2) tan θB·((2a sin θB − h)/sin θB)² − (a²b/2)(θB − cos⁻¹(√(a² − (a sin θB − h)²)/a)) + (b/2)(a sin θB − h)(√(a² − (a sin θB − h)²) − (a sin θB − h)/tan θB)
}

Table 1

Focal length experiment

Length between object and camera lens (mm) Object length (mm) Image pixel length (pixel) Focal length (mm)
450 113 441 19.3181
400 113 499 19.4301
350 113 559 19.0456
300 113 652 19.0407
250 113 783 19.0553
200 113 963 18.7487
150 113 1258 18.3690
Average focal length 19.0011

Table 2

Density experiment of object

Number of balls Volume (mL) Weight (g) Density (g/mL)
10 1.25 2.0 1.6
20 2.5 4.0 1.6
30 3.75 6.0 1.6
40 5 8.0 1.6
Average density 1.6

Table 3

Experimental information

state Grain size (mm) Original weight (g) Actual volume (mL) Estimated volume (mL) Estimated weight (g) Error ratio (%)
case1 6 39.3 24.5 22.0 35.1 10.6
6 39.3 24.5 23.1 36.9 6.1

case2 6 104.9 60 59.1 94.6 9.8
6 104.9 60 65.2 104.3 0.5

case3 6 153.6 95 94.1 150.5 2.0
6 153.6 95 105.9 169.4 10.3

  1. Ahn, SH, Kim, SK, and Lee, KH (2016). Development of a fleet management system for cooperation among construction equipment. Journal of the Korea Society of Civil Engineers. 36, 573-586.
  2. David, F, Petr, P, Miroslav, M, and Milan, M 2016. Scanning of trucks to produce 3D models for analysis of timber loads., Proceedings of 17th International Carpathian Control Conference (ICCC), Tatranska Lomnica, Slovakia, Array, pp.194-199.
  3. Vrublova, D, Kapica, R, and Jurman, J (2012). Methodology devising for bucket-wheel excavator surveying by laser scanning method to determine its main geometrical parameters. Geodesy and Cartography. 38, 157-164.
  4. Won, JU, Chung, YS, Kim, WS, You, KH, Lee, YJ, and Park, KH (2002). A single camera based method for cubing rectangular parallelepiped objects. Journal of KIISE: Computing Practices and Letters. 8, 562-573.
  5. Baek, YH, and Moon, UR (2006). Color edge detection using variable template operator. International Journal of Fuzzy Logic and Intelligent Systems. 6, 116-120.
  6. Xiong, X, and Choi, BJ (2013). Comparative analysis of detecting algorithms for corner and blob features in image processing. International Journal of Fuzzy Logic and Intelligent Systems. 13, 284-290.

In-Hwan Kim has been an M.S. candidate at Dongguk University, Korea, since 2016. His current research interests include robotics and intelligent human-robot interaction.

E-mail :

Dong-Woo Lim has been an M.S. candidate at Dongguk University, Korea, since 2018. His current research interests include intelligent human-robot interaction and image processing.

E-mail :

Jin-Woo Jung received the B.S. and M.S. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Korea, in 1997 and 1999, respectively, and received the Ph.D. degree in electrical engineering and computer science from KAIST, Korea, in 2004. Since 2006, he has been with the Department of Computer Science and Engineering at Dongguk University, Korea, where he is currently a Professor. During 2001–2002, he worked as a visiting researcher at the Department of Mechano-Informatics, University of Tokyo, Japan. During 2004–2006, he worked as a researcher in the Human-friendly Welfare Robot System Research Center at KAIST, Korea. During 2014, he worked as a visiting scholar at the Department of Computer and Information Technology, Purdue University, USA. His current research interests include human behavior recognition, multiple-robot cooperation, and intelligent human-robot interaction.

E-mail :