B. M. M. Buddhika

Department of Electrical Engineering, University of Moratuwa, Colombo, Sri Lanka

E-mail: [email protected]

Tel: 0717201764

Abstract—This paper presents an image-based visual servoing system for a 3 degrees-of-freedom (DOF) manipulator using 2D image information. Visual servoing and robot manipulator control are combined into a single algorithm with coupled operation. In this research, image information obtained from a camera is transformed into joint-angle information using forward and inverse kinematics, and the results are transferred to the control module in order to manipulate the robot arm. Experimental results demonstrate the success of the approach.

Keywords—visual servoing, degrees of freedom, human-robot interaction

                                                                                                                                                        
I.      Introduction

At present, a large number of older and disabled people with vision problems and with movement issues in their hands and legs are expecting modern technological solutions from robotics and image processing. Those solutions must provide advanced maneuverability that is safe, smooth, accurate, and comfortable. Robots should be developed to uplift their living standard, for example robot arms that work like a human hand and can coincide with an object in front of the eyes. Such robots should have various capabilities, including object manipulation and navigation. The ASIMO humanoid robot has many human-like behaviors for human-robot (H-R) interaction for the convenience of humans. Among these, the ability of object manipulation plays a major role in human-robot interaction. Object grasping by the Romeo robot [18] illustrates the basics of object manipulation combined with visual servoing in H-R interaction.

Therefore, at present, object manipulation with comfortable human-robot interaction is a popular research topic worldwide. However, current techniques have considerable drawbacks: the handover procedure is independent of the pose of the hand [7], the delivery process is not always comfortable, the location error increases when the object position is far from the calibration area [7], different types of task errors have different error magnitudes, which poses a great challenge [8], and fully specifying tasks requires many actions by the user. Moreover, choosing the exact geometric constraints [8] and the DOF of the robot is not always straightforward.

Visual servo control techniques [17] allow the guidance of a robotic system using visual information, which makes them well suited to object manipulation and handover applications. Visual servoing also allows a robot manipulator to track desired image trajectories [17] while explicitly taking the robot dynamics into account. Visual servoing has been a very active research subject for the past three decades [1]. The term "visual servoing" appears to have been introduced by Hill and Park in 1979 to distinguish their approach from earlier experiments in which the system alternated between taking pictures and moving. With the progress in electronic hardware, the requirements of machine vision systems have been realized [6].

The scope of visual servoing applications spreads from simple "pick and place" robots to advanced manufacturing robot teams [7], [8]. It is a fusion of many active research areas, including high-speed image processing, kinematics, dynamics, control theory, and real-time computation. There are many kinds of robotic systems, but the robot arm is the most widely used, for example in car assembly plants and humanoid robots. The robot arm is an important tool in the manufacturing process. Robot arms are controlled according to target positions and are designed for stability and precision [1]. As recognition technology has improved in a variety of ways, robots have become more human-like [2]. Robots now offer valuable assistance to humans in their everyday life.

Finally, the current state of visual servoing HRI is not perfectly feasible for every user; it is biased toward one or a few users [8]. The objective of this study is to develop an algorithm to detect an object, pick it up, and hand it over to a human according to the pose of the hand. During this process, image processing detects the objects and visual servoing achieves the task movement, i.e., the controller communicates with the arm and moves it to the desired position [17]. The solution will be cognitive and precise, addressing the existing trade-offs in visual servoing, and will create a broad interdisciplinary project with critical analysis based on user-oriented design and on the consequences of adopting advanced new technology in visual servoing.

 

                                                                                                                                           
II.    System Overview

The system contains three major modules: the Visual Information Extraction Module (VIEM), the manipulator (servo) controller, and the 3-DOF manipulator [16]. The output of a webcam is used to extract the position and orientation of the hand, which is performed by the VIEM.

 

A.    Visual Information Extraction Module (VIEM)

The VIEM consists of two major parts: software and hardware. The software part [16] consists of an OpenCV C++ program, and the hardware is a 5 MP webcam (manual focus).
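As a concrete illustration, the sketch below shows the kind of OpenCV C++ capture loop the VIEM software is built around; the device index and the pre-processing steps are assumptions for the sketch, not details taken from the paper.

```cpp
#include <opencv2/opencv.hpp>

int main() {
  cv::VideoCapture cap(0);            // USB webcam; device index 0 is an assumption
  if (!cap.isOpened()) return -1;

  cv::Mat frame, gray;
  while (cap.read(frame)) {
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);     // normalize lighting before detection
    cv::imshow("VIEM input", frame);
    if (cv::waitKey(30) == 27) break; // Esc quits
  }
  return 0;
}
```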

 

 

Fig. 1: Overview of the hardware and software modules of the system, including an OpenCV-installed computer connected to a USB webcam for image extraction. The Interaction Manager is included as a software platform written in OpenCV C++; it decides the behavior of the gripper from the position and posture of the human hand using fuzzy logic.

 

B.    Interaction Manager (IM)

The Interaction Manager includes the servo controller (an Arduino Mega board), the servo manipulator, and the power supply.
 

The IM manages the interaction between the human user and the robot [16]. The data set from the VIEM is fed to the IM, which uses these data to understand the information in the user's commands [16]. The Action Manager (AM) manages high-level control of the robot and guides the Manipulation Manager to handle the placement of the object on the table [16]. Low-level control of the manipulator is handled by the Robot Controller and the manipulator controller respectively.

                                                                                                                                                                             
 

                                                                                                                                             
III.   Control Algorithm

 

 

Fig. 2: System control algorithm

 

After the system powers on, the object (a small bottle) is first detected by a Haar classifier and picked up by the manipulator [9]. The manipulator then moves to the hold position of the task and waits until a hand appears in the relevant frame (11 cm x 11 cm) to deliver the object according to the pose of the palm. Palm detection can be carried out efficiently with the convex hull method [11], but for convenient communication with the robot controller, a classifier is used for palm detection [15].
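The control flow of Fig. 2 can be summarized as a simple state machine; the C++ sketch below mirrors the steps described above, with stub functions standing in for the actual OpenCV detection and manipulator commands.

```cpp
#include <iostream>

// States of the handover task in Fig. 2; the bool/void functions are stubs
// standing in for the OpenCV detection and serial manipulator commands.
enum class State { DetectObject, Pick, Hold, WaitForHand, Deliver, Done };

bool detectBottle() { return true; }  // Haar classifier on the camera frame
bool handInFrame()  { return true; }  // palm present in the 11 cm x 11 cm window
bool palmIsOpen()   { return true; }  // posture from the palm cascade
void moveTo(const char *pose) { std::cout << "move to " << pose << "\n"; }

int main() {
  State s = State::DetectObject;
  while (s != State::Done) {
    switch (s) {
      case State::DetectObject: if (detectBottle()) s = State::Pick;     break;
      case State::Pick:         moveTo("grasp"); s = State::Hold;        break;
      case State::Hold:         moveTo("hold");  s = State::WaitForHand; break;
      case State::WaitForHand:  if (handInFrame()) s = State::Deliver;   break;
      case State::Deliver:
        moveTo(palmIsOpen() ? "release-open-palm" : "release-closed-palm");
        s = State::Done;
        break;
      default: break;
    }
  }
  return 0;
}
```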

 

                                                                                                                      
IV.   Hand Posture and Position Detection

There are methods that can detect hands, track them in real time, and recognize gestures [10] using image processing on images obtained from a regular web camera. The coding is time-consuming, and the threshold values in the code, including those of the Canny filter, need to be fine-tuned [5]. Such methods do not perform well when the background changes in intensity and color.
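A common variant of this pipeline is contour-plus-convex-hull analysis, used for palm detection in Section III; a minimal OpenCV C++ sketch is shown below. The HSV skin-threshold ranges and the input file name are illustrative assumptions and would need the fine-tuning discussed above.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
  cv::Mat frame = cv::imread("frame.png");   // placeholder input image
  if (frame.empty()) return -1;

  cv::Mat hsv, mask;
  cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
  cv::inRange(hsv, cv::Scalar(0, 30, 60), cv::Scalar(20, 150, 255), mask); // skin-like range

  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  if (contours.empty()) return 0;

  // Take the largest contour as the hand and draw its convex hull.
  size_t best = 0;
  for (size_t i = 1; i < contours.size(); i++)
    if (cv::contourArea(contours[i]) > cv::contourArea(contours[best])) best = i;

  std::vector<std::vector<cv::Point>> hulls(1);
  cv::convexHull(contours[best], hulls[0]);
  cv::drawContours(frame, hulls, 0, cv::Scalar(0, 255, 0), 2);
  cv::imwrite("hull.png", frame);
  return 0;
}
```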

A.    Haar Cascade Classifier

 

Haar feature-based object detection is a fast and accurate machine learning method in which a cascade function is trained from many positive and negative images [15]; the trained cascade is then used to detect the same objects in other images. Here it is applied to hand and palm detection. Initially, the algorithm needs many positive images (images with a hand) and negative images (images without a hand) to train the classifier [15]. Features then need to be extracted; for this, Haar features are used. Every feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle.
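A minimal OpenCV C++ sketch of applying such a trained cascade is shown below; the file name palm_cascade.xml and the detection parameters are placeholders, not artifacts from this work.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
  cv::CascadeClassifier palm;
  if (!palm.load("palm_cascade.xml")) return -1; // placeholder cascade file

  cv::VideoCapture cap(0);
  cv::Mat frame, gray;
  while (cap.read(frame)) {
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> palms;
    palm.detectMultiScale(gray, palms, 1.1, 3, 0, cv::Size(40, 40)); // illustrative parameters
    for (const cv::Rect &r : palms)
      cv::rectangle(frame, r, cv::Scalar(255, 0, 0), 2);

    cv::imshow("palm detection", frame);
    if (cv::waitKey(30) == 27) break;
  }
  return 0;
}
```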

B.     Forward Kinematics of 3-DOF Manipulator

 

Fig. 3(a): Angles and lengths of the robot manipulator

Fig. 3(b): Mathematical view of the robot manipulator

 

Figures 3(a) and 3(b) illustrate the mathematical model parameters of the 3-DOF manipulator robot arm used to calculate the necessary joint angles and the end-effector position.

             
With link lengths l1, l2, l3 and joint angles q1, q2, q3:

r = sqrt(x^2 + y^2)                                  (1)

h = z - l1                                           (2)

x = (l2 cos q2 + l3 cos(q2 + q3)) cos q1             (3)

y = (l2 cos q2 + l3 cos(q2 + q3)) sin q1             (4)

z = l1 + l2 sin q2 + l3 sin(q2 + q3)                 (5)
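The forward kinematics translate directly into code; the C++ sketch below mirrors Equations (1) to (5), with example link lengths and joint angles chosen only for illustration.

```cpp
#include <cmath>
#include <cstdio>

struct Pos { double x, y, z; };

// Forward kinematics of Eqs. (3)-(5): angles in radians, lengths in metres.
Pos forwardKinematics(double q1, double q2, double q3,
                      double l1, double l2, double l3) {
  double r = l2 * std::cos(q2) + l3 * std::cos(q2 + q3); // horizontal reach
  return { r * std::cos(q1),
           r * std::sin(q1),
           l1 + l2 * std::sin(q2) + l3 * std::sin(q2 + q3) };
}

int main() {
  Pos p = forwardKinematics(0.3, 0.5, -0.4, 0.10, 0.12, 0.12); // example values
  std::printf("x=%.3f y=%.3f z=%.3f\n", p.x, p.y, p.z);
  return 0;
}
```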

 

C.    Inverse Kinematics of 3-DOF Manipulator

 

With r = sqrt(x^2 + y^2) and h = z - l1, the inverse kinematics are:

cos q3 = (r^2 + h^2 - l2^2 - l3^2) / (2 l2 l3)       (6)

q1 = atan2(y, x)                                     (7)

sin q3 = ±sqrt(1 - cos^2 q3)                         (8)

q3 = atan2(sin q3, cos q3)                           (9)

q2 = atan2(z - l1, r) - atan2(l3 sin q3, l2 + l3 cos q3)   (10)

 

Equations (7), (9), and (10) determine the joint angles required by the robot controller for the present position of the end-effector.
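A C++ sketch of Equations (6) to (10) follows; the elbow-down branch of Equation (8) and the numeric link lengths are assumptions for illustration.

```cpp
#include <cmath>
#include <cstdio>

// Inverse kinematics of Eqs. (6)-(10); returns false when out of reach.
bool inverseKinematics(double x, double y, double z,
                       double l1, double l2, double l3,
                       double &q1, double &q2, double &q3) {
  double r = std::sqrt(x * x + y * y);
  double h = z - l1;                                               // Eq. (2)
  double c3 = (r * r + h * h - l2 * l2 - l3 * l3) / (2 * l2 * l3); // Eq. (6)
  if (std::fabs(c3) > 1.0) return false;
  double s3 = -std::sqrt(1.0 - c3 * c3);      // Eq. (8), elbow-down branch
  q1 = std::atan2(y, x);                                           // Eq. (7)
  q3 = std::atan2(s3, c3);                                         // Eq. (9)
  q2 = std::atan2(h, r) - std::atan2(l3 * s3, l2 + l3 * c3);       // Eq. (10)
  return true;
}

int main() {
  double q1, q2, q3;
  if (inverseKinematics(0.15, 0.05, 0.20, 0.10, 0.12, 0.12, q1, q2, q3))
    std::printf("q1=%.3f q2=%.3f q3=%.3f rad\n", q1, q2, q3);
  return 0;
}
```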

 

D.    Theoretical Background on Image-Based Visual Servoing

The aim of image-based visual servoing is to minimize the feature error

e(t) = s(m(t), a) - s*                               (11)

where the vector s* contains the desired values and s contains the actual values of the features [3], [4].

E.    IBVS Control Law

vc = -λ Le+ e                                        (12)

vc = (vc, wc)                                        (13)

Le+ = (Le^T Le)^-1 Le^T                              (14)

de/dt = Le vc                                        (15)

where Le+ is the Moore-Penrose pseudo-inverse of Le, valid when Le is of full rank 6 [1]. For image-point features Le = Lx, where for a point (x, y) with depth Z:

Lx = [ -1/Z    0      x/Z    xy       -(1+x^2)   y
        0     -1/Z    y/Z    1+y^2    -xy       -x ]    (16)

In practice only an estimate of the interaction matrix is available, so an approximation of the pseudo-inverse is used:

L̂e+ ≈ Le+                                            (17)

vc = (vc, wc) = -λ L̂e+ e                             (18)

Equations (12) to (16) describe the IBVS control law. At least three points are needed for accurate IBVS; here one point is the center of the rectangle contour and the other two are its vertices.
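As an illustration of the law in Equations (12) to (18), the C++ sketch below stacks the interaction matrices of the three points and computes the camera velocity; it assumes the Eigen library, a constant depth estimate Z, and made-up feature coordinates.

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  const int N = 3;                   // center of the contour + two vertices
  const double Z = 0.5;              // assumed constant depth estimate (m)
  const double lambda = 0.5;         // assumed gain

  // Illustrative current and desired normalized image coordinates.
  double s[N][2]  = {{0.00, 0.00}, {0.10, 0.05}, {-0.10, 0.05}};
  double sd[N][2] = {{0.02, 0.01}, {0.12, 0.06}, {-0.08, 0.06}};

  Eigen::Matrix<double, 2 * N, 1> e; // feature error s - s*, Eq. (11)
  Eigen::Matrix<double, 2 * N, 6> L; // stacked interaction matrices, Eq. (16)
  for (int i = 0; i < N; i++) {
    double x = s[i][0], y = s[i][1];
    e(2 * i)     = x - sd[i][0];
    e(2 * i + 1) = y - sd[i][1];
    L.row(2 * i)     << -1 / Z, 0, x / Z, x * y, -(1 + x * x), y;
    L.row(2 * i + 1) << 0, -1 / Z, y / Z, 1 + y * y, -x * y, -x;
  }

  // vc = -lambda * Le+ * e, Eqs. (12) and (18).
  Eigen::Matrix<double, 6, 1> vc =
      -lambda * L.completeOrthogonalDecomposition().pseudoInverse() * e;
  std::cout << "camera velocity vc =\n" << vc << std::endl;
  return 0;
}
```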

 

                                                                                                                                         
V.    Experimental Results

If the Haar classifier-based detection becomes unstable, the detection jumps around the image; the detection level depends on the quality of the classifier. With enough positive and negative samples, say 5000 positive and 7000 negative, the results should already be quite robust [15]. In this work, 700 positive hand gesture samples and 1000 negative samples were used, and the results were sufficient to some extent. Each palm posture needs a different XML file (Haar cascade), so one-degree resolution of posture detection would require one cascade per degree, which is not practical. However, this method communicates with the manipulator controller easily and accurately [15], which is why it is applied here. The convex hull and feature detection scheme has advantages: the continuous rotation angle of the palm can be calculated, and palm posture can be identified by analyzing the contour area [12]. It also has defects: complex coding, sensitivity of detection to background conditions, and a higher error percentage. It outputs a large data stream, and the manipulator controller and servo motors do not have the capacity to respond to that amount of data within the time frame. Considering these results and conclusions, the Haar classifier method was chosen for this research.

Fig. 4(a): Open palm, robot-to-human object handover (still image)

Fig. 4(b): Closed palm, robot-to-human object handover (still image)

Figures 4(a) and 4(b) show object delivery with H-R interaction for two distinct palm postures, illustrating the hand and gripper postures and the behavior of the object between them.

           

 

Fig. 5(a): Open palm, robot-to-human object handover (video stream frames)

Fig. 5(b): Closed palm, robot-to-human object handover (video stream frames)

Figures 5(a) and 5(b) are frames of video streams that show object handover with H-R interaction for user comfort. They show object detection, object picking, hand detection, and delivery of the object to the human hand in a manner that depends on hand position and posture.

                                                                                                                                                   
VI.   Simulation

Fig. 6(a), Fig. 6(b): Image-based visual servoing between two points

Fig. 6(c): End-effector linear velocity and joint angle rates over task steps

Simulations were carried out in the ViSP environment. ViSP, standing for Visual Servoing Platform, is a modular cross-platform library that allows prototyping and developing applications using visual tracking and visual servoing techniques; it is developed by the Inria Lagadic team [1]. ViSP can compute control laws in robotic systems and has tracking abilities based on real-time image processing and computer vision algorithms [1]. The simulation shows that the joint angle rates and the velocity decrease as the error between the desired and actual features decreases.
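For reference, a minimal ViSP C++ sketch of such an IBVS simulation is shown below, following the pattern used in the ViSP tutorials; the feature coordinates and the gain are illustrative values, not those of the reported simulation.

```cpp
#include <iostream>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

int main() {
  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);        // eye-in-hand configuration
  task.setInteractionMatrixType(vpServo::CURRENT); // use the current Le
  task.setLambda(0.5);                             // illustrative gain

  // Three point features: current and desired (x, y, Z), values illustrative.
  vpFeaturePoint p[3], pd[3];
  double cur[3][3] = {{0.00, 0.00, 0.5}, {0.10, 0.05, 0.5}, {-0.10, 0.05, 0.5}};
  double des[3][3] = {{0.02, 0.01, 0.5}, {0.12, 0.06, 0.5}, {-0.08, 0.06, 0.5}};
  for (int i = 0; i < 3; i++) {
    p[i].buildFrom(cur[i][0], cur[i][1], cur[i][2]);
    pd[i].buildFrom(des[i][0], des[i][1], des[i][2]);
    task.addFeature(p[i], pd[i]);
  }

  vpColVector v = task.computeControlLaw(); // 6x1 camera velocity vc
  std::cout << "vc = " << v.t() << std::endl;
  return 0;
}
```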

                                                                                                                               
VII.  Conclusion and Future Work

We have developed a user interface for HRI that facilitates a semi-autonomous robot manipulator [8]. The user describes versatile high-level actions using visual task specification. We have conducted experiments illustrating actions performed with visual servoing; they show the system is capable of executing a range of tasks spanning both coarse and fine manipulation. The visual task specification system also has some drawbacks: choosing the correct geometric constraints and the DOF of the robot is not always straightforward.

Although the current state of the system is not perfectly feasible for every user, it should be developed a step further toward better human-robot interaction with visual servoing. This research focuses on analyzing human arm postures based on human hand characteristics (e.g., palm posture and position). More sensitive and accurate schemes are needed to detect the hand posture and position precisely. A Kinect sensor may overcome these drawbacks of the webcam. DC servo motors are not perfectly accurate position drivers; they produce many errors during operation. Stepper motors or DC gear motors with optical encoders would be better for the position drive problem.

REFERENCES

1.  E. Marchand, F. Spindler, F. Chaumette, "ViSP for visual servoing: a generic software platform with a wide class of robot control skills," IEEE Robotics and Automation Magazine, Special Issue on "Software Packages for Vision-Based Control of Motion," P. Oh, D. Burschka (Eds.), 12(4):40-52, December 2005.

2.  E. Marchand, "ViSP: a software environment for eye-in-hand visual servoing," in IEEE Int. Conf. on Robotics and Automation, ICRA'99, vol. 4, pages 3224-3229, Detroit, Michigan, May 1999.

3.  F. Chaumette, S. Hutchinson, "Visual servo control, Part I: basic approaches," IEEE Robotics and Automation Magazine, 13(4):82-90, December 2006.

4.  F. Chaumette, S. Hutchinson, "Visual servo control, Part II: advanced approaches," IEEE Robotics and Automation Magazine, 14(1):109-118, March 2007.

5.  D. Kuang, C. Yang Wang, G. Peng, "An improved approach for gesture recognition," Chinese Automation Congress (CAC), pages 4856-4861, October 2017.

6.  B. Espiau, F. Chaumette, P. Rives, "A new approach to visual servoing in robotics," IEEE Trans. on Robotics and Automation, 8(3):313-326, June 1992.

7.  A. J. Sanchez and J. M. Martinez, "Robot-arm pick and place behavior programming system using visual perception," Proceedings of the 15th International Conference on Pattern Recognition, pages 507-510, September 2000.

8.  M. Gridseth, O. Ramirez, C. P. Quintero and M. Jagersand, "ViTa: visual task specification interface for manipulation with uncalibrated visual servoing," 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 3434-3440, May 2016.

9.  E. Marchand, F. Chaumette, "Feature tracking for visual servoing purposes," Robotics and Autonomous Systems, Special Issue on Advances in Robot Vision, D. Kragic, H. Christensen (Eds.), 52(1):53-70, July 2005.

10. A. Dame, E. Marchand, "Video mosaicing using a mutual information-based motion estimation process," in IEEE Int. Conf. on Image Processing, ICIP'11, pages 1525-1528, Brussels, Belgium, September 2011.

11. A. Dame, E. Marchand, "Accurate real-time tracking using mutual information," in IEEE Int. Symp. on Mixed and Augmented Reality, ISMAR'10, pages 47-56, Seoul, Korea, October 2010.

12. R. M. Gurav, P. K. Kadbe, "Real time finger tracking and contour detection for gesture recognition using OpenCV," International Conference on Industrial Instrumentation and Control (ICIC), pages 974-977, May 2015.

13. M. F. Zaman, S. T. Monserrat, F. I. and D. Karmaker, "Real-time hand detection and tracking with depth values," Proceedings of the 3rd International Conference on Advances in Electrical Engineering, pages 129-132, Dhaka, Bangladesh, December 2015.

14. I. Hussain, A. K. Talukdar, K. K. Sarma, "Hand gesture recognition system with real-time palm tracking," Annual IEEE India Conference (INDICON), India, pages 1-6, December 2014.

15. G. Mao, Y. W. M. Hor, C. Y. Tang, "Real-time hand detection and tracking against complex background," Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pages 906-908, Kyoto, Japan, November 2009.

16. P. H. D. Arjuna, S. Srimal and A. G. Buddhika P. Jayasekara, "A multi-modal approach for enhancing object placement," 6th National Conference on Technology and Management (NCTM), pages 17-22, Malabe, Sri Lanka, January 2017.

17. H. Wu, T. T. Andersen, N. A. Andersen, O. Ravn, "Visual servoing for object manipulation: a case study in slaughterhouse," 14th International Conference on Control, Automation, Robotics & Vision, Phuket, Thailand, November 2016.

18. https://www.youtube.com/watch?v=6yB5pQm4s_c

 

 
