
Darshan Upadhyay et al., Int. J. Computer Technology & Applications, Vol 3 (3), 1093-1098, ISSN: 2229-6093

Scheduler in Cloud Computing using Open Source Technologies

Darshan Upadhyay
Student of M.E. I.T.
S. S. Engineering College, Bhavnagar
Gujarat Technological University
darshanit7@gmail.com

Prof. Chirag Patel
Asst. Prof., Computer Department
L. D. College of Engineering, Ahmedabad
Gujarat Technological University
chirag.email@yahoo.com

Abstract

Cloud computing utilities are becoming omnipresent, and are beginning to serve as the essential source of computing capacity for both enterprises and private computing needs. Any request that comes to the cloud is served in the form of a Virtual Machine, and the Scheduler decides the host to which each Virtual Machine will be allocated. We set up a private cloud using OpenNebula, an open-source technology for building clouds, and carried out tests on how the scheduler behaves with different requests.

1. Introduction

The flexibility associated with cloud computing has its origin in the combination of virtualization technologies and web services. A definition is given in [1]: "Building on compute and storage virtualization, and leveraging the modern Web, Cloud Computing provides scalable, network-centric, abstracted IT infrastructure, platforms, and applications as on-demand services that are billed by consumption." Cloud computing is thus defined as a pool of virtualized computer resources. Based on this virtualization, the cloud computing paradigm allows workloads to be deployed and scaled out quickly through the rapid provisioning of virtual machines or physical machines. Any request for resources is delivered by the cloud in the form of a Virtual Machine, so the placement of Virtual Machines is one of the most important tasks in cloud computing. Resource management is a necessity in cloud computing: multinational companies today hold large numbers of resources, and through resource management in the cloud we can manage those resources efficiently, assuring their effective use while providing scalability and elasticity. The large scalability possibilities offered by cloud platforms can be harnessed not only for services and application hosting but also as a raw on-demand computing resource [2]. Ultimately, service providers are under pressure to architect their infrastructure to enable real-time end-to-end visibility and dynamic resource management with fine-grained control, reducing total cost of ownership while also improving agility. Again, since cloud computing is defined as a pool of virtualized computer resources, defining an effective VM placement policy [3] is necessary for dynamic resource management.

2. OpenNebula – Open Source technology to build Cloud

OpenNebula was first established as a research project back in 2005 by Ignacio M. Llorente and Ruben S. Montero, releasing the first version of the toolkit and continuing as an open-source project in March 2008 [4]. OpenNebula is one of the key technologies of the RESERVOIR project, the European Union's flagship research project in virtualized infrastructure and cloud computing. Like Nimbus, OpenNebula is an open-source cloud service framework [4]. It allows users to deploy and manage virtual machines on physical resources, and it can turn a user's data centers or clusters into a flexible virtual infrastructure that automatically adapts to changes in the service load. The main difference between OpenNebula and Nimbus is that Nimbus implements a remote interface based on EC2 or WSRF, through which the user can handle all security-related issues, while OpenNebula does not. Using OpenNebula we can establish a public cloud, a private cloud or a hybrid cloud. OpenNebula also allows working with existing systems or external modules, and it can work with Haizea, an open-source resource scheduling tool.


Fig. 1 OpenNebula Architecture

By default, OpenNebula comes with the match-making scheduler; you can also use an external scheduler such as Haizea with OpenNebula. The toolkit includes features for integration, management, scalability, security and accounting. It also emphasizes standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (EC2 Query, OGF OCCI and vCloud) and hypervisors (Xen, KVM and VMware), and a flexible architecture that can accommodate multiple hardware and software combinations in a data center [3].

As shown in fig. 1, the OpenNebula architecture can be divided into three layers: 1. tools, developed using the interfaces provided by the OpenNebula core; 2. the core, the main part of the OpenNebula architecture, consisting of virtual machine (VM), virtual network (VN) and host management components; and 3. drivers, which provide support for the different virtualization technologies and for tasks such as monitoring.

3. Scheduler in Cloud Computing

The Scheduler decides, on the basis of various policies, the host to which a particular Virtual Machine will be allocated. By default, OpenNebula comes with the match-making Scheduler [4][6]. OpenNebula uses only immediate lease provisioning to schedule IaaS cloud resources, using the match-making algorithm.

The match-making algorithm, as described in [7], allocates VMs to the resources with the higher RANK expression first. This RANK expression is the key to implementing placement policies such as Packing, Striping and Load-aware. The Packing policy minimizes the number of cluster nodes in use by choosing the nodes with more VMs running first. The Striping policy maximizes the resources available to VMs in a node by choosing the nodes with fewer VMs running first, while the Load-aware policy does the same job by choosing the nodes with more free CPU first. Fig. 2 shows a comparison of various toolkits on the basis of Virtual Machine placement policy.

Cloud toolkits        | VM placement policies                                    | Support for hybrid cloud
Amazon EC2            | Proprietary                                              | No
Nimbus                | Round robin and static greedy                            | Yes
Eucalyptus            | Static greedy and round robin                            | No
OpenNebula            | Match making – initial placement based on rank policy    | Yes
OpenNebula and Haizea | Dynamic placement to support advance reservation leases  | Yes

Fig. 2 Comparison of various cloud toolkits on the basis of VM placement policy
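For reference, these three policies are expressed in OpenNebula as RANK expressions placed in the VM template. The following is a minimal sketch based on the scheduler guide [7]; host attribute names such as FREECPU may vary slightly between OpenNebula releases:

    # Packing policy: prefer hosts already running more VMs (minimize nodes in use)
    RANK = RUNNING_VMS

    # Striping policy: prefer hosts running fewer VMs (spread VMs across nodes)
    RANK = "- RUNNING_VMS"

    # Load-aware policy: prefer hosts with more free CPU
    RANK = FREECPU

In each case the scheduler first filters out hosts that cannot satisfy the VM's capacity requirements, then sorts the remaining hosts by the RANK expression and dispatches the VM to the best-ranked one.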

4. Experiments

To build a private cloud, we used OpenNebula 3.0, an open-source toolkit, with CentOS 5.5 as the operating system.

Experiment – 1

A. Experiment goal

The goal of this experiment is to visualize the different states of a VM and to analyze the behavior of the scheduler.

B. Experiment setup

For this experiment, it is necessary to have one host connected with the Cloud Front-end. You also need an image of the operating system you want to run in the Virtual Machine. Secondly, you have to prepare the Virtual Machine's template file: a template file consists of a set of attributes that define a Virtual Machine. There are two ways to define the Virtual Machine's operating system: 1) you can use an image template file (analogous to the Virtual Machine's template), which consists of a set of attributes that define an image, register the image in OpenNebula using the oneimage command, and from then on refer to the image by its id or name; or 2) you can use the image directly in the Virtual Machine's template file and set the necessary attributes.
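As an illustration, the two files involved might look as follows. This is a minimal sketch only: the file names, image path and resource sizes are assumptions, and the exact oneimage/onevm syntax depends on the OpenNebula release.

    # image.one – image template (hypothetical name and path)
    NAME        = "CentOS-5.5"
    PATH        = "/srv/images/centos55.img"
    DESCRIPTION = "CentOS 5.5 guest image"

    # register the image, then refer to it by id or name
    $ oneimage create image.one

    # vm.one – VM template that defines the Virtual Machine
    NAME   = centos-vm
    CPU    = 0.5
    MEMORY = 256
    DISK   = [ IMAGE = "CentOS-5.5" ]   # way 1: reference the registered image
    NIC    = [ NETWORK = "public" ]

    # submit the VM; the scheduler then selects a host for it
    $ onevm create vm.one

For way 2, the DISK section would instead point directly at the image file, e.g. DISK = [ SOURCE = "/srv/images/centos55.img", TARGET = hda ].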

In this experiment two machines were used; their hardware and software details are given in fig. 3. The Host and the Cloud Front-end are connected.

Cloud Front-end: Lenovo Core 2 Duo, 1 GB RAM, CentOS 5.5, IP 192.169.1.11
Host1: Lenovo Core 2 Duo, 1 GB RAM, CentOS 5.5, IP 192.169.1.10
Virtual Machine 1: CentOS 5.5, IP 192.169.1.14
Virtual Machine 2: CentOS 5.5, IP 192.169.1.15
Virtual Machine 3: CentOS 5.5, IP 192.169.1.16

Fig. 3 Hardware/Software setup

C. Experiment method & Result

In this experiment, we observe the different states of the Virtual Machines with respect to time. The creation procedure was started for all VMs at the same time, but the graph in fig. 5 shows that the time at which each Virtual Machine enters the "Active" state differs, while the time at which each reaches the "Running" state is almost the same. The scheduler tries to discover all possible hosts, but since there is only one host, it allocates all VMs to host1. When we create a VM, it follows the VM life-cycle.

States/VM | VM-1 | VM-2 | VM-3
ACTIVE    | 0.00 | 0.30 | 1.00
PROLOG    | 0.00 | 0.30 | 1.00
BOOT      | 2.17 | 2.18 | 3.31
RUNNING   | 3.30 | 3.33 | 3.37

Fig. 4 Table shows time with respect to different states

Fig. 5 Graph shows all VMs with respect to different states & time

So, we can conclude from this experiment that, by default, OpenNebula's scheduler works in a First Come First Served manner.

Experiment – 2

A. Experiment goal

The goal of this experiment is to find out how many VMs can be placed on a single host, and what the scheduler does once the host no longer has enough resources to run a VM.

B. Experiment setup

For this experiment, we took one host connected with the Cloud Front-end and created fifteen Virtual Machines through the Cloud Front-end. The procedure for creating a Virtual Machine remains the same as in the previous experiment. The hardware and software details are shown in fig. 6.

Fig. 6 Hardware/Software setup


C. Experiment Method & Result

In this experiment we first created two Virtual Machines, which the scheduler allocated to host1. Each time, the scheduler checks whether a RANK has been defined for the particular VM; in this experiment we did not define a RANK for any VM. We then created thirteen more Virtual Machines, of which, again, two were allocated to host1. The remaining eleven Virtual Machines stay in the pending state due to lack of resources (insufficient memory), as shown in fig. 7.

Stage                                             | Pending VM queue | Status                                                                                    | Total VM
Initial                                           | 0                | -                                                                                         | 0
After executing command to create VM twice        | 0                | VM allocated to host                                                                      | 2
After executing command to create VM twice        | 0                | VM allocated to host                                                                      | 4
After executing command to create VM eleven times | 11               | Host filtered out due to insufficient resources to run VM; all eleven VMs remain in the pending VM queue | 15
After deletion of first two running VMs           | 9                | Due to deletion of two running VMs, host now has capacity to run two new VMs; the first 2 VMs from the pending queue are allocated to the host | 13

Fig. 7 Scheduler log (auto generated)

Next, we deleted the first 2 running Virtual Machines, so host1 again had capacity to run two more Virtual Machines, and the first 2 VMs from the pending Virtual Machine queue were allocated to host1. The remaining VMs stay in the pending queue. Fig. 8 shows the graph of host1's CPU, host1's memory and the total VMs. This graph is generated by Sunstone, which provides the GUI for OpenNebula's cloud.


Fig. 8 Sunstone's graph for Host1's CPU, Memory and Total VM

So, from this experiment we can conclude that OpenNebula's scheduler filters out a host when it does not have enough capacity to run more VMs, and a Virtual Machine remains in the pending Virtual Machine queue as long as no host is available.

Experiment – 3

A. Experiment goal

The goal of this experiment is to analyze how the scheduler works when there is more than one host.

B. Experiment setup<br />

For this experiment, we have connected three hosts<br />

with <strong>Cloud</strong> Front end. Procedure to create VM is<br />

same as mentioned <strong>in</strong> Experiment – 1. The<br />

hardware/software setup is shown <strong>in</strong> fig. 9.<br />

Fig. 9 Hardware/Software setup<br />


C. Experiment Method & Result

In this experiment we created three VMs, with three hosts connected to the Cloud Front-end, to see how the scheduler allocates hosts to the VMs. Will the scheduler allocate all VMs to one host only? Will it allocate 2 VMs to one host and 1 VM to another, leaving one host empty? Or will it allocate 1 VM to each host?

Thu Mar 15 13:37:41 2012 [HOST][D]: Discovered Hosts (enabled): 13 15 17
Thu Mar 15 13:37:41 2012 [VM][D]: Pending virtual machines : 104 105 106
Thu Mar 15 13:37:41 2012 [RANK][W]: No rank defined for VM
Thu Mar 15 13:37:41 2012 [RANK][W]: No rank defined for VM
Thu Mar 15 13:37:41 2012 [RANK][W]: No rank defined for VM
Thu Mar 15 13:37:41 2012 [SCHED][I]: Select hosts
    PRI  HID
    -------------------
    Virtual Machine: 104
    0    17
    0    15
    0    13
    Virtual Machine: 105
    0    17
    0    15
    0    13
    Virtual Machine: 106
    0    17
    0    15
    0    13
Thu Mar 15 13:37:41 2012 [VM][I]: Dispatching virtual machine 104 to HID: 17
Thu Mar 15 13:37:41 2012 [VM][I]: Dispatching virtual machine 105 to HID: 15
Thu Mar 15 13:37:41 2012 [VM][I]: Dispatching virtual machine 106 to HID: 13

Fig. 10 Scheduler log

Here, as shown in fig. 10, the scheduler allocated one VM to each host. So we can say that the scheduler distributes VMs equally among the hosts when multiple hosts are available.
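The behavior observed in Experiments 2 and 3 (hosts filtered out when capacity is exhausted, equal priorities when no RANK is defined, one VM dispatched per host in a cycle) can be summarized in the following simplified Python sketch of a match-making scheduling cycle. This is our own illustrative reconstruction, not OpenNebula code; in particular, the per-cycle per-host dispatch limit is an assumption consistent with the even spread seen in fig. 10.

    # Illustrative sketch of one match-making scheduling cycle.
    # Not OpenNebula source code; names and the per-host dispatch
    # limit (max_per_host) are assumptions made for illustration.
    def schedule_cycle(pending_vms, hosts, rank=None, max_per_host=1):
        dispatched = {}                       # vm id -> host id
        given = {h["id"]: 0 for h in hosts}   # VMs dispatched per host this cycle
        for vm in pending_vms:
            # 1. Filter: drop hosts without enough free capacity.
            ok = [h for h in hosts
                  if h["free_mem"] >= vm["mem"]
                  and h["free_cpu"] >= vm["cpu"]
                  and given[h["id"]] < max_per_host]
            if not ok:
                continue                      # VM stays in the pending queue
            # 2. Rank: sort hosts, higher RANK first (priority 0 if no RANK).
            ok.sort(key=rank or (lambda h: 0), reverse=True)
            # 3. Dispatch to the best host and update its capacity.
            best = ok[0]
            best["free_mem"] -= vm["mem"]
            best["free_cpu"] -= vm["cpu"]
            given[best["id"]] += 1
            dispatched[vm["id"]] = best["id"]
        return dispatched

    # Three equal hosts and three pending VMs, as in fig. 10:
    hosts = [{"id": i, "free_mem": 512, "free_cpu": 1.0} for i in (17, 15, 13)]
    vms = [{"id": i, "mem": 256, "cpu": 0.5} for i in (104, 105, 106)]
    print(schedule_cycle(vms, hosts))   # {104: 17, 105: 15, 106: 13}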

Experiment – 4

A. Experiment goal

The goal of this experiment is to implement the rank policy and find out how the scheduler allocates hosts to VMs on the basis of rank.

B. Experiment setup

For this experiment, we connected three hosts to the Cloud Front-end and created six VMs, one after another rather than all at once. The procedure to create a VM is the same as in Experiment – 1. The hardware/software setup is shown in fig. 11.

C. Experiment method

In this experiment, we created the VMs one by one, and in every VM template we specified "- FREEMEMORY" as the RANK. The scheduler sorts all hosts according to rank and sets the priority of each host for the particular VM, so it allocates a VM first to the host with the least free memory. After allocating one VM to a host, the scheduler updates its information about all hosts before allocating the next VM.
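Concretely, the only change relative to the templates of Experiment – 1 is one extra attribute in each VM template (a sketch; all other attributes stay as before):

    # rank hosts by negated free memory: the host with the LEAST
    # free memory gets the highest priority
    RANK = "- FREEMEMORY"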

Fig. 11 Hardware/Software setup

Fig. 12 shows the initial free memory of each of the three hosts.

Fig. 12 Three hosts and their free memory

From the above figure we can say that the new VM will be allocated to darshh2. We then created one VM, and the scheduler allocated it to darshh2; the scheduler then again pools all host information, finds the host with the least free memory, and assigns the next VM to that particular host. Fig. 13 shows the states of all VMs with respect to time. Fig. 14 shows the free memory of the hosts with respect to time. Figs. 15 & 16 show graphs indicating that when a VM is allocated to a particular host, the free memory of that host decreases.


States/VM | 108  | 110  | 111   | 112   | 114   | 115
Initial   | 0.00 | 6.30 | 12.30 | 15.30 | 18.30 | 21.30
ACTIVE    | 0.00 | 6.30 | 12.30 | 15.30 | 18.30 | 21.30
PROLOG    | 0.00 | 6.30 | 12.30 | 15.30 | 18.30 | 21.30
BOOT      | 1.20 | 7.46 | 12.57 | 15.59 | 19.02 | 22.20
RUNNING   | 1.26 | 7.56 | 13.45 | 16.38 | 19.42 | 22.40

Fig. 13 VM states with respect to time

Minute/Host | darshhost | darshh1 | darshh2
0           | 296       | 149     | 113
5           | 296       | 149     | 54
10          | 296       | 149     | 1
15          | 296       | 89      | 1
20          | 236       | 30      | 1
25          | 177       | 30      | 1

Fig. 14 Host free memory (in MB) with respect to time

Fig. 15 States Vs Time

Fig. 16 Free Memory Vs Time

So, from this experiment we can conclude that the scheduler works correctly with the rank policy: according to the rank policy, the scheduler sorts all hosts and sets the host priorities for each particular VM.
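In terms of the Python sketch given after Experiment – 3, this experiment's policy corresponds to a rank function such as the following (illustrative only):

    # "- FREEMEMORY": hosts with less free memory rank higher
    rank_least_free_mem = lambda h: -h["free_mem"]

    # with the fig. 12 values (darshhost 296, darshh1 149, darshh2 113 MB free),
    # darshh2 ranks highest, matching the allocation observed above
    chosen = schedule_cycle(vms, hosts, rank=rank_least_free_mem)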

5. Conclusion & Future work

From the above experiments we can conclude that the scheduler is the most important component in the cloud: it works on various policies, and on that basis allocates each VM to a particular host.

In the match-making scheduler there is no rank that gives priority to a particular VM, and hence no way to prioritize one VM over another. The future direction is therefore to improve the match-making scheduler and define a new rank through which priority can be given to VMs.

6. References

[1] Patrícia Takako Endo, Glauco Estácio Gonçalves, Judith Kelner, Djamel Sadok, "A Survey on Open-source Cloud Computing Solutions", VIII Workshop on Cloud Computing.

[2] DSA-Research, "The Open-Source Toolkit for building cloud infrastructure NEBULA", July 2009.

[3] B. Sotomayor, R. S. Montero, I. M. Llorente, I. Foster, "Virtual Infrastructure Management in Private and Hybrid Clouds", IEEE Internet Computing, vol. 13, no. 5, pp. 14-22, Sep./Oct. 2009.

[4] Vivek Shrivastava, D. S. Bhilare, "Algorithms to Improve Resource Utilization and Request Acceptance Rate in IaaS Cloud Scheduling", International Journal of Advanced Networking and Applications, vol. 3, issue 5, pp. 1367-1374, 2012.

[5] OpenNebula Pro, OpenNebulaPro White Paper, Rev20110126, https://support.opennebula.pro/attachments/token/coiuzlpxct7oyvq/?name=OpenNebulaPro_White_Paper_Rev20110126.pdf

[6] OpenNebula Virtual Machine, http://opennebula.org/documentation:rel2.2:vm_guide

[7] OpenNebula Scheduler, http://opennebula.org/documentation:archives:rel2.0:schg


