

International Journal of Computer Science and Technology (IJCST), Vol. 3, Issue 3, Ver. 4, July to September 2012
S.No. Research Topic Paper ID
   137 Maximality Semantics and Petri Nets

Nabil Belala, Djamel Eddine Saïdouni, Messaouda Bouneb, Jean-Michel Ilié

Abstract

The maximality-based labeled transition system model expresses, in a natural way, the maximality semantics of concurrent systems. In this paper, we present intuitively and motivate the maximality semantics. Moreover, we present some results on expressing the true-concurrency semantics of P/T Petri nets in terms of maximality-based labeled transition systems. First, an operational semantics for P/T Petri nets is proposed that allows interpreting a Petri net as a maximality-based labeled transition system. As a consequence, maximality bisimulation relations, defined on maximality-based labeled transition systems, are extended to Petri nets. Then, we explore the results of two interesting reduction techniques, namely the aggregation of transitions and reduction based on the α-equivalence relation.
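For readers unfamiliar with the baseline that the operational semantics refines, the sketch below implements the standard interleaving firing rule of a P/T Petri net (a marking is a multiset of tokens; a transition is enabled when the marking covers its preset). It is an illustrative baseline only, not the paper's maximality-based construction, and the net and names are ours:

```python
from collections import Counter

# A P/T net: each transition has a preset and a postset of places (with weights).
transitions = {
    "t1": {"pre": Counter({"p1": 1}), "post": Counter({"p2": 1, "p3": 1})},
    "t2": {"pre": Counter({"p2": 1, "p3": 1}), "post": Counter({"p4": 1})},
}

def enabled(marking, t):
    """A transition is enabled iff the marking covers its preset."""
    return all(marking[p] >= n for p, n in transitions[t]["pre"].items())

def fire(marking, t):
    """Fire t: consume the preset tokens, produce the postset tokens."""
    assert enabled(marking, t), f"{t} is not enabled"
    m = marking.copy()
    m.subtract(transitions[t]["pre"])
    m.update(transitions[t]["post"])
    return m

m0 = Counter({"p1": 1})
m1 = fire(m0, "t1")           # p1 -> p2 + p3
print(sorted(m1.elements()))  # ['p2', 'p3']
print(enabled(m1, "t2"))      # True
```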
Full Paper

IJCST/33/4/A-989
   138 Fuzzy Keyword Search Over Encrypted Data in Cloud Computing

D. Prasad, C. Santhosh, S. V. Hemanth, N. Thirupathi

Abstract

As Cloud Computing becomes prevalent, more and more sensitive information is being centralized into the cloud. For the protection of data privacy, sensitive data usually have to be encrypted before outsourcing, which makes effective data utilization a very challenging task. Although traditional searchable encryption schemes allow a user to securely search over encrypted data through keywords and selectively retrieve files of interest, these techniques support only exact keyword search. That is, there is no tolerance of minor typos and format inconsistencies which, on the other hand, are typical user searching behavior and happen very frequently. This significant drawback makes existing techniques unsuitable in Cloud Computing, as it greatly affects system usability, rendering user searching experiences very frustrating and system efficacy very low. In this paper, for the first time we formalize and solve the problem of effective fuzzy keyword search over encrypted cloud data while maintaining keyword privacy. Fuzzy keyword search greatly enhances system usability by returning the matching files when users' searching inputs exactly match the predefined keywords or, when exact match fails, the closest possible matching files based on keyword similarity semantics. In our solution, we exploit edit distance to quantify keyword similarity and develop an advanced technique for constructing fuzzy keyword sets, which greatly reduces the storage and representation overheads. Through rigorous security analysis, we show that our proposed solution is secure and privacy-preserving, while correctly realizing the goal of fuzzy keyword search.
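As an illustration of the edit-distance idea (ours, not the authors' fuzzy-keyword-set construction, which precomputes the sets precisely to avoid scanning every keyword at query time), a brute-force sketch of fuzzy matching:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_search(query, index_keywords, d=1):
    """Return the exact match if present, else keywords within edit distance d."""
    if query in index_keywords:
        return [query]
    return sorted(w for w in index_keywords if edit_distance(query, w) <= d)

print(fuzzy_search("clowd", {"cloud", "crowd", "computing"}))  # ['cloud', 'crowd']
```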
Full Paper

IJCST/33/4/A-990
   139 A Perfect Evaluation and Measurement of Software Process Development: A Systematic Literature Review

D. Prasad, Ahmedi Nasreen, S. V. Hemanth

Abstract

BACKGROUND—Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE—This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD—The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS—Seven distinct evaluation strategies were identified; the most common one, “Pre-Post Comparison”, was applied in 49% of the inspected papers. Quality was the most measured attribute (62%), followed by Cost (41%) and Schedule (18%). Looking at measurement perspectives, “Project” represents the majority with 66%. CONCLUSION—The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors, particularly given that “Pre-Post Comparison” was identified as the most common evaluation strategy, and by inaccurate descriptions of the evaluation context. Measurements to assess the short- and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.
Full Paper

IJCST/33/4/A-991
   140 A New Data Storage Security Paradigm for Cloud Computing Using TPA

T. Nirupama, Y. Chittibabu, Dr. P. Harini

Abstract

Cloud computing refers to the delivery of computing and storage capacity as a service to a heterogeneous community of end recipients. Cloud computing is the long-held vision of computing as a utility, in which users can remotely store their data in the cloud so as to acquire high-quality services and applications from a shared pool of configurable computing resources. Data in the cloud should remain secure and unmodifiable without proper authorization, and cloud users need not maintain a separate database. However, the fact that users no longer have physical possession of the possibly large amount of outsourced data makes data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Here we secure the data by means of a third-party auditor (whose VerifyProof procedure audits the proof from the cloud server). In introducing an effective third-party auditor (TPA), two basic requirements have to be met: 1) the third-party auditing process should bring in no new vulnerabilities toward user data privacy; 2) the TPA should be able to efficiently audit the cloud data storage without demanding a local copy of the data, and should introduce no additional online burden to the cloud user. By utilizing public-key-based linear verification with random masking, privacy-preserving public auditing can be achieved. The technique of bilinear aggregate signatures is used to achieve batch auditing, which reduces the computation overhead. As the data in the cloud is used by many industries, modification of data cannot be avoided. Here we use public-key-based linear verification to achieve a privacy-preserving public cloud data auditing system that satisfies the above requirements.
Full Paper

IJCST/33/4/A-992
   141 Improvising Cloud Performance using I/O Optimization and Mashup’s

Balaji Bhanu Battu

Abstract

In recent years, technologies have been introduced that offer a large amount of computing and networking resources. Cloud computing is an emerging technology that accesses remote servers through the Internet to maintain data and applications, incorporating the advantages of grid and utility computing. Cloud computing is a new infrastructure deployment environment that delivers on the promise of supporting on-demand services such as computation, software, data access and storage. Clouds provide inexpensive access to remote resources. Academia has not remained unaware of this trend, and several educational solutions (LMS) based on cloud technologies are already in place, especially for Software-as-a-Service (SaaS) clouds. Extending the functionality to infrastructure and platform clouds (IaaS, PaaS) has not been explored yet. The learning process in academia has different stages, and there is no design to classify clouds for each of them, especially for computer science students. For implementing IaaS and PaaS clouds, the Virtual Machines (VMs) in Amazon clouds are used. But recent events suggest that even the robust and efficient Amazon architecture suffers from downtimes and complications. In this paper, we use the architecture and organization of a Mashup Container that supports the deployment and execution of event-driven mashups, together with an I/O optimization technique, to simplify management, lower costs and improve the performance of cloud servers. In collaboration with PaaS, virtualization provides an opportunity for extending independent virtual resources based on the available physical systems. Finally, we present the results of virtualizing the mashup container, supporting scalability and fault tolerance in a cloud computing environment.
Full Paper

IJCST/33/4/A-993
   142 Resolution Trees for Tentative Information

V. Redya Jadav, M. Nageswara Rao, Ella Sarada

Abstract

Traditional decision tree classifiers work with data whose values are known and precise. We extend classical decision tree building algorithms to handle data tuples with uncertain values. Value uncertainty arises in many applications during the data collection process; example sources of uncertainty include measurement/quantization errors, data staleness, and multiple repeated measurements. With uncertainty, the value of a data item is often represented not by a single value but by multiple values forming a probability distribution. Rather than abstracting uncertain data by statistical derivatives (such as mean and median), we discover that the accuracy of a decision tree classifier can be much improved if the “complete information” of a data item (taking into account its probability density function (pdf)) is utilized. Extensive experiments have been conducted which show that the resulting classifiers are more accurate than those using value averages. Since processing pdfs is computationally more costly than processing single values (e.g., averages), decision tree construction on uncertain data is more CPU demanding than that for certain data. To tackle this problem, we propose a series of pruning techniques that can greatly improve construction efficiency.
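The following toy sketch (our data, names and discretization) illustrates the “complete information” idea for a single split decision: each uncertain value is a discrete pdf, tuples contribute fractional probability mass to each side of a candidate split, and the split with the lowest expected entropy wins:

```python
import math

# Each tuple: (pdf over attribute values, class label).
# A pdf is a list of (value, probability) pairs summing to 1.
data = [
    ([(1.0, 0.6), (3.0, 0.4)], "A"),
    ([(2.0, 1.0)],             "A"),
    ([(4.0, 0.7), (2.5, 0.3)], "B"),
    ([(5.0, 1.0)],             "B"),
]

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

def expected_split_entropy(data, threshold):
    """Split on attribute <= threshold; tuples contribute fractionally
    according to the probability mass falling on each side."""
    left, right = {}, {}
    for pdf, label in data:
        p_left = sum(p for v, p in pdf if v <= threshold)
        left[label] = left.get(label, 0.0) + p_left
        right[label] = right.get(label, 0.0) + (1.0 - p_left)
    n = len(data)
    nl, nr = sum(left.values()), sum(right.values())
    return (nl / n) * entropy(left) + (nr / n) * entropy(right)

# Pick the threshold with the lowest expected entropy.
best = min([1.5, 2.5, 3.5, 4.5], key=lambda t: expected_split_entropy(data, t))
print(best)  # 3.5 minimizes expected entropy on this toy data
```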
Full Paper

IJCST/33/4/A-994
   143 Time Series Classification and Prediction using Kernel Estimates

Nakka Sindhuri, D. Haritha

Abstract

It has long been known that Dynamic Time Warping (DTW) is superior to Euclidean distance for classification and clustering of time series. However, until lately, most research has utilized Euclidean distance because it is more efficiently calculated. A recently introduced technique that greatly mitigates DTW's demanding CPU time has sparked a flurry of research activity. However, the technique and its many extensions still only allow DTW to be applied to moderately large datasets. In addition, almost all of the research on DTW has focused exclusively on speeding up its calculation; there has been little work done on improving its accuracy. In this work, we target the accuracy aspect of DTW performance and introduce a new framework that learns arbitrary constraints on the warping path of the DTW calculation. Apart from improving the accuracy of classification, our technique, as a side effect, speeds up DTW by a wide margin as well. Along with the proposed approach, kernel functions are used to predict the behavioral patterns of the time series; in this paper, polynomial and sigmoid kernel estimates are used.
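A standard DTW implementation with a global window constraint (a Sakoe-Chiba band) on the warping path is sketched below; constraining the band is the kind of warping-path constraint the proposed framework learns per dataset, and it buys speed as a side effect. Implementation details are ours:

```python
import math

def dtw(x, y, window=None):
    """Dynamic Time Warping distance between sequences x and y.
    `window` (Sakoe-Chiba band half-width) constrains the warping path;
    tighter bands are faster and, on many datasets, more accurate."""
    n, m = len(x), len(y)
    w = max(window if window is not None else max(n, m), abs(n - m))
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return math.sqrt(D[n][m])

a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0, 2.0, 1.0]
print(dtw(a, b))             # unconstrained
print(dtw(a, b, window=1))   # warping path kept near the diagonal
```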
Full Paper

IJCST/33/4/A-995
   144 Clustering Through Multi Point of View Based Comparison Evaluate

Tenkati Kalyani, T. Ravi Kumar

Abstract

All clustering methods have to assume some cluster relationship among the data objects that they are applied on, and similarity between a pair of objects can be defined either explicitly or implicitly. The major difference between a traditional dissimilarity/similarity measure and ours is that the former uses only a single viewpoint, the origin, while the latter utilizes many different viewpoints: objects assumed not to be in the same cluster as the two objects being measured. Using multiple viewpoints, a more informative assessment of similarity can be achieved; theoretical analysis and an empirical study are conducted to support this claim. In this paper, we introduce a novel multi-viewpoint-based similarity measure and two related clustering methods. Two criterion functions for document clustering are proposed based on this new measure. We compare them with several well-known clustering algorithms that use other popular similarity measures on various document collections to verify the advantages of our proposal.
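Under our reading of the measure (the paper's exact normalization may differ), the similarity of di and dj can be averaged over viewpoints dh assumed to lie outside their cluster, e.g. the mean of (di − dh)·(dj − dh); with the origin as the only viewpoint this reduces to the cosine similarity of unit-length document vectors:

```python
import numpy as np

def multiviewpoint_sim(di, dj, viewpoints):
    """Average, over viewpoints dh assumed to lie outside the cluster of
    di and dj, of the inner product (di - dh) . (dj - dh)."""
    return float(np.mean([(di - dh) @ (dj - dh) for dh in viewpoints]))

rng = np.random.default_rng(0)
docs = rng.random((6, 4))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)  # unit-length vectors

di, dj = docs[0], docs[1]          # the pair being compared
others = docs[2:]                  # viewpoints outside their cluster
print(multiviewpoint_sim(di, dj, others))
print(multiviewpoint_sim(di, dj, [np.zeros(4)]))  # origin only: plain cosine
```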
Full Paper

IJCST/33/4/A-996
   145 Endowing Privacy Prior to Data Distribution

P. Sunitha, Dr. E.V. Prasad

Abstract

In every organization, sensitive data has to be shared with trusted agents (third parties). When data is distributed to third parties, it is sometimes found in an unauthorized place (e.g., on the web or on somebody's laptop). Data leakage is a serious problem faced by every enterprise, and a leak may or may not be observed by the owner. Leaked data may be source code, intellectual property, price lists, social security codes, etc., depending on the type of company or organization. The owner of the data must estimate the chance that the leaked data came from one or more agents, as opposed to having been independently gathered by others. Here we implement allocation strategies for distributing data to the agents. These methods do not rely on alterations of the released data (e.g., watermarks). We use "realistic but fake" data records to improve the chances of detecting a guilty agent. We can also use the k-anonymity technique to provide privacy to the data, using the generalization method, before distributing it to the agents.
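As a concrete illustration of the generalization step, the toy sketch below coarsens two hypothetical quasi-identifiers (ZIP code and age) until every combination occurs at least k times; the attributes, generalization ladder and records are ours, not the paper's:

```python
from collections import Counter

records = [
    {"zip": "47677", "age": 29}, {"zip": "47602", "age": 22},
    {"zip": "47678", "age": 27}, {"zip": "47905", "age": 43},
    {"zip": "47909", "age": 52}, {"zip": "47906", "age": 47},
]

def generalize(rec, zip_digits, age_band):
    """Coarsen quasi-identifiers: mask trailing ZIP digits, bucket age."""
    zip_gen = rec["zip"][:zip_digits] + "*" * (5 - zip_digits)
    lo = (rec["age"] // age_band) * age_band
    return (zip_gen, f"{lo}-{lo + age_band - 1}")

def k_anonymize(records, k):
    """Try successively coarser generalizations until every
    quasi-identifier combination appears at least k times."""
    for zip_digits, age_band in [(5, 1), (4, 5), (3, 10), (2, 20), (0, 100)]:
        table = [generalize(r, zip_digits, age_band) for r in records]
        if min(Counter(table).values()) >= k:
            return table
    return None

for row in k_anonymize(records, k=3):
    print(row)  # ('47***', '20-39') and ('47***', '40-59'), 3 of each
```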
Full Paper

IJCST/33/4/A-997
   146 Image Compression Based on WPT, WP-SPECK & WP-SPECK SOM Method

Vivek Kumar Jain

Abstract

Image compression is performed using the wavelet packet transform, a transform that changes data from one form to another without any loss of information. In this paper, we use the SPECK block coder for zero-tree ordering. The value of PSNR (Peak Signal-to-Noise Ratio) is improved by the zero-ordering trees, but image quality is degraded, because zero-tree ordering changes the MSE: the bit/pixel errors that occur raise the MSE value. Through a Self-Organizing Map (SOM) network, we maintain the zero-ordering trees and improve both the PSNR value and the image compression ratio.
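The PSNR/MSE relationship the abstract relies on is PSNR = 10·log10(MAX²/MSE), so every bit/pixel error raises the MSE and lowers the PSNR. A minimal sketch on synthetic data (our code):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio: PSNR = 10 * log10(peak^2 / MSE).
    Any bit/pixel error raises the MSE and therefore lowers the PSNR."""
    mse = np.mean((original.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(img.astype(int) + rng.integers(-4, 5, img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
```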
Full Paper

IJCST/33/4/A-998
   147 Cancer Detection by using Advanced Technology

K. Naga Divya, S. Vasundra, T. Sri Lakshmi, A. Madhuri

Abstract

Diagnosis can be achieved by building a model of a certain organ under surveillance and comparing it with real-time physiological measurements taken from the patient. This paper presents the benefits of using data mining techniques in Computer-Aided Diagnosis (CAD), focusing on cancer detection, in order to help doctors make optimal decisions quickly and accurately. In the field of noninvasive diagnosis techniques, Endoscopic Ultrasound Elastography (EUSE) is a recent elasticity imaging technique that allows characterizing the difference between malignant and benign tumors. The main features of the EUSE sample movies are digitized and summarized in vector form using Exploratory Data Analysis (EDA). Neural networks are then trained on the corresponding EUSE sample movie vectors in such a way that these intelligent systems are able to offer a very precise and objective diagnosis, discriminating between benign and malignant tumors. A concrete application of these data mining techniques illustrates the suitability and reliability of this methodology in CAD.
Full Paper

IJCST/33/4/A-999
   148 E-Learning Portal: Its Development & its Scope

Roohinder Kaur

Abstract

With the vast development of various technologies, learning today is no longer confined to classrooms with lecture delivery as the only method of conveying knowledge; rather, electronic means of learning have continued to evolve. Electronic learning (e-Learning), which facilitates education using communications networks, has made learning possible from anywhere at any time using the Internet, wide area networks or local area networks. Notably, e-Learning applications, which have now become central to the learning process, may be developed using proprietary programming tools, and the process of acquiring and using these tools to develop large software applications is not only complex but also requires a huge sum of money. A viable alternative is to utilize the open source software platform, which grants software engineers and institutions the right to reuse, study, distribute and localize software to satisfy users' requirements.
Full Paper

IJCST/33/4/A-1000
   149 Clustering Protocols in WSNs: A study

Sushruta Mishra, Alok Chakrabarty, Bikash Mohapatra, Tahira Mazumdar

Abstract

New advancements in wireless sensor technology have contributed to the development of special protocols unique to sensor networks, where minimal energy consumption is vital. As a result, the focus and effort of researchers is on designing better routing algorithms for a given application and network architecture of interest. Flat routing protocols have been found to be less advantageous than clustering routing protocols when their performance is compared in a large-scale wireless sensor network scenario. This is due to the fact that the clustering operation reduces the amount of redundant messages transmitted all over the network when an event is detected. This paper is an investigation of cluster-based routing protocols for wireless sensor networks.
Full Paper

IJCST/33/4/A-1001
   150 Education Methodology and Instructional Design for E-Learning in Universities

Sonymol Koshy, Dr. Sunil Kumar, Dr. U. V. S Teotia

Abstract

The adoption of Open Education by higher education institutions in a country like India will facilitate teaching the maximum number of students economically, with the help of Web 2.0 technologies and Open Educational Resources. In addition, this opens up learning opportunities and methodologies that are learner-centric rather than institution-centric. Open source software offers many approaches to address the technical challenges in providing optimal delivery of resources for e-learning. The availability and sustenance of free, quality online resources, the necessary infrastructure to provide accessibility, the essential skills required for both online teaching and learning, and finally a thorough method of testing to keep up the reputation and value of the degrees issued must all be explored by universities and institutions committed to the purpose of providing learning opportunities to all. This paper explores the potential of Open Education, the possible impact of web technologies on education in the Indian context, and the exciting new learning opportunities for students and higher educational institutions. The concepts are explained using clear diagrams by the author.
Full Paper

IJCST/33/4/A-1002
   151 Performance and Analysis of DYMO, AODV and DSR Routing Protocols by Effects of Velocity in Mobile Ad-Hoc Networks

Dr. S. Tamilarasan

Abstract

Mobile Ad-Hoc Networks (MANETs) are a promising wireless technology. MANETs are collections of mobile nodes that can dynamically form temporary networks without an infrastructure or centralized administration. Each node in a MANET can communicate with its neighboring nodes using multi-hop wireless links. These nodes can be arbitrarily located and can move freely. Hence, the network topology can change rapidly and dynamically, and MANETs face several challenges, such as routing. Several routing algorithms have been proposed for MANETs. In this article we analyze the performance of three routing protocols under the effects of mobility, considering performance metrics such as throughput, packet delivery ratio, average end-to-end delay and average jitter.
Full Paper

IJCST/33/4/A-1003
   152 Trusted Destination Sequenced Distance Vector Routing Protocol for Mobile Ad-Hoc Network

Mohd Zamir Arif, Gaurav Shrivasta

Abstract

A Mobile Ad hoc Network (MANET) is a self-organized system comprised of mobile wireless nodes with peer relationships. MANETs can operate without fixed infrastructure and can survive rapid changes in the network topology. Due to multi-hop routing and the absence of any trusted third party in an open environment, MANETs are vulnerable to attacks by malicious nodes and to unwanted packet forwarding through un-trusted destination-sequenced distance-vector (UDSDV) routing. In order to decrease unwanted data flooding and routing misbehaviour from malicious or UDSDV nodes, we introduce the concept of trust-based destination-sequenced distance-vector routing, the TDSDV module: if TDSDV routing is applied while UDSDV nodes are present in the network, TDSDV nodes protect the network against unwanted packet flooding and increase network performance. In this paper we propose TDSDV, a trust-based destination-sequenced distance-vector routing protocol, and analyze its behaviour with respect to network parameters such as throughput, packet delivery ratio, end-to-end delay, routing overhead and energy consumed by mobile nodes in all three cases: DSDV, UDSDV and TDSDV routing.
Full Paper

IJCST/33/4/A-1004
   153 Website Link Structures for Dependable Website Navigation and Search

Ram Kinkar Pandey, Dr. Prashant Kumar Pandey

Abstract

The hyperlinking nature of the Web has differentiated it from traditional forms of information resources. Hyperlinks are created between Web pages by their authors mainly to assist users in navigating the vast amount of information on the Web. Hyperlinks can reveal conceptual relationships between linked Web pages, and mining these relationships in the link structure of a Web site can help users navigate and search for desired information on the site effectively and efficiently. First, the link structure of a Web site is visualized for user navigation. Second, Web pages are clustered to reduce the information overload problem in user navigation. Third, hyperlinks are used to obtain authority-based rankings of Web pages for user information search on the Web. This work describes a set of heuristics to optimize web site usability and link structure. In particular, we show how these heuristics can be used to (1) simplify user navigation by automatically reducing transient links and hence providing shortcuts to pages in demand, and (2) automatically provide users with redirects to popular pages in the web site. These heuristics are hoped to complement previously reported graph-based techniques for web site link optimization.
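The abstract does not name the authority-ranking algorithm; a common link-based choice is PageRank-style power iteration, sketched here over a hypothetical four-page site:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration for authority scores from a link structure.
    `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else rank[p] / n
            targets = outs if outs else pages  # dangling page: spread evenly
            for q in targets:
                new[q] += damping * share
        rank = new
    return rank

site = {"home": ["products", "about"],
        "about": ["home"],
        "products": ["home", "faq"],
        "faq": []}
for page, score in sorted(pagerank(site).items(), key=lambda kv: -kv[1]):
    print(f"{page:8s} {score:.3f}")
```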
Full Paper

IJCST/33/4/A-1005
   154 Security Protocol for Controlling Network in P2P Decentralized Systems

P. Bhargava, Satya P Kumar Somayajula, Dr. C.P.V.N.J. Mohan Rao

Abstract

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads among peers, which leaves the network vulnerable to peers who cheat, propagate malicious code, or leech on the network. This paper presents a security protocol, an ambitious approach to protect the P2P network without using any central component. The absence of a central component in a P2P network poses unique challenges. To meet these challenges we use secure peer reputations to determine whether a peer is malicious or good. Once detected, malicious peers are ostracized from the network, as the good peers do not perform any transactions with them. Peers are identified in the P2P network by identity certificates to which their reputations are attached. The identity certificates are generated by a certification authority that the peers maintain collectively on their own, and each peer's reputation information, pertaining to all its past transactions with other peers in the network, is stored locally. As a result, the security protocol coupled with the certification authority not only protects the reputation information from its owner, but also facilitates the secure exchange of reputation information between the two peers participating in a transaction.
Full Paper

IJCST/33/4/A-1006
   155 Efficient CPU Scheduling using Genetic Algorithm Approach

Anu Taneja, Amit Kumar

Abstract

An operating system's performance and throughput are highly affected by CPU scheduling, which is considered an NP-hard problem; an efficient schedule improves system performance. We use genetic algorithms to provide efficient process scheduling, and on the basis of these concepts a genetic scheduling algorithm has been developed. This paper evaluates the performance and efficiency of the proposed genetic algorithm in comparison with other CPU scheduling algorithms, and shows that the proposed GA-based algorithm gives better performance measures and provides an optimal solution.
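A minimal sketch of a GA over the schedule space (operators and parameters are our illustrative choices, not necessarily the paper's): a chromosome is a process execution order and fitness is the average waiting time. For this objective the known optimum is the shortest-job-first order, which the GA should recover:

```python
import random

bursts = [6, 8, 7, 3, 4, 5]  # CPU burst time of each process

def avg_waiting_time(order):
    """Average waiting time of processes executed in the given order."""
    wait, elapsed = 0, 0
    for p in order:
        wait += elapsed
        elapsed += bursts[p]
    return wait / len(order)

def crossover(a, b):
    """Order crossover: keep a prefix of `a`, fill the rest in b's order."""
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [p for p in b if p not in head]

def mutate(order, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

random.seed(42)
pop = [random.sample(range(len(bursts)), len(bursts)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=avg_waiting_time)
    elite = pop[:10]  # selection: keep the fittest schedules
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(20)]
best = min(pop, key=avg_waiting_time)
print(best, avg_waiting_time(best))  # converges to shortest-job-first order
```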
Full Paper

IJCST/33/4/A-1007
   156 Contribution Based Clustering Algorithm for Content-Based Image Retrieval (CBIR)

I. Srinivasa Rao, A. Gauthami Latha

Abstract

Clustering is a form of unsupervised classification that aims at grouping data points based on similarity. In this paper, we propose a new partitional clustering algorithm based on the notion of the 'contribution of a data point'. We apply the algorithm to content-based image retrieval and compare its performance with that of the k-means clustering algorithm. Unlike the k-means algorithm, our algorithm optimizes both intra-cluster and inter-cluster similarity measures. It has three passes, and each pass has the same time complexity as an iteration of the k-means algorithm. Content-based image retrieval (CBIR) is done using an image feature set extracted by applying Haar wavelets to the image at various levels of decomposition. Here, the database image features are extracted by applying Haar wavelets on the gray plane (the average of red, green and blue) and on the color planes (the red, green and blue components). Our experiments on a benchmark image data set reveal that our algorithm improves recall at the cost of precision. The results show that the precision and recall of Haar wavelets are better than those of complete-Haar-transform-based CBIR, which proves that Haar wavelets give better discrimination capability in image retrieval at higher query execution speed, particularly at higher decomposition levels.
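One level of the 2-D Haar decomposition behind the feature set can be sketched as follows (our implementation; per the abstract, the gray plane is the average of the R, G and B components):

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar wavelet transform.
    Returns (LL, LH, HL, HH) sub-bands at half resolution; LL is fed to
    the next level, the detail bands serve as texture features."""
    img = img.astype(np.float64)
    # Rows: average and difference of adjacent pixel pairs.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Columns: the same on the row-transformed output.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(8, 8))      # e.g. (R + G + B) / 3 plane
ll, lh, hl, hh = haar2d_level(gray)
feature = np.concatenate([b.ravel() for b in (ll, lh, hl, hh)])
print(ll.shape, feature.shape)                # (4, 4) (64,)
```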
Full Paper

IJCST/33/4/A-1008
   157 Load Balancer on Client-Side Using Cloud

N. Venkatesh, B. Prasad

Abstract

"Cloud computing" is a term which involves virtualization, distributed computing, networking, software and web services. A cloud consists of several elements such as clients, a datacenter and distributed servers. It offers fault tolerance, high availability, scalability, flexibility, reduced overhead for users, reduced cost of ownership, on-demand services, etc. Central to these issues lies the establishment of an effective load balancing architecture. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing load among the various nodes of a distributed system to improve both resource utilization and job response time, while avoiding a situation where some nodes are heavily loaded and others are idle or doing very little work. Load balancing ensures that every processor in the system, and every node in the network, does approximately the same amount of work at any instant of time. This technique can be sender-initiated, receiver-initiated or symmetric.
Full Paper

IJCST/33/4/A-1009
   158 An Efficient Method of Load Balancing With Fault Tolerance for Mobile Grid

Itishree Behera, Chita Ranjan Tripathy, Satya Prakash Sahoo

Abstract

With rapid progress in mobile communication and computing, grid computing has gained immense importance in academics, industry and the military. Load balancing enables effective allotment of resources to improve the overall performance of the system. As system size increases, the probability of fault occurrence becomes high; hence, a fault-tolerant model is essential in a grid. In this paper, a decentralised load balancing model with fault tolerance for mobile grid computing systems is proposed. Two algorithms are presented: one for decentralized load balancing and the other for fault tolerance. The efficiency and time complexity of the proposed algorithms are found to be better than those of existing works. Simulation results are reported.
Full Paper

IJCST/33/4/A-1010
   159 Proposed Stability Aware Routing Algorithm for Multipath Mobile Ad-Hoc Networks

Udai Shankar

Abstract

Previous work on routing in MANETs has resulted in numerous routing protocols that aim at satisfying constraints such as minimum hop count or low energy. Existing routing protocols often fail to discover stable routes between source and sink when route availability is transient, e.g., due to mobile devices switching their network cards into low-power sleep modes whenever no communication is taking place. In this paper, we introduce a new stability-aware source routing protocol that is capable of predicting the stability (i.e., expiration time) of multiple routes. The proposed protocol selects the route that minimizes hop count while staying available for the expected duration of the packet transmission. The stability-aware routing (SAR) protocol resolves the problems of the SADSR protocol and shows a significant increase in route discovery success rate with comparable route establishment and maintenance overheads.
Full Paper

IJCST/33/4/A-1011
   160 A Novel Implementation of Electrical Appliances Controller Using FPGA

Dr. M. Kamaraju, V. Swathi

Abstract

Nowadays, the usage of electronic appliances is increasing. In offices, employees often leave their rooms without switching off the electrical appliances, so power wastage increases. This wastage can be reduced by building an intelligent system that controls the electrical appliances automatically based on the requirement. The whole system is implemented and tested on an FPGA.
Full Paper

IJCST/33/4/A-1012
   161 Multilayer Image Steganography based on DCT Transform and Mod16 Algorithm

Venkata Satish Babu. B, Sri CH. Ratna Babu

Abstract

The combination of steganography and cryptography provides privacy in communication and security for confidential information. This paper presents a multi-layer steganography scheme based on the 2D-DCT and the Mod16 algorithm. 2D-DCT-based steganography using a mask matrix is performed at level 1; at level 2, an encryption algorithm is first applied and then the Mod16 algorithm is applied to obtain the final stego image. Extracting the confidential information is the reverse of the embedding process. While keeping the image imperceptible, the Mod16 algorithm achieves greater embedding capacity and additional security. Experimental results demonstrate that the proposed system is resistant to steganalysis methods such as the χ2 attack [7].
Full Paper

IJCST/33/4/A-1013
   162 Civilizing Exploitation of Transportation Exhaust

S. Harish, M. Srinivas

Abstract

A key advantage of Infrastructure-as-a-Service (IaaS) clouds is providing users on-demand access to resources. Many applications and workflows are designed for recoverable systems where interruptions in service are expected. For instance, many scientists utilize High Throughput Computing (HTC)-enabled resources, such as Condor, where jobs are dispatched to available resources and terminated when the resource is no longer available. However, to provide on-demand access, cloud providers must either significantly overprovision their infrastructure (and pay a high price for operating resources with low utilization) or reject a large proportion of user requests (in which case the access is no longer on-demand). At the same time, not all users require truly on-demand access to resources. We propose a cloud infrastructure that combines on-demand allocation of resources with opportunistic provisioning of cycles from idle cloud nodes to other processes by deploying backfill Virtual Machines (VMs). We demonstrate that a shared infrastructure between IaaS cloud providers and an HTC job management system can be highly beneficial to both the IaaS cloud provider and HTC users by increasing the utilization of the cloud infrastructure (thereby decreasing the overall cost) and contributing cycles that would otherwise be idle to processing HTC jobs. For demonstration and experimental evaluation, we extend the Nimbus cloud computing toolkit to deploy backfill VMs on idle cloud nodes for processing an HTC workload. Initial tests show an increase in IaaS cloud utilization from 37.5% to 100% during a portion of the evaluation trace, with only 6.39% overhead cost for processing the HTC workload.
Full Paper

IJCST/33/4/A-1014
   163 A Comparative Study on MANET Routing Protocols

Pooja Shah, Parita Oza

Abstract

MANETs are a special type of ad-hoc network in which the nodes are mobile and the topology may not be fixed. As there is no central entity to govern the network, all the nodes forming the network cooperate in network management and message passing. This paper studies routing issues in MANETs and the most used MANET routing protocols, viz. AODV, DSDV, DSR and AOMDV. An analysis of these protocols is done on the basis of packet delivery fraction and average end-to-end delay using NS-2. The paper also proposes a change in AODV for performance improvement.
Full Paper

IJCST/33/4/A-1015
   164 Contrasting Clustering Techniques for Web Personalization in Web Mining

Vikas Bhatnagar, Ruchi Davey

Abstract

Web mining, an application of data mining, is one of the promising future technologies; within it, web usage mining processes web log data to trace user needs and behavior so that web personalization can be performed. It can be used to modify the web services and web pages requested by users. Clustering techniques are used for this purpose. We have analyzed a few clustering techniques and compared them; better clustering techniques will result in a more adequate web personalization process.
Full Paper

IJCST/33/4/A-1016
   165 Liability Discovery and Alleviation in Multilevel Converter STATCOMs

Y. Sravani, M. Lokya, Ch. Hari Krishna

Abstract

Many static synchronous compensators (STATCOMs) utilize multilevel converters due to the following: 1) lower harmonic injection into the power system; 2) decreased stress on the electronic components due to decreased voltages; and 3) lower switching losses. One disadvantage, however, is the increased likelihood of a switch failure due to the increased number of switches in a multilevel converter. A single switch failure, however, does not necessarily force a (2n + 1)-level STATCOM offline. Even with a reduced number of switches, a STATCOM can still provide a significant range of control by removing the module of the faulted switch and continuing with (2n − 1) levels. This paper introduces an approach to detect the existence of a faulted switch, identify which switch is faulty, and reconfigure the STATCOM. The approach is illustrated on an eleven-level STATCOM, and its effect on the dynamic performance and the total harmonic distortion (THD) is analyzed.
Full Paper

IJCST/33/4/A-1017
   166 Image Data Reduction Using Non Negative Matrix Factorization

Ch. Raghu kumar, G. Sudhakar

Abstract

Matrix factorization techniques have been frequently applied in information retrieval, computer vision and pattern recognition. Among them, Non-negative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data, whose representation may be parts-based in the human brain. On the other hand, from the geometric perspective, data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. This paper presents a novel algorithm, called Graph-Regularized Non-negative Matrix Factorization for Multimedia Mining (GNMFMM), in which an affinity graph is constructed to encode the image data and a matrix factorization is sought which respects the graph structure.
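For reference, the unregularized base that GNMFMM builds on can be sketched with the standard Lee-Seung multiplicative updates minimizing ||X − WH||² with non-negative factors; the graph-regularized variant additionally folds a graph-Laplacian term into the coefficient update, which this sketch omits:

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0,
    minimizing the Frobenius error ||X - WH||^2."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update basis ("parts")
    return W, H

X = np.random.default_rng(1).random((20, 12))  # non-negative data matrix
W, H = nmf(X, k=4)
print(np.linalg.norm(X - W @ H))  # reconstruction error after the updates
```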
Full Paper

IJCST/33/4/A-1018
   167 Bayesian Classifiers Programmed in SQL Using PCA

Venkat Nagarjuna K, P Venkata Subba Reddy, G Lakshmi Tulasi

Abstract

The Bayesian classifier is a fundamental classification technique. We also consider different concepts regarding dimensionality reduction techniques for retrieving lossless data. In this paper, we propose a new architecture for pre-processing the data. We improve our Bayesian classifier to produce more accurate models on skewed distributions, data sets with missing information, and subsets of points having significant overlap with each other, which are known issues for clustering algorithms. We are therefore interested in combining a dimensionality reduction technique such as PCA with Bayesian classifiers to accelerate computations and evaluate complex mathematical equations. The proposed architecture contains the following stages: pre-processing of input data, a Naïve Bayesian classifier, a Bayesian classifier, principal component analysis, and the database. Principal Component Analysis (PCA) reduces the number of components by calculating eigenvalues and eigenvectors. We consider two algorithms in this paper: the Bayesian Classifier based on K-Means (BKM) and the Naïve Bayesian classifier (NB).
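The eigenvalue/eigenvector computation behind PCA can be sketched as follows (illustrative code, our names: center the data, eigendecompose the covariance matrix, and keep the top components as the reduced features handed to the classifier):

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the covariance matrix: keep the
    eigenvectors with the largest eigenvalues and project onto them."""
    Xc = X - X.mean(axis=0)                    # center the data
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]
    return Xc @ components, eigvals[order]

rng = np.random.default_rng(0)
X = rng.random((100, 8))
X[:, 3] = 2 * X[:, 0] + 0.01 * rng.random(100)  # a nearly redundant column
Z, variances = pca(X, n_components=4)
print(Z.shape, variances)  # reduced features to feed the classifier
```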
Full Paper

IJCST/33/4/A-1019
   168 A New Intrusion Exposure Structure Using Support Vector Mechanisms and Hierarchical Clustering

Ch. Balarama Krishna, K. Rajesh Kumar

Abstract

Whenever an intrusion occurs, the security and value of a computer system are compromised. Network-based attacks make it difficult for legitimate users to access various network services by purposely occupying or sabotaging network resources and services: sending large amounts of network traffic, exploiting well-known faults in networking services, and overloading network hosts. Intrusion detection attempts to detect computer attacks by examining various data records observed in processes on the network; it is split into two groups, anomaly detection systems and misuse detection systems. Anomaly detection searches for malicious behaviour that deviates from established normal patterns, while misuse detection identifies intrusions that match known attack scenarios. This paper presents a study on improving the training time of SVM, specifically when dealing with large data sets, using hierarchical clustering analysis. We use the Dynamically Growing Self-Organizing Tree (DGSOT) algorithm for clustering because it has proved to overcome the drawbacks of traditional hierarchical clustering algorithms (e.g., hierarchical agglomerative clustering). Clustering analysis helps find the boundary points between two classes, which are the most qualified data points to train an SVM. We present a new approach combining SVM and DGSOT, which starts with an initial training set and expands it gradually using the clustering structure produced by the DGSOT algorithm. We compare our approach with the Rocchio Bundling technique and random selection in terms of accuracy loss and training-time gain using a single benchmark real data set, and show that our proposed variations contribute significantly to improving the training process of SVM, with high generalization accuracy, and outperform the Rocchio Bundling technique.
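To illustrate the core idea of training the SVM only on clustering-derived boundary points, the sketch below uses k-means as a stand-in for DGSOT and keeps, from each class, the points nearest to the other class's cluster centroids; the data, parameters and selection rule are ours:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Two-class toy data.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(500, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

def boundary_subset(Xc, other_centroids, keep):
    """Keep the `keep` points of one class closest to the other class's
    cluster centroids -- the most informative points for the SVM."""
    d = np.min(np.linalg.norm(Xc[:, None, :] - other_centroids[None], axis=2),
               axis=1)
    return Xc[np.argsort(d)[:keep]]

# Cluster each class (k-means stands in for DGSOT here).
c0 = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X0).cluster_centers_
c1 = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X1).cluster_centers_

Xs = np.vstack([boundary_subset(X0, c1, 50), boundary_subset(X1, c0, 50)])
ys = np.array([0] * 50 + [1] * 50)

full = SVC(kernel="rbf").fit(X, y)       # trained on all 1000 points
small = SVC(kernel="rbf").fit(Xs, ys)    # trained on 100 boundary points
print(full.score(X, y), small.score(X, y))  # comparable accuracy
```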
Full Paper

IJCST/33/4/A-1020
   169 Disaster Management System for Dengue

M. Nagabhushana Rao, S.V.V.D.Venugopal

Abstract

Recent developments in information technology have enabled the collection and processing of vast amounts of personal, business and spatial data. It has been widely recognized that spatial data analysis capabilities have not kept up with the need to analyze the increasingly large volumes of geographic data of various themes that are currently being collected and archived. Our study aims to provide the mission-goal strategy (requirements) for an intelligence system that predicts the disaster. Data mining, or knowledge discovery, is becoming more important as more and more corporate data is computerized; intelligent algorithms are ideal for finding rules and unknown information in vast quantities of computer data. The intelligence system is to obtain and process the data, interpret the data, and provide algorithms that give decision makers (a Health Companion) a basis for action. The distribution technique within the Self-Adaptive Disaster Management System establishes the foundation for an architectural implementation in a heterogeneous environment of computational, contextual, cooperative design sets. A network architecture for disaster identification is designed. The intelligence in each of these algorithms provides point and multi-point decision making for evaluating the spread of cholera and dengue. Our contribution in this paper is to design self-adaptive disaster algorithms that identify the spread of cholera and dengue.
Full Paper

IJCST/33/4/A-1021
   170 Visual Wrapper Based Structural Knowledge Discovery from Deep Webpages

B. V. V. S. Prasad, E Srinivas, M Anusha, G Thirupathi

Abstract

Structured knowledge discovery from deep web pages is an important research topic today for crawler-based search engines such as Yahoo and Google. Previous data extraction mechanisms have inherent limitations because they depend on the web page programming language, specifically HTML. In this paper, knowledge discovery from the Deep Web is implemented using visual wrappers, which are independent of the web page programming language. The methodology utilizes visual-feature-based wrappers for knowledge discovery, including structured data boundary extraction (SDBE) and structured data item extraction (SDIE). Experiments show that our approach achieves higher precision and recall than earlier systems such as DEPTA, IEPAD and Road Runner, which are web-page-programming-language-dependent approaches.
Full Paper

IJCST/33/4/A-1022
   171 Second LSB based Image Steganography

Anil Khurana, B. Mohit Mehta

Abstract

This paper presents image steganography in the second Least Significant Bit (LSB) of a grayscale or RGB image. In LSB-based steganography, the text or secret message is embedded in the least significant bits of a digital picture; in second-LSB steganography, the message is embedded in the second least significant bit. In this paper we show the results obtained by inserting a secret message into the second LSB.
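A minimal sketch of second-LSB embedding and extraction on a synthetic grayscale cover (our code; bit position 1 is the second LSB, so each pixel changes by at most 2 gray levels):

```python
import numpy as np

BIT = 1  # bit position: 0 = LSB, 1 = second LSB

def embed(pixels, message):
    """Hide `message` (bytes) in the second LSB of successive pixels."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    out = pixels.flatten().copy()
    out[:bits.size] &= np.uint8(~(1 << BIT) & 0xFF)  # clear the target bit
    out[:bits.size] |= bits << BIT                    # write message bits
    return out.reshape(pixels.shape)

def extract(pixels, n_bytes):
    """Read the message back out of the second LSB."""
    bits = (pixels.flatten()[:n_bytes * 8] >> BIT) & 1
    return np.packbits(bits.astype(np.uint8)).tobytes()

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
stego = embed(cover, b"hi")
print(extract(stego, 2))                              # b'hi'
print(int(np.abs(stego.astype(int) - cover).max()))   # per-pixel change <= 2
```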
Full Paper

IJCST/33/4/A-1023
   172 Comparison of LSB and MSB based Image Steganography

Anil Khurana, B. Mohit Mehta

Abstract

This paper presents a comparison of Least Significant Bit (LSB) and Most Significant Bit (MSB) steganography in a grayscale or RGB image. LSB-based steganography embeds the text or secret message in the least significant bits of a digital picture, whereas MSB-based steganography embeds it in the most significant bit. In this paper we show the difference between embedding data in an image in the two cases, LSB and MSB steganography.
Full Paper

IJCST/33/4/A-1024
   173 Development of Closed Loop Chopper Controlled Drive for PMDC Motors Used in Orthopedic Surgical Simulators

G. Murugananth, Dr. S. Vijayan

Abstract

PMDC motors are widely used in orthopedic surgical simulators. In this paper, a closed-loop chopper control scheme is developed for this motor. The drive circuit has two loops: an inner current control loop and an outer speed control loop. The chopper drive is simulated using Matlab/Simulink; the inner current loop employs a hysteresis controller and the outer loop consists of a PI controller. The responses of both loops were studied. The simulation results reveal that efficient control of the motor can be achieved using the closed-loop drive system.
Full Paper

IJCST/33/4/A-1025
   174 Swarm Intelligence Based MANET Routing Protocol

M. S. Lavanyakrishnaveni, N. Rama Krishnaiah

Abstract

A Mobile Ad-Hoc Network (MANET) is a set of mobile nodes which communicate over radio and do not need any infrastructure. Such networks are very flexible and suitable for many situations and applications, as they allow temporary communication to be established without pre-installed infrastructure. Due to the limited transmission range of wireless interfaces, communication traffic has to be relayed over several intermediate nodes to enable communication between two nodes; therefore, these networks are also called mobile multi-hop ad-hoc networks. The main problem in mobile ad-hoc networks is still finding a route between the communication endpoints, a problem aggravated by node mobility. The literature offers many approaches that try to handle this problem, but there is no routing algorithm that fits all cases. This paper presents a new approach for on-demand ad-hoc routing algorithms based on swarm intelligence. Ant colony algorithms are a subset of swarm intelligence and consider the ability of simple ants to solve complex problems by cooperation. Interestingly, the ants do not need any direct communication for the solution process; instead, they communicate by stigmergy, i.e., indirectly, by modifying their environment. Several algorithms based on the ant colony paradigm have been introduced in recent years to solve different problems. This paper proposes a protocol called AntHocNet, a hybrid ACO routing algorithm with a path-caching technique for better routing. The protocol applies to multipath and dynamic networks, creating multiple paths for transmitting data from source to destination within the same data session. When the network topology changes, routes must be restored quickly; this is achieved through a new route discovery process.
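The stigmergy mechanism can be sketched in a few lines: forward ants choose next hops with probability proportional to pheromone, and backward ants reinforce the links of the path they used while pheromone elsewhere evaporates. The topology and parameters below are ours, and AntHocNet itself is considerably more elaborate:

```python
import random

# Pheromone table: pheromone[node][neighbor] guides next-hop choice.
links = {"S": ["A", "B"], "A": ["D"], "B": ["D"], "D": []}
pheromone = {n: {m: 1.0 for m in nbrs} for n, nbrs in links.items()}

def next_hop(node, beta=1.0):
    """Choose a neighbor with probability proportional to pheromone^beta."""
    nbrs = links[node]
    weights = [pheromone[node][m] ** beta for m in nbrs]
    return random.choices(nbrs, weights=weights)[0]

def release_ant(src, dst, deposit=1.0, evaporation=0.1):
    """Forward ant walks src -> dst; backward ant reinforces the path."""
    path, node = [src], src
    while node != dst:
        node = next_hop(node)
        path.append(node)
    for n in pheromone:                       # evaporate everywhere
        for m in pheromone[n]:
            pheromone[n][m] *= (1.0 - evaporation)
    for a, b in zip(path, path[1:]):          # reinforce the used links
        pheromone[a][b] += deposit / len(path)
    return path

random.seed(0)
for _ in range(20):
    release_ant("S", "D")
print(pheromone["S"])  # positive feedback: one next hop accumulates more
```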
Full Paper

IJCST/33/4/A-1026
   175 Optimizing the Query Results Maneuvers Based on Concept Hierarchies

A. Krishna Mohan, J. D. Rajendra, M. H. M Krishna Prasad

Abstract

Searching and navigating query results over large databases in different domains often yields a vast number of results, of which only a small number are required by the user. Categorization and ranking techniques have been used to reduce this information overload problem, and efficient navigation through results categorization and ranking is the focus of this work. In this paper we use a combined approach of ranking and categorization. Our approach also uses a clustering technique that performs offline processing of all the users' query results; we then provide efficient navigation over these clusters, which reduces the navigational cost so that users can easily navigate and match their needs. We also propose a generalized top-down approach for efficient navigation of database query results that applies to all databases. We show experimentally that the system outperforms a state-of-the-art categorization system and works well for data sets in different domains.
Full Paper

IJCST/33/4/A-1027
   176 Finding of Weighted Sequential Web Access Patterns for Effective Web Page Recommendations

K. Suneetha, Dr. M. Usha Rani

Abstract

Recommender systems aim at directing users through the Web's information space, toward the resources that best meet their needs and interests, by extracting knowledge from previous users' interactions. Currently, much research focuses on web page recommendation using sequential pattern mining techniques. Sequential access pattern mining discovers interesting and frequent user access patterns from web logs. Most previous studies have adopted Apriori-like sequential pattern mining techniques, which face the problem of requiring expensive multiple scans of the database. In this paper, a traditional sequential pattern mining algorithm, PrefixSpan, is modified by incorporating two measures: time spent and recency of view. The weighted sequential patterns are then utilized to construct the recommendation model using a Patricia-trie-based tree structure. Finally, recommendations for the current users are made with the help of a Markov model.
Full Paper

IJCST/33/4/A-1028
   177 Different Types of Layouts in SQL to Prepare Datasets & Reports

G. Kamala Kumari, N. Tulasi Raju, S. Uma Maheswara Rao

Abstract

Preparing reports and data sets is a difficult task in data mining. Our proposed horizontal aggregations provide several unique features and advantages. First, they represent a template to generate SQL code from a data mining tool. Such SQL code automates writing SQL queries, optimizing them and testing them for correctness, and reduces manual work in the data preparation phase of a data mining project. Second, since the SQL code is automatically generated, it is likely to be more efficient than SQL code written by an end user, for instance a person who does not know SQL well or who is not familiar with the database schema (e.g., a data mining practitioner). Therefore, data sets can be created in less time. Third, the data set can be created entirely inside the DBMS. In modern database environments, it is common to export denormalized data sets to be further cleaned and transformed outside the DBMS in external tools (e.g., statistical packages). Unfortunately, exporting large tables outside a DBMS is slow, creates inconsistent copies of the same data and compromises database security. We therefore provide a more efficient, better integrated and more secure solution compared to external data mining tools. Horizontal aggregations just require a small syntax extension to aggregate functions called in a SELECT statement; alternatively, they can be used to generate SQL code from a data mining tool to build data sets for data mining analysis. We propose three fundamental methods to evaluate horizontal aggregations: CASE, exploiting the programming CASE construct; SPJ, based on standard relational algebra operators (SPJ queries); and PIVOT, using the PIVOT operator offered by some DBMSs. Experiments with large tables compare the proposed query evaluation methods: the CASE method has speed similar to the PIVOT operator and is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereas the SPJ method does not.
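As an illustration of the CASE method as a code-generation template (table and column names are hypothetical), a few lines of Python can emit the horizontal-aggregation query, one output column per value of the pivoting column; the SPJ and PIVOT variants would be generated analogously:

```python
def horizontal_sum_case(table, group_by, pivot_col, pivot_values, measure):
    """Generate the CASE-method SQL for a horizontal aggregation:
    one output column per value of `pivot_col`."""
    cols = ",\n  ".join(
        f"SUM(CASE WHEN {pivot_col} = '{v}' THEN {measure} ELSE 0 END)"
        f" AS {measure}_{v}"
        for v in pivot_values)
    return (f"SELECT {group_by},\n  {cols}\n"
            f"FROM {table}\nGROUP BY {group_by};")

# Hypothetical sales table: one row per (store, quarter) measurement.
print(horizontal_sum_case("sales", "store_id", "quarter",
                          ["Q1", "Q2", "Q3", "Q4"], "amount"))
```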
Full Paper

IJCST/33/4/A-1029
   178 Multideployment and Multisnapshotting on Clouds

B. Subbarao, S. Anilkumar, Dr. P. Harini

Abstract

Infrastructure as a Service (IaaS) cloud computing has revolutionized the way we think of acquiring resources by introducing a simple change: allowing users to lease computational resources from the cloud provider's datacenter for a short time by deploying Virtual Machines (VMs) on these resources. This new model raises new challenges in the design and development of IaaS middleware. One of those challenges is the need to deploy a large number (hundreds or even thousands) of VM instances simultaneously. Once the VM instances are deployed, another challenge is to simultaneously take a snapshot of many images and transfer them to persistent storage to support management tasks, such as suspend-resume and migration. With datacenters growing rapidly and configurations becoming heterogeneous, it is important to enable efficient concurrent deployment and snapshotting that are at the same time hypervisor independent and ensure a maximum compatibility with different configurations. This paper addresses these challenges by proposing a virtual file system specifically optimized for virtual machine image storage. It is based on a lazy transfer scheme coupled with object versioning that handles snapshotting transparently in a hypervisor-independent fashion, ensuring high portability for different configurations. Large-scale experiments on hundreds of nodes demonstrate excellent performance results: speedup for concurrent VM deployments ranges from a factor of 2 up to 25, with a reduction in bandwidth utilization of as much as 90%.
Full Paper

IJCST/33/4/A-1030