International Journal of Computer Science and Technology (IJCST), Vol. 3, Issue 2, Ver. 6, April to June 2012
S.No. Research Topic Paper ID
   207 Robustly Object Location Monitoring Scheme in Wireless Sensor Networks
Sai Lakshmi Velpula, Y. V. Adi Satyanarayana, P.Pedda Sadhu Naik

Abstract

Location monitoring systems are used to detect human activities and provide monitoring services. We consider an aggregate location monitoring system in which the wireless sensor nodes are counting sensors, capable only of detecting the number of objects within their sensing areas. When personal locations are monitored by a third party (an untrusted server), they are vulnerable to privacy threats. Wireless sensor networks allow users to access services privately by using a series of routers to hide the client's IP address from the server. We propose a privacy-preserving location monitoring system for wireless sensor networks. In our system, we design two in-network location anonymization algorithms, namely a Cloaked Area Determination algorithm and a quality-enhanced histogram algorithm, which enable the system to provide high-quality location monitoring services for system users while preserving personal location privacy. The Cloaked Area Determination algorithm aims to minimize communication and computational cost, while the quality-enhanced histogram approach estimates the distribution of the monitored persons from the gathered aggregate location information. The estimated distribution is then used to provide location monitoring services by answering range queries.
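As an illustration of the last step above, here is a minimal Python sketch (toy data and hypothetical helper names, not the authors' algorithm) of answering a range query from aggregate, cloaked-area counts, assuming persons are uniformly distributed within each cloaked area:

def overlap_1d(a_lo, a_hi, b_lo, b_hi):
    # length of the intersection of two intervals
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

def estimate_range_count(cloaked_areas, query):
    # cloaked_areas: list of ((x1, y1, x2, y2), count); query: (x1, y1, x2, y2)
    qx1, qy1, qx2, qy2 = query
    total = 0.0
    for (x1, y1, x2, y2), count in cloaked_areas:
        frac_x = overlap_1d(x1, x2, qx1, qx2) / (x2 - x1)
        frac_y = overlap_1d(y1, y2, qy1, qy2) / (y2 - y1)
        total += count * frac_x * frac_y   # uniformity assumption
    return total

areas = [((0, 0, 4, 4), 8), ((4, 0, 8, 4), 4)]
print(estimate_range_count(areas, (2, 0, 6, 4)))   # 8*0.5 + 4*0.5 = 6.0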
Full Paper

IJCST/32/6/A-816
   208 E-mails Mining using Generalized Addressing Patterns (GAP)
Lakshmi Sravani Grande, K. Mallikarjuna Mallu, P.Pedda Sadhu Naik

Abstract

Email has become an important medium of communication. A user may receive tens or even hundreds of emails every day, and handling them takes much time. It is therefore necessary to provide automatic approaches to relieve the burden of processing email. A straightforward method is to group similar emails by supervised classification over fields such as from-mail-id, to-mail-id, subject, message, sending time, and attachments. Email mining is the process of discovering useful patterns from emails, and clustering techniques can be applied to email data to create groups of similar emails. In our algorithm, natural language processing and frequent itemset mining techniques are used to automatically generate meaningful Generalized Addressing Patterns (GAPs) from the from-mail-id, to-mail-id, subject, message, sending-time, and attachment fields of emails. We then put forward a novel unsupervised approach that treats GAPs as pseudo class labels and conducts email clustering in a supervised manner, although no human labeling is involved. The proposed algorithm is not only expected to improve clustering performance; it can also provide meaningful descriptions of the resulting clusters through the GAPs. Experimental results on an open dataset and a personal email dataset we collected demonstrate that the proposed algorithm outperforms the K-means algorithm in terms of the popular F1 measure. Furthermore, cluster naming readability is improved by 68.5% on the personal email dataset.
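The pattern-mining step can be pictured with a minimal Python sketch (toy emails and simplified fields; the authors' GAP generation also uses NLP, which is omitted here): frequent itemsets over header fields become pseudo class labels.

from itertools import combinations
from collections import Counter

emails = [
    {"from": "alice@x.com", "to": "team@x.com", "subject": "weekly status report"},
    {"from": "bob@x.com",   "to": "team@x.com", "subject": "status report draft"},
    {"from": "carol@y.com", "to": "alice@x.com", "subject": "lunch plans"},
]

def item_set(mail):   # treat addresses and subject words as items
    return {mail["from"], mail["to"], *mail["subject"].split()}

min_support = 2
counts = Counter()
for mail in emails:
    toks = sorted(item_set(mail))
    for k in (1, 2):                       # count 1- and 2-itemsets
        counts.update(combinations(toks, k))
frequent = {iset for iset, c in counts.items() if c >= min_support}

# label each email with its largest frequent itemset (the pseudo class label)
for mail in emails:
    toks = item_set(mail)
    label = max((i for i in frequent if set(i) <= toks), key=len, default=("misc",))
    print(mail["subject"], "->", label)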
Full Paper

IJCST/32/6/A-817
   209 Proactive Motif on Nano Biotechnology Pursuing the Human Needs
J. Srinivasa Rao, G. Sudhir

Abstract

Trends depend on technology, expanding with the rapid development of both software and hardware. The evolution of nanotechnology has diverged into different methodologies such as bioinformatics, biotechnology, nano-organics, and nanobiotechnology, among many others, which are now the frontiers of nanotechnology. Visions for the future of nanotechnology give the greatest weight to nanobiotechnology. The integration and complexity of nanobiotechnology are best accommodated by coordinating and distributing work across related databases. ELSA experts should be encouraged to work collaboratively with science departments, research institutes, and industry to help explore the Ethical, Legal, and Social Aspects of developing nanobiotechnology.
Full Paper

IJCST/32/6/A-818
   210 Ensuring Integrity in Cloud Computing
G.Sunil Babu, N.Naga Subrahmanyeswari, S.Kavitha

Abstract

Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. It moves application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges which have not yet been well understood. This work studies the problem of ensuring the integrity of data storage in cloud computing. In particular, we consider the task of allowing a third-party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of the TPA frees the client from auditing whether the data stored in the cloud are indeed intact, which can be important in achieving economies of scale for cloud computing. In this paper we introduce the Dynamic Intelligent Server (DIS), which shares the CSP's work and enhances dynamic data storage allocation. We follow methods that support data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion; this is also a significant step toward practicality, since services in cloud computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack support for either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works, and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve existing proof-of-storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multiuser setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows that the proposed schemes are highly efficient and provably secure.
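The Merkle Hash Tree idea behind block tag authentication can be illustrated with a minimal Python sketch (illustrative only; the paper's protocol additionally involves signatures and the TPA): the verifier keeps only the root hash, and a block plus its sibling path proves the block is intact.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(blocks, index):
    level, path = [h(b) for b in blocks], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1                     # sibling differs in the last bit
        path.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(block, path, root):
    node = h(block)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)
print(verify(b"block2", merkle_proof(blocks, 2), root))    # True
print(verify(b"tampered", merkle_proof(blocks, 2), root))  # False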
Full Paper

IJCST/32/6/A-819
   211 Publishing and Querying the Data in Peer-to-Peer Networks
M.Madhu Chandra, K.Devi Priya

Abstract

Efficient handling of multidimensional data is a challenging issue in P2P systems. P2P is a distributed application architecture that partitions tasks or workloads among peers, which are equally privileged, equipotent participants in the application. Each computer in the network is referred to as a node. The owner of each computer on a P2P network sets aside a portion of its resources, such as processing power, disk storage, or network bandwidth, and makes it directly available to other network participants, without the need for central coordination by servers or stable hosts. A P2P-based framework supporting the extraction of aggregates from historical multidimensional data is proposed, which provides efficient and robust query evaluation. When a data population is published, the data are summarized in a synopsis, consisting of an index built on top of a set of subsynopses (storing compressed representations of distinct data portions). The index and the subsynopses are distributed across the network, and suitable replication mechanisms, taking into account the query workload and network conditions, are employed to provide appropriate coverage for both the index and the subsynopses.
Full Paper

IJCST/32/6/A-820
   212 Location Anonymity in Wireless Sensor Networks
K.Chandra Sekhar, T. Sudha Rani

Abstract

Location monitoring systems are used to detect human activities and provide monitoring services. We consider an aggregate location monitoring system in which the wireless sensor nodes are counting sensors, capable only of detecting the number of objects within their sensing areas. When personal locations are monitored by a third party (an untrusted server), they are vulnerable to privacy threats. Wireless sensor networks allow users to access services privately by using a series of routers to hide the client's IP address from the server. We propose a privacy-preserving location monitoring system for wireless sensor networks. In our system, we design two in-network location anonymization algorithms, namely a Cloaked Area Determination algorithm and a quality-enhanced histogram algorithm, which enable the system to provide high-quality location monitoring services for system users while preserving personal location privacy. The Cloaked Area Determination algorithm aims to minimize communication and computational cost, while the quality-enhanced histogram approach estimates the distribution of the monitored persons from the gathered aggregate location information. The estimated distribution is then used to provide location monitoring services by answering range queries.
Full Paper

IJCST/32/6/A-821
   213 Dynamic Navigation Methods on Biomedical Databases
M.S.Sudheer, K.N.V.S.S.K.Rajesh

Abstract

Search queries on biomedical databases, such as PubMed, often return a large number of results, only a small subset of which is relevant to the user. Ranking and categorization, which can also be combined, have been proposed to alleviate this information overload problem. Results categorization for biomedical databases is the focus of this work. A natural way to organize biomedical citations is according to their MeSH annotations; MeSH is a comprehensive concept hierarchy used by PubMed. In this paper, we present the Query Shortening Technique (QST), used to reduce the navigation results obtained over large databases such as PubMed. The proposed method enhances previous systems by introducing the Query Shortening Technique with predefined multi-filters in the database, according to data availability. The system incorporates concept hierarchies and presents the results for easy navigation: the query results are organized into a navigation tree, and at each node expansion step a small subset of the concept nodes is revealed, selected such that the expected user navigation cost is minimized. In contrast to previous systems, the QST method outperforms them, optimizing query response time and minimizing the query result set for easy user navigation.
Full Paper

IJCST/32/6/A-822
   214 Association Rule Mining using Count Circulation
P. Jagadeesh Babu, S. S. V. Apparao

Abstract

One of the important problems in data mining is discovering association rules from databases of transactions, where each transaction consists of a set of items. The most time-consuming operation in this discovery process is computing the frequency of occurrence of interesting subsets of items (called candidates) in the database of transactions. To prune the exponentially large space of candidates, most existing algorithms consider only those candidates that have a user-defined minimum support. Even with this pruning, the task of finding all association rules requires a lot of computational power and memory. Parallel computers offer a potential solution to the computational requirements of this task, provided efficient and scalable parallel algorithms can be designed. In this paper, we implement sequential, parallel, and count-distribution mining of association rules using the Apriori algorithm and evaluate the performance of the three algorithms in terms of time and space.
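A minimal Python sketch of the count-distribution idea (toy transactions; a real implementation runs the partitions on separate processors and exchanges counts rather than updating a shared counter): each partition counts candidate supports locally, the counts are summed, and candidates below the minimum support are pruned.

from itertools import combinations
from collections import Counter

partitions = [
    [{"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"}],
    [{"bread", "milk", "butter"}, {"bread", "milk"}],
]
min_support = 3

def local_counts(candidates, transactions):
    c = Counter()
    for t in transactions:
        for cand in candidates:
            if cand <= t:                    # candidate contained in transaction
                c[cand] += 1
    return c

items = sorted({i for part in partitions for t in part for i in t})
candidates = [frozenset(p) for p in combinations(items, 2)]   # size-2 candidates

total = Counter()
for part in partitions:                      # in parallel in the real algorithm
    total.update(local_counts(candidates, part))

frequent = {c: n for c, n in total.items() if n >= min_support}
print(frequent)   # {frozenset({'bread', 'milk'}): 3}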
Full Paper

IJCST/32/6/A-823
   215 Extended Approach using Genetic Algorithms for Randomized Unit Testing
Vamsidhar Yendapalli, K.Ratnamala

Abstract

Randomized testing is an effective method for testing software units. The thoroughness of randomized unit testing varies widely according to the settings of certain parameters, such as the relative frequencies with which methods are called. In this paper, we follow Nighthawk, a system that uses a Genetic Algorithm (GA) to find parameters for randomized unit testing that optimize test coverage, and we extend this method by writing tests before fixing bugs and by using Version Control Modulator Lists (VCMLs), which enhance the previous methods. Designing GAs is somewhat of a black art. We therefore use a Feature Subset Selection (FSS) tool to assess the size and content of the representations within the GA. Using that tool, we can reduce the size of the representation substantially, while still achieving most of the coverage found using the full representation. These results suggest that FSS could significantly optimize meta-heuristic search-based software engineering tools. The VCMLs dynamically upgrade the system.
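The GA layer can be pictured with a minimal Python sketch (the fitness function here is a toy stand-in; Nighthawk's real fitness is the coverage achieved by actually running randomized tests): a chromosome holds relative call frequencies for three methods of a unit under test, and selection, crossover, and mutation search for frequencies that maximize fitness.

import random

random.seed(1)

def fitness(freqs):
    # toy stand-in for coverage: rewards balanced call frequencies
    total = sum(freqs)
    return 1.0 - sum(abs(f / total - 1 / 3) for f in freqs)

def mutate(c):
    i = random.randrange(len(c))
    return c[:i] + [max(1, c[i] + random.choice((-2, 2)))] + c[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(1, 20) for _ in range(3)] for _ in range(20)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # keep the fitter half
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(10)]
best = max(pop, key=fitness)
print(best, round(fitness(best), 3))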
Full Paper

IJCST/32/6/A-824
   216 Image Search System Based on the Content in the Image
D. T. V. Dharmajee Rao, BH. Anantha Laxmi

Abstract

The information village depends heavily on search engines today, and optimizing search engines to best suit the user query is a continuing area of interest for researchers. The problem is even more interesting for image retrieval. At present, image search in engines such as Google is driven by the keywords tagged to the images. To enhance the performance of these search engines in retrieving images for a user query by matching on the color, size, shape, and content of the image, this paper proposes a search system based on image content.
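A minimal Python sketch of the content-based idea (synthetic pixel data; a real system would load actual images and add shape and texture features): images are indexed by color histograms, and a query image is matched by histogram intersection rather than by tagged keywords.

def color_histogram(pixels, bins=4):
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    n = len(pixels)
    return [v / n for v in hist]

def intersection(h1, h2):        # 1.0 means identical color distributions
    return sum(min(a, b) for a, b in zip(h1, h2))

index = {
    "sunset.jpg": color_histogram([(250, 120, 30)] * 90 + [(20, 20, 60)] * 10),
    "forest.jpg": color_histogram([(30, 160, 40)] * 95 + [(90, 60, 20)] * 5),
}
query = color_histogram([(240, 110, 40)] * 80 + [(30, 30, 70)] * 20)
print(max(index, key=lambda name: intersection(index[name], query)))   # sunset.jpg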
Full Paper

IJCST/32/6/A-825
   217 Distributed Depth Adjustment Control of Underwater Sensor Networks
U Devee Prasan, P Krishna Kishor

Abstract

Understanding the dynamics of bodies of water and their impact on the global environment requires sensing information over the full volume of water. We develop a gradient-based decentralized controller that dynamically adjusts the depths of a network of underwater sensors to optimize sensing for computing maximally detailed volumetric models, and we prove that the controller converges to a local minimum. We implement the controller on an underwater sensor network whose nodes are capable of adjusting their depths. Utilizing a depth adjustment system on an underwater sensor network provides this capability while also improving global sensing and communication. This paper presents a depth adjustment system for waters up to 50 m deep that connects to the AquaNode sensor network. We performed experiments characterizing the system and demonstrating its functionality, and we discuss the application of this device in improving acoustic communication.
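The decentralized gradient-descent structure can be pictured with a minimal Python sketch (the quadratic cost below is a hypothetical stand-in for the paper's volumetric sensing objective): each node repeatedly nudges its depth against the gradient of a local cost that couples it to its neighbors, and the depths settle at a local minimum.

def local_cost_gradient(depth, neighbor_depths, target):
    # gradient of (depth - target)^2 plus the average coupling to neighbors
    g = 2 * (depth - target)
    g += sum(2 * (depth - n) for n in neighbor_depths) / len(neighbor_depths)
    return g

depths = [5.0, 20.0, 35.0]       # three nodes on one mooring line
targets = [10.0, 20.0, 30.0]     # depths of the features each node should sense
step = 0.05
for _ in range(200):
    grads = [local_cost_gradient(depths[i],
                                 [depths[j] for j in (i - 1, i + 1) if 0 <= j < 3],
                                 targets[i])
             for i in range(3)]
    depths = [d - step * g for d, g in zip(depths, grads)]
print([round(d, 2) for d in depths])   # settles between targets and neighbors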
Full Paper

IJCST/32/6/A-826
   218 An Efficient and Optimal Solution Based Document Clustering
Ch Seshadri Rao, P Surya Prabhakara Rao

Abstract

There are two important problems worth researching in the field of personalized information services based on user models. One is how to obtain and describe a user's personal information, i.e., building the user model; the other is how to organize the information resources, i.e., document clustering. It is difficult to find the desired information without a proper clustering algorithm. Several new ideas have been proposed in recent years, but most of them take into account only the text information, although other useful information, such as text size, font, and other appearance characteristics (so-called visual features), may contribute more to document clustering. In this paper we introduce a new technique called the Closed Document Clustering Method (CDCM), which uses advanced clustering metrics. This method enhances the previous approach of clustering scientific documents based on visual features, the so-called VF-Clustering algorithm. Five kinds of visual features of documents are defined: body, abstract, subtitle, keyword, and title. The idea of crossover and mutation from genetic algorithms is used to adjust the value of k and the cluster centers in the k-means algorithm dynamically. Experimental results support our approach. The main aim of this paper is to eliminate redundant documents and assign a priority to each document in the cluster. Among the five visual features, the clustering accuracy and stability of the subtitle feature are second only to those of the body, but its efficiency is much better because the subtitle is much smaller than the body. The accuracy of clustering by combining subtitle and keyword is better than either of them individually, but slightly less than that of combining subtitle, keyword, and body. If efficiency is an essential factor, clustering by combining subtitle and keyword can be an optimal choice. The proposed system outperforms the previous system.
Full Paper

IJCST/32/6/A-827
   219 Efficiency of Wireless Intrusion Detection Systems using Bayesian Classification Model
D. Srinivasa Reddy, Y. V. AdiSatyanarayana, P. P. S. Naik, B. Subba Reddy

Abstract

Network intrusion detection systems have become a standard component in security infrastructures. With the tremendous growth of network-based services and sensitive information on networks, network security is receiving more attention than ever, and intrusion poses a serious security risk in a network environment. The ever-growing variety of new intrusion types poses a serious problem for their detection. In this paper, we apply an efficient data mining algorithm, naive Bayes, to anomaly-based network intrusion detection. Experimental results on the KDD Cup '99 data set show the novelty of our approach in detecting network intrusions. We observe that the proposed technique performs better in terms of false positive rate, cost, and computational time when applied to the KDD '99 data sets than a back-propagation neural network based approach.
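A minimal sketch of the classification step using scikit-learn's Gaussian naive Bayes on toy connection records (the two features below are made up for illustration; the paper uses the full KDD'99 feature set):

from sklearn.naive_bayes import GaussianNB

# [duration_seconds, kbytes_sent]
X_train = [[2, 5], [3, 7], [2, 6], [40, 900], [55, 1200], [48, 1000]]
y_train = ["normal"] * 3 + ["attack"] * 3

model = GaussianNB().fit(X_train, y_train)
print(model.predict([[4, 8], [50, 950]]))       # ['normal' 'attack']
print(model.predict_proba([[4, 8]]).round(3))   # class posterior estimates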
Full Paper

IJCST/32/6/A-828
   220 A Cluster Based Cooperative Caching Mechanism in MANET
R. Sarada, Shirin Bhanu Koduri, Dr. M. Seetha

Abstract

Based on recent progress and advances in computing and communication technologies, next-generation computing environments can be anticipated. The next generation of mobile communications will include both established infrastructure-based wireless networks and novel infrastructureless mobile ad hoc networks (MANETs). A MANET is a collection of wireless nodes that can dynamically form a network to exchange information without using any preexisting fixed network infrastructure. The special features of MANETs bring this technology great opportunities together with severe challenges. This paper describes the fundamentals of ad hoc networking by presenting the concept, features, and applications of MANETs, and also presents some of the technical challenges MANETs pose. The routing protocols meant for wired networks cannot be used for mobile ad hoc networks because of the mobility of nodes. Ad hoc routing protocols can be divided into two classes, table-driven and on-demand, and routing in wireless mobile ad hoc networks should be time-efficient and resource-saving. One approach to reducing traffic during the routing process is to divide the network into clusters. This work mainly focuses on the cluster-based routing protocol (CBRP) and its comparative analysis with three other on-demand routing protocols. We present a scheme, called global cluster cooperation (GCC), for caching in mobile ad hoc networks, where the network topology is partitioned into non-overlapping clusters based on physical network proximity. In this scheme, the cluster cache state (CCS), which is the information regarding the contents of all the mobile nodes (MNs) within a cluster, is maintained at each node. For a cluster cache miss, we propose to keep a global cache state (GCS) at a node called the cluster state node (CSN). We also implement a new protocol, the adaptive cache scheme, compare it to the other three protocols on the basis of time complexity, number of messages, and message size, and give the simulation results.
Full Paper

IJCST/32/6/A-829
   221 Detecting Malicious Packets Losses Using Red Algorithm
Goje Roopa, Dr. Jayadev Gyani, Pulluri Srinivas Rao

Abstract

We consider the problem of detecting whether a compromised router is maliciously manipulating its stream of packets. In particular, we are concerned with a simple yet effective attack in which a router selectively drops packets destined for some victim. Unfortunately, it is quite challenging to attribute a missing packet to a malicious action, because normal network congestion can produce the same effect: modern networks routinely drop packets when the load temporarily exceeds their buffering capacities. Previous detection protocols have tried to address this problem with a user-defined threshold: too many dropped packets imply malicious intent. However, this heuristic is fundamentally unsound; setting this threshold is, at best, an art and will certainly create unnecessary false positives or mask highly focused attacks. We have designed, developed, and implemented a compromised router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur. Once the ambiguity from congestion is removed, subsequent packet losses can be attributed to malicious actions.
Full Paper

IJCST/32/6/A-830
   222 Efficient Clustering Algorithms in Text Mining
Nataraj Gudapaty, G Loshma, Dr. Nagaratna P Hegde

Abstract

Text mining processes unstructured (textual) information, extracts meaningful numeric indices from the text, and thus makes the information contained in the text accessible to various data mining (statistical and machine learning) algorithms. Information can be extracted to derive summaries of the words contained in documents. Document clustering is a fundamental task of text mining, by which efficient organization, navigation, summarization, and retrieval of documents can be achieved. The clustering of documents presents difficult challenges due to the sparsity and high dimensionality of text data and to complex semantics. Building on the K-means and PAM (Partitioning Around Medoids) text clustering algorithms and a semantics-based vector space model, a semantic PAM text clustering model is proposed to address the high-dimensional and sparse characteristics of text data sets. The model reduces the semantic loss of the text data and improves the quality of text clustering. We propose a novel adaptive kernel K-means clustering algorithm and a PAM algorithm to combine textual content and citation information for clustering. Within this semantics-based text mining process, K-means and PAM are compared, and the time and space complexities of the two algorithms are presented as bar and line charts.
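The PAM side of the comparison can be pictured with a minimal Python sketch (tiny term-count vectors and cosine distance; the semantics-based vector space model is omitted): medoids are swapped with non-medoids whenever the swap lowers the total distance.

import random

def cosine_dist(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return 1 - dot / (na * nb)

def total_cost(medoids, docs):
    return sum(min(cosine_dist(d, docs[m]) for m in medoids) for d in docs)

def pam(docs, k, seed=0):
    random.seed(seed)
    medoids = random.sample(range(len(docs)), k)
    improved = True
    while improved:                  # swap a medoid with a non-medoid
        improved = False             # whenever the total cost drops
        for m in list(medoids):
            for o in range(len(docs)):
                if o in medoids:
                    continue
                trial = [o if x == m else x for x in medoids]
                if total_cost(trial, docs) < total_cost(medoids, docs):
                    medoids, improved = trial, True
    return medoids

docs = [[3, 0, 1], [4, 0, 0], [3, 1, 0],    # "sports-like" term counts
        [0, 4, 1], [0, 5, 2], [1, 4, 0]]    # "finance-like" term counts
medoids = pam(docs, k=2)
print("medoids:", medoids)
print("assignment:", [min(medoids, key=lambda m: cosine_dist(d, docs[m])) for d in docs])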
Full Paper

IJCST/32/6/A-831
   223 Monitoring Protected Object Location in Sensor Wireless Networks
Mechineni Rambhupal Rao, Munigeti Benjamin Joshua

Abstract

Connectionless sensor networks are simply wireless sensor networks (WSNs). Much of the existing work on wireless sensor networks has focused on addressing their power and computational resource constraints through the design of specific routing, MAC, and cross-layer protocols. Recently, there have been heightened privacy concerns over the data collected by and transmitted through WSNs. The wireless transmission required by a WSN, and the self-organizing nature of its architecture, make privacy protection for WSNs an especially challenging problem. Locality observing methods are used to detect human activities and provide monitoring services. We consider an aggregate locality observing method where the wireless sensor nodes are counting sensors, capable only of detecting the number of objects within their sensing areas. When personal locations are monitored by a third party (an untrusted server), they are vulnerable to privacy threats. Wireless sensor networks allow users to access services privately by using a series of routers to hide the client's IP address from the server. We propose a privacy-preserving locality observing method for wireless sensor networks. In our system, we design two in-network location anonymization algorithms, namely a Cloaked Area Determination algorithm and a quality-enhanced histogram algorithm, which enable the system to provide high-quality location monitoring services for system users while preserving personal location privacy. The Cloaked Area Determination algorithm aims to minimize communication and computational cost, while the quality-enhanced histogram approach estimates the distribution of the monitored persons from the gathered aggregate location information. The estimated distribution is then used to provide location monitoring services by answering range queries.
Full Paper

IJCST/32/6/A-832
   224 Performance up Gradation of SNMP using Error Fixing Method
Kumararaja Jetti, P. B. V. Raja Rao

Abstract

Since the early 1990s, there have been several attempts to secure the Simple Network Management Protocol (SNMP). The third version of the protocol, published as a full standard in 2002, introduced the User-based Security Model (USM), which comes with its own user and key management infrastructure. Since then, network operators have reported that deploying another user and key management infrastructure to secure SNMP is expensive and a reason not to deploy SNMPv3. This paper describes the Error Management Utility (EMU), an add-in utility used by the network management system (NMS), a component of an SNMP-managed network, to create and apply automatic error fixes, thereby increasing the performance of SNMP. We follow previous work describing how existing security protocols operating above the transport layer and below application protocols can be used to secure SNMP. These protocols can take advantage of already deployed key management infrastructures that are used for other network management interfaces, and hence their use can reduce the operational costs associated with securing SNMP. Our main contribution is a detailed performance analysis of a prototype implementation, comparing the performance of SNMPv3 over SSH, TLS, and DTLS with other versions of SNMP. We also discuss the differences between the various options for securing SNMP and provide guidelines for choosing solutions to implement or deploy.
Full Paper

IJCST/32/6/A-833
   225 Structural Statistical Testing for Software Verification
M. Venkatesh, Y. Vamshidhar

Abstract

The main goal of software testing is to detect faults in software, so the verification process plays a vital role in the development of software projects. Thorough testing involves gathering all test cases to detect faults, but this consumes time and effort. An alternative that minimizes effort and time is random testing, in which the tester runs randomly chosen test cases, but this compromises the quality of testing. The only way to detect faults without compromising quality while minimizing time and effort is to choose a subset of test cases whose coverage is equivalent to that of all test cases. This paper describes a structural statistical testing method that combines the advantages of structural and statistical testing. The experimental results show that structural statistical testing gives better results with the fitness evaluation function.
Full Paper

IJCST/32/6/A-834
   226 Integrating K-Means Algorithm with Horizontal Aggregation to Prepare Datasets
Brahmini Saraswathi, K. T. V. Subbarao, M. M. Balakrishna

Abstract

Preparing datasets for data mining is a difficult task. Conventional RDBMSs usually manage tables in vertical form, while data mining systems widely use datasets with columns in a horizontal tabular layout to analyze data efficiently. Preparing a data set is among the most complex tasks in a data mining project, requiring many SQL queries, table joins, and column aggregations. Horizontal aggregation returns a set of numbers per row, instead of one number per row. The system uses one parent table and several child tables; operations are then performed on the data loaded from the multiple tables. The PIVOT operator offered by some RDBMSs is used to compute the aggregations; the PIVOT method is fast and scales well. Partitioning the large data set obtained from the horizontal aggregation into homogeneous clusters is an important task in this system, and a K-means algorithm expressed in SQL is well suited to implementing it.
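Since SQLite, used in this minimal Python sketch, has no PIVOT operator, the equivalent CASE construct emulates it; the sketch then runs one k-means assignment step over the prepared horizontal rows (toy data; the centers are fixed rather than iterated, and a real system would keep the clustering in SQL as the abstract suggests).

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales(cust TEXT, product TEXT, amount REAL);
INSERT INTO sales VALUES
 ('c1','books',10),('c1','music',2),('c2','books',1),
 ('c2','music',12),('c3','books',11),('c3','music',3);
""")

# horizontal aggregation: one output column per product value
rows = con.execute("""
SELECT cust,
       SUM(CASE WHEN product = 'books' THEN amount ELSE 0 END) AS books,
       SUM(CASE WHEN product = 'music' THEN amount ELSE 0 END) AS music
FROM sales GROUP BY cust ORDER BY cust
""").fetchall()
print(rows)   # [('c1', 10.0, 2.0), ('c2', 1.0, 12.0), ('c3', 11.0, 3.0)]

# one k-means assignment step over the prepared rows (two fixed centers)
centers = [(10.0, 2.0), (1.0, 12.0)]
def nearest(point):
    return min(range(len(centers)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(point, centers[c])))
print({cust: nearest((books, music)) for cust, books, music in rows})
# {'c1': 0, 'c2': 1, 'c3': 0}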
Full Paper

IJCST/32/6/A-835
   227 Implementing and Monitoring Quality Preferences in Data by using Spatial Ranking Method
Sajja Kiran Kumar, Kunapareddy Rajani Devi

Abstract

To handle spatial data efficiently, as required in computer-aided design and geo-data applications, a database system needs an index mechanism that helps it retrieve data items quickly according to their spatial locations. However, traditional indexing methods are not well suited to data objects of non-zero size located in multidimensional spaces. In this paper we describe a dynamic index structure called an R-tree that meets this need, and give algorithms for searching and updating it. We present the results of a series of tests which indicate that the structure performs well, and conclude that it is useful for spatial applications in current database systems.
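A minimal Python sketch of the search side of the structure (a static two-level tree; Guttman's insertion and node-splitting algorithms are omitted): every node stores a minimum bounding rectangle (MBR) covering its children, so a range search descends only into subtrees whose MBR intersects the query window.

def intersects(r, s):   # rectangles as (xmin, ymin, xmax, ymax)
    return r[0] <= s[2] and s[0] <= r[2] and r[1] <= s[3] and s[1] <= r[3]

def mbr(rects):
    return (min(r[0] for r in rects), min(r[1] for r in rects),
            max(r[2] for r in rects), max(r[3] for r in rects))

class Node:
    def __init__(self, children, leaf):
        self.children, self.leaf = children, leaf
        self.box = mbr([c if leaf else c.box for c in children])

def search(node, window, out):
    if not intersects(node.box, window):
        return                        # prune the whole subtree
    for child in node.children:
        if node.leaf:
            if intersects(child, window):
                out.append(child)
        else:
            search(child, window, out)

left  = Node([(0, 0, 1, 1), (2, 1, 3, 2)], leaf=True)
right = Node([(8, 8, 9, 9), (7, 6, 8, 7)], leaf=True)
root  = Node([left, right], leaf=False)

hits = []
search(root, (1.5, 0.5, 2.5, 2.5), hits)
print(hits)   # [(2, 1, 3, 2)]; the right subtree is never visited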
Full Paper

IJCST/32/6/A-836
   228 Datasets Preparation Using SQL Aggregation
Ch. Kalyani, V. Durga Prasad, P. Suresh Babu

Abstract

To analyze data efficiently, data mining systems widely use datasets with columns in a horizontal tabular layout, whereas conventional RDBMSs usually manage tables in vertical form. Preparing a data set is among the most complex and time-consuming tasks in a data mining project, requiring many SQL queries, table joins, and column aggregations; horizontal aggregation returns a set of numbers per row, instead of one number per row. The system uses one parent table and several child tables; operations are then performed on the data loaded from the multiple tables. In a relational database, especially with normalized tables, a significant effort is required to prepare a summary data set [16] that can be used as input for a data mining or statistical algorithm. Association rule mining searches for interesting relationships among items in a given data set. Basically, a horizontal aggregation returns a set of numbers instead of a single number for each group, resembling a multidimensional vector. We propose an abstract but minimal extension to the SQL standard aggregate functions to compute horizontal aggregations, along with two fundamental methods to evaluate them: CASE, which exploits the programming CASE construct offered by some DBMSs, and SPJ, which is based on standard relational algebra operators (SPJ queries).
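The SPJ method can be pictured with a minimal sketch in Python over SQLite (toy data; the CASE alternative is sketched under paper 226 above): one left join per aggregated value, assembled from standard select-project-join operators.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact(store TEXT, quarter TEXT, sales REAL);
INSERT INTO fact VALUES ('s1','Q1',10),('s1','Q2',14),('s2','Q1',7);
""")

# SPJ evaluation: one left join per aggregated value
rows = con.execute("""
SELECT d.store,
       IFNULL(q1.s, 0) AS sales_q1,
       IFNULL(q2.s, 0) AS sales_q2
FROM (SELECT DISTINCT store FROM fact) d
LEFT JOIN (SELECT store, SUM(sales) AS s FROM fact
           WHERE quarter = 'Q1' GROUP BY store) q1 ON q1.store = d.store
LEFT JOIN (SELECT store, SUM(sales) AS s FROM fact
           WHERE quarter = 'Q2' GROUP BY store) q2 ON q2.store = d.store
ORDER BY d.store
""").fetchall()
print(rows)   # [('s1', 10.0, 14.0), ('s2', 7.0, 0)]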
Full Paper

IJCST/32/6/A-837
   229 Key Management Scheme for an Efficient Secure Data Access Control
Gandreddy Kumari, M. Ram Bhupal

Abstract

Nowadays, safe communication has become an important subject of research. A new service for wireless and wired networks is to provide confidentiality, authentication, authorization, and data integrity. Security has always been a sensitive issue, and this service becomes necessary to protect basic applications. Wireless broadcast is an effective approach for spreading data widely to a number of users. To provide safe transmission, symmetric-key-based encryption is a widely used method that ensures only valid users can decrypt the data. With regard to varying subscriptions, efficient key management for distributing and changing keys is needed to control broadcast services. The main idea is to have collection members actively participate in the security of the broadcast collection, thereby reducing the communication and computation load on the source. Since collection security is distributed among the collection members, we propose a service right certificate, to verify that a node is authorized to join the collection, and a corresponding revocation mechanism.
Full Paper

IJCST/32/6/A-838
   230 Key Based Public Auditability and Data Dynamics in Cloud Computing
C. Subash Chandra, Ch. Sunad

Abstract

Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. It moves application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges which have not yet been well understood. This work studies the problem of ensuring the integrity of data storage in cloud computing. In this paper we provide security by allowing bi-inf key based public auditability and data dynamics for data storage security in cloud computing, based on a third-party auditor (TPA) that verifies, on behalf of the cloud client, the integrity of the dynamic data stored in the cloud. Support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in cloud computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack support for either public auditability or dynamic data operations, this paper achieves both. The proposed bi-inf key reveals the client and application information while data is being moved. Extensive security and performance analysis shows that the proposed schemes are highly efficient and provably secure.
Full Paper

IJCST/32/6/A-839
   231 Efficient Live Streaming in Peer-to-Peer Networks
T. Kesava, K. Rajini Kumari, N. Anuragamayi

Abstract

Large-scale live media streaming is a challenge for traditional server-based approaches: to properly support big audiences, broadcasters must be able to allocate huge bandwidth and computational resources. A number of commercial peer-to-peer systems for live streaming have been introduced in recent years. In this paper, we describe a receiver-based, peer-division multiplexing (PDM) engine that delivers live streaming content over a peer-to-peer network. The same engine can be used to transparently build a hybrid P2P/CDN delivery network by adding repeater nodes to the network. By analyzing a large amount of usage data collected on the network during one of the largest viewing events in Europe, we show that the resulting network can scale to a large number of users and can take good advantage of the available uplink bandwidth at peers. We also show that error-correcting codes and packet retransmission can help improve network stability by isolating packet losses and preventing transient congestion from causing PDM reconfigurations.
Full Paper

IJCST/32/6/A-840
   232 A Confidentiality Site Screening Protected Procedure: Sensor Networks
Yelineni Keerthi, Vadthya Redya

Abstract

As wireless sensor networks continue to grow, so does the need for effective security mechanisms. Because sensor networks may interact with sensitive data and/or operate in hostile unattended environments, it is imperative that these security concerns be addressed from the beginning of the system design. However, due to inherent resource and computing constraints, security in sensor networks poses different challenges than traditional network and computer security, and there is currently enormous research potential in the field of wireless sensor network security. In this paper, we consider site monitoring systems, which are used to detect human activities and provide monitoring services. We consider an aggregate site monitoring system where the wireless sensor nodes are counting sensors, capable only of detecting the number of objects within their sensing areas. When personal sites are monitored by a third party (an untrusted server), they are vulnerable to privacy threats. Wireless sensor networks allow users to access services privately by using a series of routers to hide the client's IP address from the server. We propose a privacy-preserving site monitoring system for wireless sensor networks. In our system, we design two in-network site anonymization algorithms, namely a Circular Area Determination algorithm and a worth-enhanced histogram algorithm, which enable the system to provide high-worth site monitoring services for system users while preserving personal site privacy. The Circular Area Determination algorithm aims to minimize communication and computational cost, while the worth-enhanced histogram approach estimates the distribution of the monitored persons from the gathered aggregate site information. The estimated distribution is then used to provide site monitoring services by answering range queries.
Full Paper

IJCST/32/6/A-841
   233 Analysis of Different Types of Conditional Functional Dependency
B. Jayamma, T. V. Subba Rao, M. M. Balakrishna

Abstract

Poor data quality has been a major problem in many organizations; erroneous and inconsistent data has cost US businesses hundreds of billions of dollars through the poor business decisions that result from it [1]. Recently, Conditional Functional Dependencies (CFDs) have shown great potential for detecting and repairing inconsistent data in relational data sets. In this paper, we study the problem of discovering the minimal set of constant CFDs that hold in given data. As in previous work, we take advantage of the observations that constant CFDs are essentially 100%-confidence association rules, and that the minimal set of CFDs can be produced from the set of minimal generators and their closures. We propose new pruning criteria to further reduce the search space, removing unnecessary generators and closures. We design an efficient algorithm based on the new pruning criteria and evaluate it on real data sets. According to the results, the proposed algorithm is faster than the currently most efficient constant CFD discovery algorithm. We also show how chi-square can be used to measure the interestingness of CFDs.
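The association-rule view of constant CFDs can be pictured with a minimal Python sketch (toy relation; the generator/closure machinery and chi-square scoring are omitted): a constant CFD [A = a] -> [B = b] holds exactly when the rule {A=a} => {B=b} has 100% confidence on the relation.

from collections import defaultdict

rows = [
    {"country": "NL", "capital": "Amsterdam"},
    {"country": "NL", "capital": "Amsterdam"},
    {"country": "UK", "capital": "London"},
    {"country": "UK", "capital": "Edinburgh"},   # violates (country=UK) -> capital
]

def constant_cfds(rows, lhs, rhs):
    values = defaultdict(set)
    support = defaultdict(int)
    for r in rows:
        values[r[lhs]].add(r[rhs])
        support[r[lhs]] += 1
    # a constant CFD holds when the LHS value determines a single RHS value
    return [(f"[{lhs}='{a}'] -> [{rhs}='{next(iter(bs))}']", support[a])
            for a, bs in values.items() if len(bs) == 1]

print(constant_cfds(rows, "country", "capital"))
# [("[country='NL'] -> [capital='Amsterdam']", 2)]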
Full Paper

IJCST/32/6/A-842
   234 Efficiently Removing Inconsistence Data using Fastest Conditional Functional Dependency
S. K. A Manoj, S. Laxmandasu

Abstract

Conditional Functional Dependencies (CFDs) have recently been introduced in the context of data cleaning. They can be seen as a unification of Functional Dependencies (FDs) and Association Rules (ARs), since they allow mixing attributes and attribute values in dependencies. CFDs are dependencies that hold on instances of relations; the constraint used in CFDs is equality, which allows fixing particular constant values for attributes. CFDs have been proposed as a new type of semantic rule extending traditional functional dependencies, and they have shown great potential for detecting and repairing inconsistent data. The theoretical search space for the minimal set of CFDs is the set of minimal generators and their closures in the data; this search space has been used in the currently most efficient constant CFD discovery algorithm. In this paper, we propose pruning criteria to further prune the theoretical search space and design a fast algorithm for constant CFD discovery. We evaluate the proposed algorithm on a number of medium to large real-world data sets. The proposed algorithm is faster than the currently most efficient constant CFD discovery algorithm and has linear time performance in the size of the data set.
Full Paper

IJCST/32/6/A-843
   235 Efficient Searching of Web Information using Ontology
R. Pushpa Latha, K. T. V. Subba Rao, M. M. Balakrishna

Abstract

Semantic search has been one of the motivations of the Semantic Web since it was envisioned. We propose a model for exploiting ontology-based knowledge bases (KBs) to improve search over large document repositories. Our approach includes an ontology-based scheme for the semi-automatic annotation of documents and a retrieval system. The retrieval model is based on an adaptation of the classic vector space model and includes an annotation weighting algorithm and a ranking algorithm. Semantic search is combined with keyword-based search to achieve tolerance to KB incompleteness. Our proposal is illustrated with sample experiments showing improvements with respect to keyword-based search and providing ground for further research and discussion.
Full Paper

IJCST/32/6/A-844
   236 Secure Cloud Computing using Linear Programming
S. K. A Manoj, P. Rajesh Kumar

Abstract

Cloud computing economically enables customers with limited computational resources to outsource large-scale computations to the cloud. However, protecting customers' confidential data involved in the computations then becomes a major security concern. Despite the tremendous benefits, security is the primary obstacle that prevents the wide adoption of this promising computing model, especially for customers whose confidential data are consumed and produced during the computation. Treating the cloud as an intrinsically insecure computing platform from the viewpoint of the cloud customers, we must design mechanisms that not only protect sensitive information by enabling computations with encrypted data, but also protect customers from malicious behaviors by enabling validation of the computation result. In this paper, we present a secure outsourcing mechanism for solving large-scale linear programming (LP) computations in the cloud. To achieve practical efficiency, our mechanism design explicitly decomposes the LP computation outsourcing into public LP solvers running on the cloud and private LP parameters owned by the customer. In particular, by formulating the customer's private data for the LP problem as a set of matrices and vectors, we are able to develop a set of efficient privacy-preserving problem transformation techniques, which allow customers to transform the original LP problem into an arbitrary one while protecting sensitive input/output information. To validate the computation result, we further explore the fundamental duality theorem of LP computation and derive the necessary and sufficient conditions that a correct result must satisfy. This result verification mechanism is extremely efficient and incurs close-to-zero additional cost on both the cloud server and the customers.
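A minimal Python/SciPy sketch of the outsourcing pattern (a deliberately simplified disguise using a secret diagonal scaling and constraint mixing, weaker than the paper's full transformation): the customer ships a disguised LP, recovers the true solution, and checks optimality with the duality theorem.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# customer's private LP:  min c^T x  s.t.  A x = b, x >= 0
A = np.array([[1.0, 1.0, 1.0], [2.0, 0.5, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 1.0])

D = np.diag(rng.uniform(0.5, 2.0, 3))    # secret scaling (y >= 0 iff x >= 0)
Q = rng.uniform(0.5, 2.0, (2, 2))        # secret invertible constraint mixing
A2, b2, c2 = Q @ A @ D, Q @ b, D @ c     # disguised LP in y, with x = D y

cloud = linprog(c2, A_eq=A2, b_eq=b2, bounds=(0, None), method="highs")
x = D @ cloud.x                          # customer recovers the true solution

# verification via strong duality: solve max b^T lam s.t. A^T lam <= c
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None), method="highs")
print("primal objective:", c @ x)
print("dual objective:  ", -dual.fun)    # equal values certify optimality
print("feasible:", np.allclose(A @ x, b) and (x >= -1e-9).all())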
Full Paper

IJCST/32/6/A-845
   237 Pruning Inconsistence Data using Conditional Functional Dependency
Prasanth Gunti, Ramesh Naidu G, Suresh babu P

Abstract

Conditional Functional Dependencies (CFDs) are designed for detecting and repairing inconsistencies in data. They have been proposed as a new type of semantic rule extending traditional functional dependencies and have shown great potential for detecting and repairing inconsistent data. The theoretical search space for the minimal set of CFDs is the set of minimal generators and their closures in the data; this search space has been used in the currently most efficient constant CFD discovery algorithm. In this paper, we propose pruning criteria to further prune the theoretical search space and design a fast algorithm for constant CFD discovery. We evaluate the proposed algorithm on a number of medium to large real-world data sets. The proposed algorithm is faster than the currently most efficient constant CFD discovery algorithm and has linear time performance in the size of the data set.
Full Paper

IJCST/32/6/A-846
   238 A Superior Approach for Representing Spatial Data in Handheld Devices
CH. Swathi, P. S. G. Aruna Sri

Abstract

Evaluating spatiotemporal database systems requires the definition of suitable datasets simulating the typical behaviour of moving objects. Previous approaches to generating spatiotemporal data do not consider that moving objects often follow a given network. With the advance of wireless communication technology, it is quite common for people to view maps or obtain related services on handheld devices such as mobile phones and PDAs. Range queries, among the most commonly used tools, are often posed by users to retrieve needed information from a spatial database. However, due to the limits of communication bandwidth and the hardware power of handheld devices, displaying all results of a range query on a handheld device is neither communication-efficient nor informative to the user, simply because a range query often returns too many results. In view of this problem, we present the novel idea of computing a concise representation of a specified size for the range query results, incurring minimal information loss, and returning it to the user. Such a concise range query not only reduces communication costs but also offers better usability, providing an opportunity for interactive exploration. The usefulness of concise range queries is confirmed by comparing them with other possible alternatives, such as sampling and clustering. In one dimension, a simple dynamic programming algorithm finds the optimal solution in polynomial time; however, the problem becomes NP-hard in two dimensions, so we settle for efficient heuristic algorithms in two or more dimensions. The effectiveness and efficiency of the proposed techniques are examined on real-world data.
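The one-dimensional case can be pictured with a minimal Python sketch (the (min, max, count) bucket format and within-group squared error as the information loss are assumptions for illustration): an O(k n^2) dynamic program splits the sorted results into k optimal buckets.

def sse(prefix, prefix_sq, i, j):      # loss of grouping points[i:j] together
    n = j - i
    s = prefix[j] - prefix[i]
    return (prefix_sq[j] - prefix_sq[i]) - s * s / n

def concise(points, k):
    pts = sorted(points)
    n = len(pts)
    prefix = [0.0] * (n + 1)
    prefix_sq = [0.0] * (n + 1)
    for i, p in enumerate(pts):
        prefix[i + 1] = prefix[i] + p
        prefix_sq[i + 1] = prefix_sq[i] + p * p
    INF = float("inf")
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for g in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(g - 1, j):   # last bucket covers pts[i:j]
                c = cost[g - 1][i] + sse(prefix, prefix_sq, i, j)
                if c < cost[g][j]:
                    cost[g][j], cut[g][j] = c, i
    groups, j = [], n                   # walk the cut table backwards
    for g in range(k, 0, -1):
        i = cut[g][j]
        groups.append((pts[i], pts[j - 1], j - i))   # (min, max, count) bucket
        j = i
    return groups[::-1]

print(concise([1, 2, 2, 3, 40, 41, 43, 100], k=3))
# [(1, 3, 4), (40, 43, 3), (100, 100, 1)]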
Full Paper

IJCST/32/6/A-847
   239 Performance Evaluation of Random Walk with Differentiated Search Approach in Peer-to-Peer Network
B. Vinodhini

Abstract

A peer-to-peer network is a collection of heterogeneous distributed resources connected by a network. It allows communication between two systems where each system is considered equal. Peers differ from each other in many aspects, such as bandwidth, CPU power, and storage capacity. In this paper, we implement a flooding-based approach, a differentiated search algorithm, and a random walk with differentiated search algorithm to improve the search efficiency of unstructured P2P networks, and we compare the three methods based on their execution time.
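A minimal Python sketch of the random-walk side of the comparison (a hypothetical overlay and popularity model, not the paper's simulation setup): a plain walk forwards the query to a uniformly random neighbor, while the differentiated variant biases the choice toward high-degree neighbors.

import random

random.seed(3)

# build an ad hoc overlay of 60 peers, each wiring itself to about 3 others
peers = {i: set() for i in range(60)}
for i in peers:
    for j in random.sample(range(60), 3):
        if i != j:
            peers[i].add(j)
            peers[j].add(i)
content = {random.randrange(60) for _ in range(4)}   # peers holding the object

def random_walk(start, ttl, differentiated):
    node, hops = start, 0
    while hops <= ttl:
        if node in content:
            return hops
        nbrs = list(peers[node])
        if differentiated:   # bias the walk toward high-degree neighbors
            node = random.choices(nbrs, weights=[len(peers[n]) for n in nbrs])[0]
        else:
            node = random.choice(nbrs)
        hops += 1
    return None

for mode in (False, True):
    res = [random_walk(random.randrange(60), 30, mode) for _ in range(500)]
    hits = [r for r in res if r is not None]
    print("differentiated" if mode else "plain walk",
          "success:", len(hits) / 500,
          "avg hops:", round(sum(hits) / max(len(hits), 1), 1))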
Full Paper

IJCST/32/6/A-848
   240 Framework for Efficient Edge Detection Techniques– Comparison Among Robert, Prewitt, Sobel, Robinson, Kirsch and Canny
B. Ramesh Naidu, P. Lakshman Rao, M. S. Prasad Babu, K. V. L Bhavani

Abstract

In this paper, we focus on image processing techniques, mainly image enhancement and edge detection. Edges are important features in an image since they represent significant local intensity changes and provide important clues for separating regions within an object. Edge detection on an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties in the image. We implemented edge detectors such as Roberts, first derivative, second derivative, Prewitt, Sobel, Robinson, Kirsch, and Canny, and we present a comparative analysis of these edge detection techniques. The software is developed using Java 6.0. This paper surveys the available techniques suggested by several authors, with their merits and demerits, and we expect this study to help researchers develop better edge detection techniques.
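One detector from the comparison, the Sobel operator, in a minimal sketch (the paper's software is Java; Python is used here for brevity, and the threshold value is illustrative): two 3x3 kernels approximate the horizontal and vertical gradients, and the gradient magnitude is thresholded.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(img, threshold=200):
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[y][x] = 1
    return edges

# a dark square on a bright background: edges appear along its border
img = [[200] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 20
for row in sobel(img):
    print("".join(".#"[v] for v in row))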
Full Paper

IJCST/32/6/A-849
   241 The Perfect Effective Navigation of Query Results Based on Concept Hierarchies
D. Prasad, Md Abdul Hai, K. Nagaraju

Abstract

Search queries on biomedical databases, such as PubMed, often return a large number of results, only a small subset of which is relevant to the user. Ranking and categorization, which can also be combined, have been proposed to alleviate this information overload problem. Results categorization for biomedical databases is the focus of this work. A natural way to organize biomedical citations is according to their MeSH annotations; MeSH is a comprehensive concept hierarchy used by PubMed. In this paper, we present the BioNav system, a novel search interface that enables the user to navigate the large number of query results by organizing them using the MeSH concept hierarchy. First, the query results are organized into a navigation tree. At each node expansion step, BioNav reveals only a small subset of the concept nodes, selected such that the expected user navigation cost is minimized. In contrast, previous works expand the hierarchy in a predefined static manner, without navigation cost modeling. We show that the problem of selecting the best concepts to reveal at each node expansion is NP-complete and propose an efficient heuristic as well as a feasible optimal algorithm for relatively small trees. We show experimentally that BioNav outperforms state-of-the-art categorization systems by up to an order of magnitude with respect to user navigation cost. BioNav for the MEDLINE database is available at http://db.cse.buffalo.edu/bionav.
Full Paper

IJCST/32/6/A-850
   242 Efficient and Improving Dynamic Resource Allocation for Capable Parallel Data Processing in the Cloud Computing
Asmita H Ekhande, Hussain Khan, Fathima Zehra

Abstract

Efficient parallel data processing is one of the particular applications of Infrastructure-as-a-Service (IaaS) cloud computing. Major cloud computing companies have started to integrate frameworks for data processing into their product portfolios, making it easy for customers to access these services and deploy their programs. However, the processing frameworks currently in use were designed for static, homogeneous cluster setups and disregard the particular nature of a cloud, so the allocated compute resources may be inadequate for major parts of a submitted job and unnecessarily increase processing time and cost. In this paper we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines, which are automatically instantiated and terminated during job execution. Based on this new framework, we perform extended evaluations with representative processing jobs on an IaaS cloud system and compare the results to popular data processing frameworks.
Full Paper

IJCST/32/6/A-851
   243 A Perfect Distributed and New Scalable Time Slot Allocation Protocol for Wireless Sensor Networks
D. Prasad, Mohammed Zaheer Ahmed, K.Nagaraju

Abstract

Performance deficiencies hamper the deployment of Wireless Sensor Networks (WSNs) in critical monitoring applications. Such applications are characterized by considerable network load generated as a result of sensing some characteristics of the monitored system. Excessive packet collisions lead to packet losses and retransmissions, resulting in significant overhead costs and latency. To address this issue, we introduce a distributed and scalable scheduling access scheme that mitigates high data loss in data-intensive sensor networks and can also handle some mobility. Our approach alleviates transmission collisions by employing virtual grids that apply Latin squares characteristics to time slot assignments. We show that our algorithm derives conflict-free time slot allocation schedules without incurring global overhead in scheduling, and we verify the effectiveness of our protocol by simulation experiments. The results demonstrate that our technique can efficiently handle sensor mobility with acceptable data loss, low packet delay, and low overhead.
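The Latin-squares idea can be pictured with a minimal Python sketch (a cyclic square and grid-modulo lookup are illustrative assumptions, not the paper's exact construction): cells of the virtual grid read their transmission slots from an n x n Latin square, so no two cells in the same row or column of a neighborhood share a slot, without any global coordination.

n = 4
latin = [[(i + j) % n for j in range(n)] for i in range(n)]   # cyclic Latin square

def slot_for(cell_x, cell_y):
    # each sensor derives its slot locally from its grid cell coordinates
    return latin[cell_x % n][cell_y % n]

for row in latin:
    print(row)                  # every slot appears once per row and column
print(slot_for(5, 2), slot_for(6, 2))   # horizontally adjacent cells differ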
Full Paper

IJCST/32/6/A-852