
 


International Journal of Computer Science and Technology Vol 5.4-1
Oct to Dec, 2014

S.No. Research Topic Paper ID Download
01

Text Processing Using Fuzzy Relational Clustering

J.Sakunthala Devi, G. Umamaheswara Rao, B. Kameswara Rao

Abstract
Fuzzy clustering algorithms allow patterns to belong to all clusters with differing degrees of membership. We present a novel fuzzy clustering algorithm that operates on relational input data, i.e., data in the form of a square matrix of pairwise similarities between data objects. The algorithm uses a graph representation of the data and operates in an Expectation-Maximization framework in which the graph centrality of an object is interpreted as a likelihood. Results of applying the algorithm to sentence clustering tasks demonstrate that it is capable of identifying overlapping clusters of semantically related sentences, and that it is therefore of potential use in a variety of text mining tasks.
Full Paper
IJCST/54/4/A-0319
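The abstract above gives no implementation detail, but its central idea, treating graph centrality over a pairwise-similarity matrix as a likelihood, can be illustrated with a PageRank-style power iteration. The similarity values and damping factor below are illustrative assumptions, not taken from the paper.

```python
def centrality(sim, damping=0.85, iters=50):
    """PageRank-style centrality over a pairwise-similarity matrix.

    sim: square list-of-lists of non-negative pairwise similarities.
    Returns a probability vector that could serve as per-object
    likelihoods inside an EM-style fuzzy clustering loop.
    """
    n = len(sim)
    # Row-normalise similarities into transition probabilities.
    trans = []
    for row in sim:
        s = sum(row)
        trans.append([v / s if s else 1.0 / n for v in row])
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [(1 - damping) / n
             + damping * sum(p[j] * trans[j][i] for j in range(n))
             for i in range(n)]
    return p

# Hypothetical similarities: objects 0 and 1 are strongly related.
sims = [[0.0, 0.9, 0.1],
        [0.9, 0.0, 0.2],
        [0.1, 0.2, 0.0]]
scores = centrality(sims)
```

Because the transition rows are normalised, the scores stay a probability distribution, and the weakly connected object receives the lowest centrality.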
02

Issues and Effective Research on Energy Re-Gain in Underwater Wireless Sensor Network

Kunal Goel, Amit Bindal

Abstract
Underwater Wireless Sensor Networks (UWSNs) have recently been proposed to support aquatic applications such as harbor monitoring and ocean surveillance. In an energy-constrained UWSN it is vital to find ways to extend the lifetime of the sensors. Acoustic communication typically dominates the power consumption in underwater sensor networks. Deploying an energy-efficient network underwater with a reliable energy resource is the foremost task in such networks, and energy-efficient communication is a key requirement. In this paper we provide an innovative energy-efficient deployment technique and propose a new technique to regain energy through an energy conversion mechanism that converts tidal energy into electrical energy, a reliable energy resource that allows the network to operate for a long time.
Full Paper
IJCST/54/4/A-0320
03

Catching of Cut With Network Failure

K.Sunil Babu, N.Renu, M.Srinivasa Rao

Abstract
A cut occurs when a wireless sensor network is split into separate connected components because of node failures. This paper proposes a new algorithm to detect such cuts using the remaining nodes. The algorithm works in two ways: it allows every node to detect that connectivity to a specially designated node has been lost, and it allows one or more nodes that remain connected to the special node after the cut to detect that a cut has occurred.
The algorithm is an iterative scheme based on a fictitious "electrical potential" of the nodes, and its convergence rate is independent of the size of the network.
Full Paper
IJCST/54/4/A-0321
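The "electrical potential" idea can be sketched as a simple iterative averaging scheme in which a designated source node injects a fixed value; nodes cut off from the source see their potential fall to zero, which signals the cut. The topology, injection strength, and iteration count below are illustrative assumptions.

```python
def potentials(adj, source, s=100.0, iters=200):
    """Iterative 'electrical potential' computation (sketch).

    adj: adjacency list {node: [neighbours]}.  The designated
    source node injects a fixed value s each round; every node
    repeatedly averages its neighbours' states.
    """
    x = {v: 0.0 for v in adj}
    for _ in range(iters):
        nxt = {}
        for v, nbrs in adj.items():
            inject = s if v == source else 0.0
            nxt[v] = (sum(x[u] for u in nbrs) + inject) / (len(nbrs) + 1)
        x = nxt
    return x

# Path 0-1-2 before failure, then node 1 fails and cuts node 2 off.
connected = potentials({0: [1], 1: [0, 2], 2: [1]}, source=0)
after_cut = potentials({0: [], 2: []}, source=0)
```

A node infers a cut when its potential drops to zero while the source is still alive; nodes still attached to the source keep a strictly positive potential.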
04

Effective Data Transmission Based on Enhanced Steganography & Cryptography

T V K P Prasad, T Srinivasa Rao, V.Dilip Kumar

Abstract
Traditional cryptographic algorithms, both symmetric and asymmetric, convert data into cipher text for transmission over a public network. Most cryptographic algorithms transform the original plain text into cipher text via an encryption algorithm, transmit it over the network, and decrypt it at the other end. Since the transmitted cipher text is visible on the network, it may be subjected to attacks. To overcome this problem, we implement a technique for hiding the message in an image or video file in such a way that there are no perceivable changes in the image or video after the message is inserted. Encrypting the message before hiding it in the image or video file raises the level of security further. This process of hiding a message in an image or video file is called steganography. For this, a new steganography technique called Joint Visual Cryptography (JVC) is implemented. It neither blurs nor changes the image, so no one recognizes that data is present during transmission; after receiving the transmitted image, the receiver extracts the original message from it.
Full Paper
IJCST/54/4/A-0322
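The paper's Joint Visual Cryptography scheme is not described in enough detail here to reproduce, but the underlying steganographic step, hiding message bits in an image without perceivable change, is commonly done with least-significant-bit (LSB) embedding, sketched below on a flat list of pixel values (the cover "image" is a made-up example).

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bits of pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "cover image too small"
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # overwrite only the lowest bit
    return out

def extract(pixels, n_bytes):
    """Recover n_bytes hidden by embed()."""
    msg = []
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[k * 8 + i] & 1) << i
        msg.append(byte)
    return bytes(msg)

cover = list(range(64, 192))   # stand-in for grayscale pixel values
stego = embed(cover, b"hi")
```

Each pixel changes by at most 1 intensity level, which is imperceptible to the eye; encrypting the message first, as the abstract suggests, would add a second layer of protection.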
05

FoCUS: A Technique to Overcome Existing Crawl Methods

G.Bharathi, B.Poorna Satyanarayana

Abstract
The objective of FoCUS is to crawl only relevant forum content from the web with minimal overhead. Forum threads contain the information content that is the target of forum crawlers. Although forums have different layouts or styles and are powered by different forum software packages, they always have similar implicit navigation paths connected by specific URL types that lead users from entry pages to thread pages. Robust page type classifiers can be trained from as few as five annotated forums and applied to a large set of unseen forums.
Full Paper
IJCST/54/4/A-0323
06

A Framework for Load Rebalancing Algorithm in Cloud

Anumolu Anuradha, Amanatulla Mohammad, SAYEED YASIN

Abstract
Cloud computing is an emerging technology in which different users use resources dynamically. The number of users of file systems is increasing day by day; distributed file systems are therefore building blocks for the cloud environment. When a client uploads a file, it is partitioned into a number of chunks distributed to distinct nodes so that MapReduce tasks can be performed in parallel over the nodes. In addition, in a cloud computing environment failures can occur, and nodes can be replaced and/or added to the system.
Files can also be deleted or created, which can result in a load imbalance problem. To overcome this problem, a fully distributed load rebalancing algorithm is proposed. The objective is to allocate the chunks of files as uniformly as possible among the nodes, so that no node manages an excessive number of chunks, while reducing the movement cost. Each node in the system performs the load rebalancing algorithm independently and implements a gossip-based aggregation protocol to collect the load status of other nodes. File chunks stored in a node may be deleted dynamically; we migrate file chunks to the previous node instead of randomly selecting a node for migration.
Thereby we can reduce the movement cost and reallocate the file chunks uniformly among the nodes. This process is repeated until it reaches the last node in the system. Large-scale distributed systems such as cloud computing applications are becoming very common, and they bring increasing challenges in how to transfer data and where to store and compute it. The most prevalent distributed file system for dealing with these challenges is the Hadoop Distributed File System (HDFS), a variant of the Google File System (GFS). However, HDFS has two potential problems.
The first is that it depends on a single name node to manage almost all operations on every data block in the file system. As a result, the name node can be a bottleneck resource and a single point of failure. The second potential problem with HDFS is that it depends on TCP to transfer data. As has been noted in many studies, TCP takes many rounds before it can send at the full capacity of the links in the cloud, resulting in low link utilization and longer download times. To overcome these problems of HDFS, we present a new distributed file system. Our scheme uses a lightweight front-end server to connect all requests with many name nodes, which helps distribute the load of a single name node across many name nodes. Our second contribution is an efficient protocol to send and route data that can achieve full link utilization and hence decreased download times. Based on simulation, our protocol can outperform HDFS and hence GFS.
Full Paper
IJCST/54/4/A-0324
07

Blocking Implication Attacks on Social Network Private Information

Anjaneya Phani Chandra Sekhar, Sayyed Nagul Meera

Abstract
Usage of social networking sites such as Facebook and Twitter has increased greatly. These networks permit users to publish details about themselves and their daily lives and to connect with their friends. Some of the data shared inside these networks is meant to be private or personal, yet there is a risk that others could breach this privacy by applying inference algorithms to the exposed data to predict undisclosed personal information. In this paper we explore how to launch inference attacks, using visible data in a networking site, to predict hidden private information about individuals. We then design three possible sanitization techniques that could be used in different situations. We demonstrate the effectiveness of these techniques by applying them to a dataset obtained from the Dallas/Fort Worth, Texas network of the Facebook social networking application and attempting to use collective inference methods to discover sensitive attributes in the data set. We show that we can reduce the effectiveness of both relational and local classification algorithms by using the described sanitization methods. Additionally, we identify a problem domain where collective inference degrades the performance of classification algorithms for predicting private attributes.
Full Paper
IJCST/54/4/A-0325
08

An Approach towards Removal of Undesirable Messages From Social Networks

V.Laxmi Narasamma, Shaik Naga Rehmathunnisa

Abstract
These days, social networking websites have greatly extended the range of possible communication, permitting us to distribute messages, pictures, files, and up-to-date information. Online social networks are generally supportive and sustain social relations both online and offline, but when users use them their information may become available to people who want to misuse it. The filtering rules exploit user profiles, user relationships, and the output of a machine-learning categorization process to set the filtering criteria to be enforced. In content-based filtering, each user is assumed to operate independently; as a result, a content-based filtering system selects information items based on the correlation between the items' content and the user's preferences. The architecture supporting online social network services is a three-tier structure. The system also provides support for user-defined black lists, i.e., lists of users who are temporarily prevented from posting any kind of message on a user's wall.
Full Paper
IJCST/54/4/A-0326
09

Mona-An Influence: Vendor Locking in Meta Cloud

Kiran Kumar Kodali, Yenumala Sankara Rao

Abstract
Cost and scalability benefits of Cloud storage services are apparent. However, selecting a single storage service provider limits availability and scalability to the selected provider and may further cause a vendor lock-in effect. In this paper, we present Meta Storage, a federated Cloud storage system that can integrate diverse Cloud storage providers. Meta Storage is a highly available and scalable distributed hash table that replicates data on top of diverse storage services. Meta Storage reuses mechanisms from Amazon's Dynamo for cross-provider replication and hence introduces a novel approach to manage consistency-latency tradeoffs by extending the traditional quorum (N, R, W) configurations to an (NP, R, W) scheme that includes different providers as an additional dimension. With Meta Storage, new means to control consistency-latency tradeoffs are introduced.
Full Paper
IJCST/54/4/A-0327
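The quorum extension described above can be illustrated with a minimal sketch: the classic R + W > N intersection rule for strong consistency, plus a toy placement routine that spreads N replicas across distinct providers (the extra "NP" dimension). The provider names are hypothetical, and this is not Meta Storage's actual protocol.

```python
def is_strongly_consistent(n, r, w):
    """Classic quorum rule: every read quorum must intersect every
    write quorum (r + w > n), and writes must win a majority."""
    return r + w > n and w > n // 2

def place_replicas(providers, n):
    """Spread n replicas across distinct storage providers, wrapping
    around only when n exceeds the number of providers."""
    return [providers[i % len(providers)] for i in range(n)]

placement = place_replicas(["s3", "azure", "gcs"], n=3)
```

Lowering R or W trades consistency for latency; adding the provider dimension means a quorum can also be required to span multiple vendors, which is what mitigates lock-in.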
10

Privacy Preserving Over Online Social Network Based on Sensitive Data

Basava Ramanjaneyulu, Nagul Meera Sayyed

Abstract
The growth of online social networks and the publication of social network data have led to the risk of leakage of confidential information about individuals. This requires privacy to be preserved before such network data is published by service providers. Privacy in online social network data has been of utmost concern in recent years, yet research in this field is still in its early stages. Several published academic studies have proposed solutions for providing privacy for small tabular data, but those techniques cannot be straightforwardly applied to social network data, since a social network is a complex graph of vertices and edges. Techniques like k-anonymity, its variants, and l-diversity have been applied to social network content. An integrated technique combining k-anonymity and l-diversity has also been developed to secure the privacy of social network data in a better way.
Full Paper
IJCST/54/4/A-0328
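The k-anonymity and l-diversity properties mentioned above have simple set-based definitions that can be checked directly on tabular data; the records, quasi-identifier columns, and sensitive attribute below are made-up examples.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Every combination of quasi-identifier values must occur
    in at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

def is_l_diverse(records, quasi_ids, sensitive, l):
    """Every quasi-identifier group must contain at least l distinct
    values of the sensitive attribute."""
    groups = {}
    for r in records:
        key = tuple(r[q] for q in quasi_ids)
        groups.setdefault(key, set()).add(r[sensitive])
    return all(len(vals) >= l for vals in groups.values())

rows = [
    {"age": "20-30", "zip": "535*", "disease": "flu"},
    {"age": "20-30", "zip": "535*", "disease": "cold"},
    {"age": "30-40", "zip": "534*", "disease": "flu"},
]
```

Extending these checks from tables to graphs is exactly the difficulty the abstract points out: in a social network the neighbourhood structure itself can act as a quasi-identifier.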
11

Conceptualization-Based Query Responding For Dynamic RDF Database

Gudipalli Tejo Lakshmi, Syed Farzana

Abstract
Dynamic queries are a novel approach to information seeking that may enable users to cope with information overload. They allow users to see an overview of the database, rapidly explore it (with 100 ms updates), and conveniently filter out unwanted information. Users fly through information spaces by incrementally adjusting a query (with sliders, buttons, and other filters) while continuously viewing the changing results. Dynamic queries on the chemical table of elements, computer directories, and a real estate database were built and tested in three separate exploratory experiments. The results show statistically significant performance improvements and user enthusiasm more commonly seen with video games.
Widespread application seems possible but research issues remain in database and display algorithms, and user interface design. Challenges include methods for rapidly displaying and changing many points, colors, and areas; multi-dimensional pointing; incorporation of sound and visual display techniques that increase user comprehension; and integration with existing database systems.
Full Paper
IJCST/54/4/A-0329
12

Providing Accountability and Security for Data in the Cloud

Madhira Srinivas, Asfia Imroze

Abstract
Cloud computing allows highly scalable services to be easily consumed over the web on an as-needed basis. A major feature of cloud services is that users' data are usually processed remotely on unknown machines that users do not own or operate. While enjoying the convenience brought by this new emerging technology, users' fears of losing control of their own data (particularly financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, in this paper we propose a novel, highly decentralized information accountability framework to keep track of the actual usage of users' data in the cloud. In particular, we propose an object-centered approach that enables enclosing our logging mechanism together with users' data and policies. We leverage JAR programmable capabilities both to create a dynamic and traveling object and to ensure that any access to users' data will trigger authentication and automated logging local to the JARs. To strengthen users' control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.
Full Paper
IJCST/54/4/A-0330
13

Blind Noise Estimation Using PCA And Denoising

Merin Mathew

Abstract
Noise is one of the classic problems addressed in image processing. Image noise is a random variation of brightness or color information in images and is usually an aspect of electronic noise. The problem of blind noise level estimation arises in many image processing applications, such as denoising, compression, and segmentation. In this paper we consider images corrupted with additive white Gaussian noise (AWGN) and propose a new noise level estimation method based on principal component analysis (PCA) of image blocks; we then remove the estimated noise from the image using a Gaussian filter. The noise variance can be estimated as the smallest eigenvalue of the image block covariance matrix. PCA of image blocks has already been successfully utilized in various image processing tasks such as compression, denoising, and quality assessment. Older methods are based on the assumption that the processed image contains a sufficient amount of homogeneous areas; however, this is not always the case, since there are images consisting mostly of textures.
Full Paper
IJCST/54/4/A-0331
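The key estimator, noise variance as the smallest eigenvalue of the image-block covariance matrix, can be sketched in a minimal two-dimensional form using horizontal 2-pixel blocks, where the smallest eigenvalue of the 2x2 covariance matrix has a closed form. The full method uses larger blocks and is more selective; the synthetic gradient image below is an illustrative assumption.

```python
import math
import random

def noise_variance(pixels, width):
    """Estimate AWGN variance as the smallest eigenvalue of the
    covariance matrix of horizontal 2-pixel blocks (a minimal,
    2-dimensional stand-in for the block-PCA idea)."""
    blocks = [(row[i], row[i + 1])
              for row in pixels for i in range(width - 1)]
    n = len(blocks)
    mx = sum(b[0] for b in blocks) / n
    my = sum(b[1] for b in blocks) / n
    cxx = sum((b[0] - mx) ** 2 for b in blocks) / n
    cyy = sum((b[1] - my) ** 2 for b in blocks) / n
    cxy = sum((b[0] - mx) * (b[1] - my) for b in blocks) / n
    # Smallest eigenvalue of the 2x2 covariance matrix, closed form.
    return ((cxx + cyy) - math.sqrt((cxx - cyy) ** 2 + 4 * cxy ** 2)) / 2

random.seed(0)
# Smooth gradient image corrupted with Gaussian noise of std 5,
# i.e. true noise variance 25.
image = [[x * 0.1 + random.gauss(0, 5.0) for x in range(100)]
         for _ in range(100)]
est = noise_variance(image, 100)
```

The clean gradient makes the two block components perfectly correlated, so the clean signal contributes only to the largest eigenvalue and the smallest eigenvalue falls close to the true noise variance.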
14

Process Statement Informed Text by Napped Connected Using Clustering Techniques

A.KAVITHA, Y.VENKATALAKSHMI

Abstract
Sentence clustering plays an important role in many text processing activities. Irrespective of the specific task (e.g., summarization, text mining), most documents contain interrelated topics or themes, and many sentences will be related to some degree to a number of these. The scale of these networks demands scalable learning of models for collective behavior prediction. To address the scalability issue, we propose a spectral clustering scheme to extract sparse social dimensions. The approach mines different associations of behavioral features, such as user activities and temporal-spatial information collected from different social media, and integrates them with social networking information to improve prediction performance. To integrate these sources of information, it is necessary to identify individuals across social media sites. The method consists of three key components: the first identifies users' unique behavioral patterns that lead to information redundancies across sites; the second constructs features that exploit the information redundancies due to these behavioral patterns; and the third employs machine learning for effective user identification.
Full Paper
IJCST/54/4/A-0332
15

Comparative Study of Different Data Classification Techniques: A Review

Er. Minaxi Arora, Er. Lekha Bhambhu

Abstract
Classification is an important task used in various applications today. It is the mapping of data into predefined groups and classes. In machine learning, classification refers to the task of identifying to which of a set of categories a given observation belongs. In practice, many classification techniques are available, such as decision trees, KNN, SVM, and rough set theory. In this paper, we compare the performance of various classification techniques.
Full Paper
IJCST/54/4/A-0333
16

Forum Crawler Under Supervision on Navigation Paths Connected

CH S R Swarajya Sree, B.Kameswara Rao

Abstract
In this paper, we present Forum Crawler Under Supervision (FoCUS), a supervised web-scale forum crawler. The goal of FoCUS is to crawl relevant forum content from the web with minimal overhead. Forum threads contain information content that is the target of forum crawlers. Although forums have different layouts or styles and are powered by different forum software packages, they always have similar implicit navigation paths connected by specific URL types to lead users from entry pages to thread pages. Based on this observation, we reduce the web forum crawling problem to a URL-type recognition problem. And we show how to learn accurate and effective regular expression patterns of implicit navigation paths from automatically created training sets using aggregated results from weak page type classifiers. Robust page type classifiers can be trained from as few as five annotated forums and applied to a large set of unseen forums.
Full Paper
IJCST/54/4/A-0334
17

Spatial Keyword and Approximate String Search

Danaboina Venkata Padmavathi

Abstract
Geo-textual indices play an important role in spatial keyword querying. The existing geo-textual indices have not been compared systematically under the same experimental framework, which makes it difficult to determine which indexing technique best supports specific functionality. We provide an all-around survey of 12 state-of-the-art geo-textual indices. This work presents a novel index structure, the MHR-tree, for efficiently answering approximate string match queries in large spatial databases.
The MHR-tree is based on the R-tree augmented with min-wise signatures and the linear hashing technique. The min-wise signature for an index node u keeps a concise representation of the union of q-grams from strings under the sub-tree of u. We analyze the pruning functionality of such signatures based on set resemblance between the query string and the q-grams from the sub-trees of index nodes. The MHR-tree supports a wide range of query predicates efficiently, including range and nearest neighbor queries. We also discuss how to estimate range query selectivity accurately, and present a novel adaptive algorithm for finding balanced partitions using both the spatial and string information stored in the tree. Extensive experiments on large real data sets demonstrate the efficiency and effectiveness of our approach. We also propose a benchmark that enables comparison of spatial keyword query performance, and we report on the findings obtained when applying the benchmark to the indices, uncovering new insights that may guide index selection as well as further research.
Full Paper
IJCST/54/4/A-0335
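The min-wise signature machinery described above can be sketched directly: hash a string's q-gram set with k independent min-hashes, then estimate set resemblance (Jaccard similarity) as the fraction of matching signature components. The hash construction and value of k below are illustrative choices, not the paper's.

```python
import hashlib

def qgrams(s, q=2):
    """Set of overlapping q-grams of a string."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def minwise_signature(grams, k=32):
    """k independent min-hashes of a q-gram set: the concise
    summary an MHR-tree index node would keep for its sub-tree."""
    sig = []
    for seed in range(k):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{g}".encode()).hexdigest(), 16)
            for g in grams))
    return sig

def estimated_resemblance(sig_a, sig_b):
    """Fraction of matching min-hashes estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

sim = estimated_resemblance(minwise_signature(qgrams("theatre")),
                            minwise_signature(qgrams("theater")))
far = estimated_resemblance(minwise_signature(qgrams("theatre")),
                            minwise_signature(qgrams("zurich")))
```

A sub-tree whose signature shows near-zero resemblance to the query's q-grams can be pruned without visiting its strings, which is the source of the MHR-tree's speedup.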
18

Content Redistribution for Invitigating Price and Video Streaming in Mobile Networks

Syam Babu Badugu, Akbar Khan, Sayeed Yasin

Abstract
Mobile ad-hoc networks are prone by nature to path breaks and reconnections. Routing protocols such as OLSR, which provide topology information with minimal delay for quick reconfiguration after path breaks, are desired. Applications and protocols should be able to adapt to the dynamics of these networks; however, this is not true for most applications and protocols. For example, a video streaming server does not become aware of a path break because it does not interact with the lower layers that report this event. As a result, important resources such as battery and bandwidth are not used efficiently, and TCP connection failures happen frequently. In this paper, we present a proxy-based solution that detects path breaks and reconnections using the proactive OLSR protocol for ongoing streaming sessions, and we take corrective actions at the application layer to achieve more efficient usage of bandwidth during disconnections.
Full Paper
IJCST/54/4/A-0336
19

To Determine Prediction of Behavioral Structures through Group Sorting

Cherukuri Swathi

Abstract
It is well known that the complexity and volume of data are increasing rapidly on some crowdsourcing websites. Crowdsourcing means outsourcing tasks, traditionally performed by an employee or contractor, to a large group of people. As the rate of submissions increases, shortlisting the winners becomes more expensive and more time consuming.
Data submitted to crowdsourcing websites can be noisy and inconsistent. To overcome these data problems, a text mining method was proposed that performs a number of operations: data extraction, preprocessing, tf-idf computation, and similarity calculation. Results obtained with the existing system show that the k-means algorithm with text mining methods does not do the entire job of evaluating submissions.
The proposed system therefore uses a hierarchical clustering algorithm with text mining methods, and classification for ranking submissions, to overcome the problems present in the existing system.
Full Paper
IJCST/54/4/A-0337
20

Classification of Uncertain Data using Uncertain Decision Tree-Cumulative Distribution Function

Rudra Krishna Srija, K. Shirin Bhanu, G. Loshma

Abstract
Traditional decision tree classifiers work with data whose values are known and precise. We extend such classifiers to handle data with uncertain information. Value uncertainty arises in many applications during the data collection process, for example from multiple repeated measurements. With uncertainty, the value of a data item is often represented not by one single value, but by multiple values forming a probability distribution. Rather than abstracting uncertain data by statistical derivatives (such as the mean and median), we discover that the accuracy of a decision tree classifier can be much improved if the "complete information" of a data item (taking into account its probability density function (PDF)) is utilized.
Extensive experimental results show that the resulting classifiers are more accurate than those using value averages. Since processing PDFs is computationally more costly than processing single values (e.g., averages), decision tree construction on uncertain data is more CPU-demanding than for certain data; to tackle this, pruning techniques are used. We extend classical decision tree building algorithms to handle data tuples with uncertain values using the Uncertain Decision Tree - Cumulative Distribution Function (UDT-CDF) approach.
Full Paper
IJCST/54/4/A-0338
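UDT-CDF's exact procedure is not given here, but the core idea of using the "complete information" of a PDF rather than a point average can be sketched with fractional counts: each uncertain tuple contributes to both sides of a candidate split in proportion to its CDF mass, and split quality is scored with weighted entropy. The uniform PDFs and thresholds below are illustrative assumptions.

```python
import math

def entropy(class_weights):
    """Shannon entropy over (possibly fractional) class weights."""
    total = sum(class_weights.values())
    return -sum((w / total) * math.log2(w / total)
                for w in class_weights.values() if w > 0)

def split_entropy(tuples, threshold):
    """tuples: (cdf, label) pairs, where cdf(t) = P(value <= t).
    Each uncertain tuple contributes fractionally to both branches."""
    left, right = {}, {}
    for cdf, label in tuples:
        p = cdf(threshold)                 # probability mass left of split
        left[label] = left.get(label, 0.0) + p
        right[label] = right.get(label, 0.0) + (1.0 - p)
    n = len(tuples)
    lw, rw = sum(left.values()), sum(right.values())
    return (lw / n) * entropy(left) + (rw / n) * entropy(right)

def uniform_cdf(lo, hi):
    """Uniform PDF on [lo, hi] as a simple CDF family."""
    return lambda t: min(1.0, max(0.0, (t - lo) / (hi - lo)))

data = [(uniform_cdf(0, 4), "a"), (uniform_cdf(1, 5), "a"),
        (uniform_cdf(6, 10), "b"), (uniform_cdf(7, 11), "b")]
good = split_entropy(data, 5.5)   # cleanly separates the classes
bad = split_entropy(data, 8.0)    # mixes probability mass across classes
```

A tree builder would pick the threshold with the lowest weighted entropy; a mean-based classifier would see only one value per tuple and could not distinguish these cases as finely.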
21

Discovery of Sequence and Subsequence Pattern in Sequence Datasets by Using Scalable Parallelizable Induction of Decision Tree Algorithm

Kandregula VaraVenkata Siva Prasad, V. Venkateswara Rao, G. Loshma

Abstract
Sequential pattern mining is the mining of frequently occurring ordered events or subsequences as patterns. An example of a sequential pattern is "Customers who buy a Canon digital camera are likely to buy an HP color printer within a month." Many kinds of sequential patterns can be mined from different kinds of sequence data sets. A sequential dataset corresponds to the contents of a single database table or a single statistical data matrix. Existing sequence mining algorithms mostly focus on mining for subsequences. However, a large class of applications, such as biological DNA and protein motif mining, requires efficient mining of "approximate" patterns that are contiguous. The few existing algorithms that can find such contiguous approximate patterns have drawbacks such as poor scalability, lack of guarantees in finding the pattern, and difficulty in adapting to other applications. In this system, we present a new algorithm called SPRINT to find sequence and subsequence patterns efficiently, eliminating the problems of poor scalability and lack of guarantees in finding patterns. SPRINT is a decision-tree-based parallelized algorithm. It is also accurate, as it always finds the patterns, in parallel, if they exist. Using both real and synthetic data sets, we demonstrate that SPRINT is fast, scalable, and outperforms existing algorithms on a variety of performance metrics.
In addition, based on SPRINT, we also address a more general problem, named extended structured motif extraction, which allows mining frequent combinations of motifs under relaxed constraints. We propose to compare SPRINT with FLAME, a suffix-tree-based algorithm, and to evaluate these algorithms using various benchmark data sets.
Full Paper
IJCST/54/4/A-0339
22

Location Updation for Geographic Routing in Mobile Ad-hoc Networks

A Gopala Krishna, B.Naresh Kumar, Dr. K.Raghavarao

Abstract
In geographic routing, nodes need to maintain up-to-date positions of their immediate neighbors in order to make effective forwarding decisions. Periodic broadcasting of beacon packets that contain the geographic location coordinates of the nodes is a popular technique used by most geographic routing protocols to maintain neighbor positions. We contend and demonstrate that periodic beaconing, regardless of node mobility and traffic patterns in the network, is not attractive from either an update-cost or a routing-performance point of view. We propose the Adaptive Position Update (APU) strategy for geographic routing, which dynamically adjusts the frequency of position updates based on the mobility dynamics of the nodes and the forwarding patterns in the network. APU is based on two simple principles: (i) nodes whose movements are harder to predict update their positions more frequently (and vice versa), and (ii) nodes closer to forwarding paths update their positions more frequently (and vice versa). Our theoretical analysis, which is validated by NS2 simulations of a well-known geographic routing protocol, Greedy Perimeter Stateless Routing (GPSR), shows that APU can significantly reduce the update cost and improve the routing performance in terms of packet delivery ratio and average end-to-end delay compared with periodic beaconing and other recently proposed update schemes. The benefits of APU are further confirmed by evaluations in realistic network scenarios that account for localization error, realistic radio propagation, and sparse networks.
Full Paper
IJCST/54/4/A-0340
23

Efficiently Reducing Routing Overheads – Rebroadcast Technique

T. Prudhvi Kumar, B.Naresh Kumar, Dr. K.Raghavarao

Abstract
Mobile ad-hoc networks provide important control and route-establishment functionality for a number of unicast and multicast protocols. Discovering an effective and efficient routing protocol to transmit information from source to destination across the whole network topology is a main issue in networking research.
Broadcasting is important in MANETs for routing information discovery; protocols such as Ad hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Location-Aided Routing use broadcasting to establish routes. Broadcasting in MANETs poses challenging problems because of the variable and unpredictable characteristics of the medium, as well as the fluctuation of signal strength and propagation with respect to time and environment, leading to bandwidth congestion, channel contention, and packet collisions. To overcome these problems we propose a neighbor-coverage-based probabilistic rebroadcast protocol for reducing routing overhead in MANETs. In order to effectively exploit the neighbor coverage knowledge, we propose a novel rebroadcast delay to determine the rebroadcast order, from which we can obtain a more accurate additional coverage ratio by sensing neighbor coverage knowledge.
We also define a connectivity factor to provide node-density adaptation. This approach can significantly decrease the number of retransmissions, reducing the routing overhead and improving the routing performance. Having found the neighborhood nodes, we use a channel-awareness mechanism for data transmission to improve quality.
Full Paper
IJCST/54/4/A-0341
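The additional coverage ratio mentioned above has a straightforward set formulation: the fraction of a node's neighbours that the sender's broadcast did not already reach. A node rebroadcasts only when that fraction is high enough. The threshold value below is an illustrative assumption, not the protocol's actual parameter.

```python
def additional_coverage(my_neighbors, sender_neighbors):
    """Fraction of this node's neighbours that the sender's
    broadcast did not already reach."""
    mine = set(my_neighbors)
    if not mine:
        return 0.0
    uncovered = mine - set(sender_neighbors)
    return len(uncovered) / len(mine)

def should_rebroadcast(my_neighbors, sender_neighbors, threshold=0.4):
    """Probabilistic-rebroadcast gate: forward only when enough
    new nodes would be covered."""
    return additional_coverage(my_neighbors, sender_neighbors) >= threshold

covers_new = should_rebroadcast([1, 2, 3, 4], sender_neighbors=[2, 3])
redundant = should_rebroadcast([2, 3], sender_neighbors=[1, 2, 3, 4])
```

Nodes whose neighbourhood is already covered stay silent, which is how the scheme cuts redundant retransmissions.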
24

Investigation of Class Specific Feature Selection Methods for Class Imbalance Problems

S.Chinna Gopi, K.Karuna Sree, S.Usha Kiran

Abstract
The class imbalance problem is attracting increasing attention from researchers in the machine learning community. Concerning this problem, wide-ranging research has been carried out from the classification point of view, but little work is available on feature selection algorithms. In this project we focus on the viability of an SVM-based feature selection algorithm on unbalanced datasets.
Full Paper
IJCST/54/4/A-0342
25

Mobile Data-Gathering Mechanisms in Wireless Sensor Networks Using Multiple Mobile Collectors

Fasna Selim

Abstract
A new data-gathering mechanism is developed for large-scale wireless sensor networks by introducing mobility into the network.
A mobile data collector, called an M-collector here, could be a mobile robot or a vehicle equipped with a powerful transceiver and battery, working like a mobile base station and gathering data while moving through the field. An M-collector starts its data-gathering tour periodically from the static data sink, polls each sensor while traversing its transmission range, directly collects data from the sensor in single-hop communication, and finally transports the data to the static sink. Since data packets are gathered directly, without relays or collisions, the lifetime of the sensors is expected to be prolonged. The main focus is on minimizing the length of each data-gathering tour. For applications with strict distance/time constraints, we utilize multiple M-collectors and propose a data-gathering algorithm in which multiple M-collectors traverse several shorter sub-tours concurrently to satisfy the constraints. It is known that data routing can cost significant energy in sensor networks with a flat topology; to overcome this, a hierarchy has been introduced into the network. To make a data-collecting scheme suitable for various network topologies, it is more realistic and efficient to plan the movement of the mobile collectors dynamically based on the distribution of sensors. Single-hop mobile data gathering can improve scalability and balance the energy consumption among sensors, and it can be used in both connected and disconnected networks. The proposed data-gathering algorithm can shorten the moving distance of the collectors and is close to the optimal algorithm for small networks.
Full Paper
IJCST/54/4/A-0343
26

Detection and Removal Algorithm for IPv6 Type 0 Routing Header Vulnerability

Amit Singh

Abstract
In recent years network security has become an important issue. Cryptography has been used to secure data and control access by sharing a private cryptographic key across different devices.
Cryptography renders a message unintelligible to outsiders through various transformations. Data cryptography is the scrambling of content such as text, images, audio, and video to make it unreadable or unintelligible during transmission. Its main goal is to keep the data secure from unauthorized access.
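As a toy illustration of scrambling data with a shared key (this is not the paper's scheme, and the hash-based keystream is purely pedagogical, NOT secure for real use):

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # Toy keystream: hash key || counter repeatedly. Illustration only --
    # real systems should use an authenticated cipher such as AES-GCM.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR-ing with the keystream scrambles the content; applying the same
    # transformation with the same shared key restores it.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

message = b"secret message"
scrambled = xor_crypt(message, b"shared-key")
restored = xor_crypt(scrambled, b"shared-key")
```

The scrambled bytes are unintelligible without the key, while XOR-ing a second time with the same key recovers the original message exactly.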
Full Paper
IJCST/54/4/A-0344
27

Cluster Based Shortcut Routing in Zigbee Network

Jesna N Jayasenan

Abstract
Many resource-limited applications and devices use Zigbee tree routing (ZTR), as it requires no routing table or route-discovery overhead to send a packet to its destination. The fundamental limitation of ZTR is that packets must follow the tree topology, so it does not provide optimal routing paths, and the limited tree links cause a traffic-concentration problem. To overcome these problems, a clustering scheme is introduced into the Zigbee network. It uses the clustering structure to decrease average end-to-end delay and improve the average packet delivery ratio. Clustering divides the network into interconnected substructures; each cluster head temporarily acts as a base station in its zone and communicates with the other substructures, or clusters. To forward data packets, the scheme also uses neighbor table information to find a shorter path to the destination. In this method routing is performed quickly and error tolerance is increased, because routing is determined by the cluster address: the failure of any node on the route is detected by the cluster head, which forwards the packet through an alternate node.
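The shortcut idea, forwarding to whichever neighbor-table entry minimizes the remaining tree distance, can be sketched as follows. The parent-pointer tree representation and helper names are assumptions for illustration, not the paper's data structures:

```python
def tree_hops(u, v, parent):
    # Hop count along the tree path from u to v (via their lowest common
    # ancestor), mimicking the path a pure ZTR packet would take.
    def ancestors(x):
        path = [x]
        while x in parent:
            x = parent[x]
            path.append(x)
        return path
    au, av = ancestors(u), ancestors(v)
    common = next(a for a in au if a in av)      # lowest common ancestor
    return au.index(common) + av.index(common)

def next_hop(current, dest, parent, neighbors):
    # Shortcut tree routing: among the tree link to the parent and all
    # neighbor-table entries, pick the node closest to dest in tree hops.
    candidates = set(neighbors.get(current, []))
    if current in parent:
        candidates.add(parent[current])
    return min(candidates, key=lambda n: tree_hops(n, dest, parent))

# Tree: A is root; B, C are A's children; D under B, E under C.
parent = {"B": "A", "C": "A", "D": "B", "E": "C"}
neighbors = {"D": ["B", "E"]}                    # D overhears E directly
```

Pure tree routing from D to E takes 4 hops (D-B-A-C-E), but because E appears in D's neighbor table, the shortcut delivers the packet in a single hop.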
Full Paper
IJCST/54/4/A-0345
28

Anomaly Detection Using Principal Component Analysis

Adathakula Sree Deepthi, Dr. K.Venkata Rao

Abstract
Anomaly detection is the identification of items, events or observations which do not conform to an expected pattern or other items in a dataset. Typically the anomalous items will translate to some kind of problem such as bank fraud, a structural defect, medical problems or finding errors in text. Anomalies are also referred to as outliers, novelties, noise, deviations and exceptions. Many techniques employed for detecting outliers are fundamentally identical but with different names chosen by the authors. In the most general case, an anomaly detector can detect deviations from an established baseline profile that characterizes normal behavior. Anomaly detection finds extensive use in a wide variety of applications such as fraud detection for credit cards, insurance or health care, intrusion detection for cyber-security, fault detection in safety critical systems, and military surveillance for enemy activities. The importance of anomaly detection is due to the fact that anomalies in data translate to significant actionable information in a wide variety of application domains. Principal Component Analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables. In this paper we discuss various anomaly detection techniques and their merits and demerits.
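A common PCA-based detector scores each observation by its reconstruction error from the top-k principal components; samples that deviate from the baseline correlation structure get large scores. This minimal numpy sketch illustrates the idea (the function name and threshold-free scoring are assumptions, not any specific method from the paper):

```python
import numpy as np

def pca_anomaly_scores(X, k):
    # Project each centered sample onto the top-k principal components,
    # reconstruct it, and score it by the reconstruction error norm.
    mu = X.mean(axis=0)
    Xc = X - mu
    # principal directions = right singular vectors of the centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                  # d x k matrix of principal directions
    X_hat = Xc @ W @ W.T          # reconstruction from k components
    return np.linalg.norm(Xc - X_hat, axis=1)

# Ten correlated points on the line y = x, plus one off-line outlier.
X = np.array([[i, i] for i in range(10)] + [[5.0, -5.0]], dtype=float)
scores = pca_anomaly_scores(X, k=1)
```

With k = 1 the normal points lie near the first principal direction and reconstruct well, while the outlier at (5, -5) has by far the largest reconstruction error.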
Full Paper
IJCST/54/4/A-0346
29

Cloud Storage- Security and Privacy

Mula Aswini, Sri. P.Suresh Babu

Abstract
The Cloud has become popular for delivering resources such as computing and storage to customers on demand. Cloud Storage is a service where data is remotely maintained, managed, and backed up. The service is available to users over a network, which is usually the internet. Users should also be aware that backing up their data is still required when using cloud storage services, because recovering data from cloud storage is much slower than local backup. Although there are downsides to cloud storage, many organizations believe the benefits to far outweigh the risks.
The cost savings, disaster recovery, security, and accessibility are just a few of the intriguing benefits to businesses. Cloud storage can reduce costs, simplify IT management, improve user experience, and allow employees to work and collaborate from remote locations. This simplifies sharing and collaboration among staff and eases IT logistics as a whole. As enterprises make plans to deploy applications in private and public cloud environments, new security challenges need to be addressed. Optimal cloud security practices should include encryption of sensitive data used by cloud-based virtual machines; centralized key management that allows the user to control cloud data; and ensuring that cloud data is accessible according to established enterprise policies.
A user’s personal data may be scattered across various virtual data centers rather than residing in a single physical location, possibly even across national borders, so data privacy protection faces the controversy of differing legal systems. On the other hand, users may leak hidden information when accessing cloud computing services, and attackers can analyze the computing tasks submitted by users to infer critical information.
Full Paper
IJCST/54/4/A-0347
30

Iterative Duplicate Detection in XML Data

G.Ravikumar, Medara Rambabu

Abstract
Duplicate detection is the task of detecting multiple representations of the same real-world object, for every object represented in a data source. It is relevant in data cleaning and data integration applications and has been studied extensively for relational data describing a single type of object in a single table. This paper focuses on iterative duplicate detection in XML data. We consider detecting duplicates across multiple types of objects related to each other and devise methods adapted to semi-structured XML data; relationships between different types of objects form either a hierarchical structure or a graph structure. Iterative duplicate detection requires a similarity measure to compare pairs of object representations, called candidates, based on the descriptive information of each candidate. The distinction between candidates and their descriptions is not straightforward in XML, but we show that these descriptions can be determined semi-automatically using heuristics and conditions.
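One simple instance of such a similarity measure, given here as an assumed illustration rather than the paper's measure, compares candidates by the Jaccard similarity of token sets drawn from their descriptive child elements:

```python
import xml.etree.ElementTree as ET

def description(elem):
    # Heuristic candidate description: the set of lower-cased tokens in
    # the element's own text and its direct children's text.
    tokens = set()
    for node in [elem] + list(elem):
        if node.text:
            tokens |= set(node.text.lower().split())
    return tokens

def duplicates(xml_str, tag, threshold=0.5):
    # Compare every pair of candidates with the given tag; pairs whose
    # Jaccard similarity reaches the threshold are reported as duplicates.
    root = ET.fromstring(xml_str)
    cands = root.findall(f".//{tag}")
    pairs = []
    for i in range(len(cands)):
        for j in range(i + 1, len(cands)):
            a, b = description(cands[i]), description(cands[j])
            sim = len(a & b) / len(a | b) if a | b else 0.0
            if sim >= threshold:
                pairs.append((i, j))
    return pairs

xml_str = (
    "<db>"
    "<movie><title>Star Wars</title><year>1977</year></movie>"
    "<movie><title>star wars</title><year>1977</year></movie>"
    "<movie><title>Alien</title><year>1979</year></movie>"
    "</db>"
)
found = duplicates(xml_str, "movie")
```

The first two movie candidates share all description tokens and are flagged as duplicates; the third shares none and is not.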
Full Paper
IJCST/54/4/A-0348