
INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST)-VOL. IV, ISSUE III, VER. 2, JULY TO SEPT, 2013


S.No. Research Topic Paper ID
32 EBSO: Network Lifespan Enhancement Using Elephant Based Swarm Optimization in Wireless Sensor Network

Chandramouli H, Dr. Somashekhar C Desai

Abstract

Robust, complex real-time applications and dramatically increased sensor capabilities can play a vital role in enhancing the lifespan of WSNs. However, the majority of WSNs operate on battery-powered infrastructure, so maximizing network lifetime requires robust, highly efficient protocols that reduce battery utilization as well as the overall computational and communication complexity. Optimizations need to be adopted at the routing, MAC and radio layers of the wireless sensor node. A cross-layer approach can play a significant role in improving network characteristics such as QoS, lifespan and throughput. The behavior of large elephant swarms motivates incorporating their behavior into wireless sensor networks, which is done here using a cross-layer approach. In this paper, to achieve better network performance, an elephant swarm optimization approach has been implemented that enables optimization of the routing algorithm, adaptive radio link optimization and balanced TDMA-MAC scheduling. The proposed Elephant Based Swarm Optimization (EBSO) approach is compared with the popular LEACH and PSO protocols, and the results show that EBSO outperforms both.
Full Paper

IJCST/43/2/C-1624
33 Performance Behavior of AODV and a New Optimised Technique of Load Balancing in Vehicular Network

Dr. R.K.Chauhan, Arzoo Dahiya

Abstract

In the field of networking and communication, one of the most attractive research topics is inter-vehicle communication, i.e., the realization of a mobile ad hoc network. A rich literature on vehicular networks explores the special characteristics of VANETs, but the proposed protocols are mostly geography based. VANETs have unique characteristics that distinguish them from other mobile ad hoc routing environments, so it becomes a crucial and challenging task to design a routing protocol that meets every requirement of road safety. In this paper, the performance of the on-demand routing protocol AODV is analysed by means of packet delivery ratio, end-to-end delay, packet loss ratio and normalised routing load with varying speed under TCP connections. Based on its behavior, a new technique is introduced to balance the load in the network so that data is delivered to the destination vehicle in time, which is an utmost requirement for avoiding accidents on the road.
Full Paper

IJCST/43/2/C-1625
34 Performance Evaluation of Routing Protocols in MANET

Nitish Jindal, Dr. Himanshu Aggarwal

Abstract

Mobile Ad-Hoc Networks (MANETs) have opened a new dimension in wireless networks, allowing mobile nodes to communicate in the absence of centralized support. A MANET does not follow any fixed infrastructure, owing to the high mobility of nodes and multipath propagation. It is highly deployable, self-configurable and has dynamically changing topologies. MANET protocols face significant challenges due to changing topologies, low transmission power and asymmetric links. Because of link instability, node mobility and frequently changing topologies, routing is one of the core issues in MANETs, and a suitable, effective routing mechanism helps extend their successful deployment. Currently existing routing protocols provide routing solutions only up to a certain level, and most are designed and evaluated in small areas; many researchers are still working on the development of MANET routing protocols. In this paper, we concentrate on the average end-to-end delay and throughput of the DYMO, OLSR and ZRP routing protocols using the QualNet simulator.
Full Paper

IJCST/43/2/C-1626
35 Distributed Database Management System and Query Processing

Jayashree

Abstract

This paper presents an introduction to distributed database design through a study targeting two main parts. The first part explores the fundamentals of distributed database systems (DDBS) and the alternatives in their design; these alternatives address issues such as architecture, distribution and concurrency control. The second part examines query processing in a distributed system, which requires the transmission of data between computers in a network; the arrangement of data transmissions and local data processing is known as a distribution strategy for a query. Optimal algorithms are used as a basis to develop a general query processing algorithm.
Full Paper

IJCST/43/2/C-1627
36 An Extended VPS Based on ARM

Dileep Gatla, S.Sandyarani

Abstract

This paper discusses a Vehicle Positioning System (VPS) based on an ARM processor, designed with an extension for estimating the vehicle's future location. It explores the location solution, map matching and data compression associated with positioning, and presents the flowchart of the VPS. The system is designed using a combination of GSM and Global Positioning System technologies in order to obtain the position of the vehicle both at the present time and in the future.
Full Paper

IJCST/43/2/C-1628
37 Performance of Data Cleaning Techniques by Using C-Tane Algorithm

B.Revanth, P.Naga Raju, M.Durga Satish

Abstract

Conditional Functional Dependencies (CFDs) are an extension of Functional Dependencies (FDs) that support patterns of semantically related constants, and they can be used as rules for cleaning relational data. However, finding CFDs is an expensive process that involves intensive manual effort. To effectively identify data cleaning rules, we examine four techniques for discovering cleaning rules from sample relations. CFDMiner is based on techniques for mining closed itemsets and is used to detect constant CFDs, namely CFDs with constant patterns only; it provides an efficient heuristic algorithm for discovering patterns from a fixed FD and leverages closed-itemset mining to reduce the search space. CTANE works well when the arity of a sample relation is small and the support threshold is high, but it scales poorly as the arity of a relation increases. FastCFD is more efficient when the arity of a relation is large. The Greedy Method is formally based on the desirable properties of support and confidence; it studies the computational complexity of automatically generating optimal tableaux and provides an efficient approximation algorithm. These techniques have already been implemented in previous papers. We take the algorithms of these four techniques, determine the time and space complexity of each to establish which technique is helpful in which case, and display the results as line and bar charts.
Full Paper

IJCST/43/2/C-1629
38 Learning Data: Intrusion Detection

R Kiran Kumar, Pilla Sita Rama Murty, M.V. Durga Rao, D. John Subuddhi

Abstract

During the last two decades the research community has extensively advocated the use of machine learning systems for network security (anomaly detection). This paper concentrates on the design procedure of machine learning systems, explaining the basic terminology and specifying the procedures for creating training and test datasets and for choosing among the several performance measures available. Experiments comparing state-of-the-art machine learning algorithms, taking the ROC curve as the performance measure, are conducted and the results are given. The algorithms compared are AdaBoost, Bagging, KNN, SVM and MLP. For the experiments, the KDD 99 intrusion dataset and an email spam database are used. Before concluding, several kinds of attacks aimed at machine learning systems are described.
Full Paper

IJCST/43/2/C-1630
39 A Secure Scalable Intruder Detection System in Mobile Ad-Hoc Networks

C. Kalpana, Guru. Kesava Dasu

Abstract

The notion of an ad hoc network is a new paradigm that allows mobile hosts (nodes) to communicate without relying on a predefined infrastructure to keep the network connected. Most nodes are assumed to be mobile and communication is assumed to be wireless. The mobility of nodes in an ad hoc network means that both the population and the topology of the network are highly dynamic, making it very difficult to design a once-for-all intrusion detection system. A secure protocol should at least include mechanisms against known attack types; in addition, it should provide a scheme to easily add new security features in the future. This paper includes a detailed description of the proposed intrusion detection system based on a local reputation scheme. The proposed system also includes the concepts of redemption and fading, mechanisms that allow nodes previously considered malicious to become part of the network again. The simulation of the proposed system is to be done using the NS-2 simulator.
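The redemption and fading idea described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's actual scheme: the class name, thresholds and fading factor are all invented for the example. Each node keeps a local reputation for its neighbours; scores fade toward neutral each epoch, so a blacklisted node that resumes forwarding correctly is eventually redeemed.

```python
class ReputationTable:
    """Local reputation with fading and redemption (illustrative sketch)."""

    def __init__(self, fading=0.9, blacklist_below=-3.0, redeem_above=-1.0):
        self.fading = fading                  # decay factor applied each epoch
        self.blacklist_below = blacklist_below
        self.redeem_above = redeem_above
        self.score = {}                       # neighbour id -> reputation score
        self.blacklist = set()

    def observe(self, node, forwarded):
        """Record one forwarding observation (+1 good, -1 misbehaviour)."""
        self.score[node] = self.score.get(node, 0.0) + (1.0 if forwarded else -1.0)
        self._update(node)

    def end_epoch(self):
        """Fade all scores toward 0, so old misbehaviour is slowly forgotten."""
        for node in self.score:
            self.score[node] *= self.fading
            self._update(node)

    def _update(self, node):
        s = self.score[node]
        if s < self.blacklist_below:
            self.blacklist.add(node)          # considered malicious
        elif node in self.blacklist and s > self.redeem_above:
            self.blacklist.discard(node)      # redemption: readmit the node
```

With these (arbitrary) thresholds, a neighbour that drops five packets is blacklisted, and a few epochs of fading plus good behaviour lift it back out.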
Full Paper

IJCST/43/2/C-1631
40 Load Balancer for SIP Server Clusters

Kilari Srinivasa Vara Prasad, Gumamahesh

Abstract

Load-balancing algorithms are introduced for distributing Session Initiation Protocol (SIP) requests to a cluster of SIP servers. The algorithms improve both throughput and response time for different users. This paper builds a prototype of the system on the Linux operating system. The Transaction Least-Work-Left (TLWL) algorithm plays the leading role, recognizing the variability in processing costs of different SIP transactions. By combining features of different algorithms, it provides good response-time improvements, and we present a detailed analysis showing how our algorithms significantly reduce response time.
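The TLWL dispatch rule can be sketched roughly as follows. This is an assumption-laden sketch, not the paper's implementation: the 1.75 INVITE/BYE cost ratio is illustrative only, and the class and method names are invented. Each back-end server is tracked by the weighted amount of transaction work still outstanding; a new call goes to the server with the least work left, while later transactions of the same call stick to the server already handling it.

```python
INVITE_WEIGHT = 1.75   # assumed cost ratio: an INVITE costs more than a BYE
BYE_WEIGHT = 1.0

class TlwlBalancer:
    def __init__(self, servers):
        self.work = {s: 0.0 for s in servers}   # weighted work left per server
        self.call_server = {}                   # call id -> assigned server

    def dispatch(self, call_id, transaction):
        weight = INVITE_WEIGHT if transaction == "INVITE" else BYE_WEIGHT
        if call_id in self.call_server:
            server = self.call_server[call_id]          # session affinity
        else:
            server = min(self.work, key=self.work.get)  # least work left
            self.call_server[call_id] = server
        self.work[server] += weight
        return server

    def complete(self, call_id, transaction):
        """Called when the server finishes the transaction."""
        weight = INVITE_WEIGHT if transaction == "INVITE" else BYE_WEIGHT
        self.work[self.call_server[call_id]] -= weight
```

Weighting transaction types differently is what distinguishes TLWL from a plain least-connections balancer.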
Full Paper

IJCST/43/2/C-1632
41 Efficiently Identifying Cuts in Wireless Sensor Network

K.Umakumari, J.Mahalakshmi

Abstract

In this paper, we propose a distributed algorithm that allows every node to monitor the topology of the (initially connected) graph and detect if a cut occurs. For reasons that will become clear, one node of the network is designated as the "source node". The algorithm consists of every node updating a local state periodically by communicating with its nearest neighbors. The state of a node converges to a positive value in the absence of a cut; if a node is disconnected from the source as a result of a cut, its state converges to 0. By monitoring its state, therefore, a node can determine whether it has been separated from the source node. In addition, the nodes that are still connected to the source are able to detect that, one, a cut has occurred somewhere in the network, and two, they are still connected to the source node. Because the algorithm is iterative, a faster convergence rate is desirable for it to be effective. The convergence rate of the proposed algorithm is not only quite fast, but is independent of the size of the network. As a result, the delay between the occurrence of a cut and its detection by all the nodes can be made independent of the size of the network. This last feature makes the algorithm highly scalable to large sensor networks.
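The state-update rule described above can be simulated with a simple sketch (the injected constant `s = 100` and the update form are illustrative assumptions, not the paper's exact equations): each node repeatedly averages its neighbours' states, with the source node additionally injecting a constant. States in the source's component settle at positive values, while states in a component cut off from the source decay to zero.

```python
def iterate_states(edges, nodes, source, s=100.0, rounds=300, init=None):
    """Synchronous state iteration: x_v <- (sum of neighbour states
    + s if v is the source) / (degree(v) + 1)."""
    nbrs = {v: set() for v in nodes}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    x = dict(init) if init is not None else {v: 0.0 for v in nodes}
    for _ in range(rounds):
        x = {v: (sum(x[u] for u in nbrs[v]) + (s if v == source else 0.0))
                / (len(nbrs[v]) + 1)
             for v in nodes}
    return x

# 4-node path 0-1-2-3, node 0 is the source
nodes = [0, 1, 2, 3]
before = iterate_states([(0, 1), (1, 2), (2, 3)], nodes, source=0)
# cut the 1-2 link; nodes 2 and 3 are now separated from the source
after = iterate_states([(0, 1), (2, 3)], nodes, source=0, init=before)
```

After the cut, the states of nodes 2 and 3 halve every round toward zero, while nodes 0 and 1 converge to new positive values, which is exactly the detection signal the abstract describes.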
Full Paper

IJCST/43/2/C-1633
42 Detecting Misbehaving Forwarders in WSN

P V S Kaumudhi, SK Shafiulilah

Abstract

Providing security in WSNs has become a challenge for researchers. Packet dropping and packet modification are very frequently occurring problems in wireless sensor networks, and they interrupt communication. Many methods have been proposed to overcome these problems, but few have succeeded so far. In this paper we propose an effective scheme that can successfully identify an attacker or intruder who tries to drop or modify packets. We have performed sufficient simulations and experiments to show the efficiency of the scheme.
Full Paper

IJCST/43/2/C-1634
43 Rapid Execution of Database Queries for Reducing I/O and CPU Cost

Venkata Siva Rao.Alapati, Nagul Meeravali.Sayeed

Abstract

Metric databases are databases where a metric distance function is defined for pairs of database objects. In such databases, similarity queries in the form of range queries or k-nearest neighbor queries are the most important queries. In traditional query processing, single queries are issued independently by different users. In many data mining applications, however, the database is typically explored by iteratively asking similarity queries about the answers of previous similarity queries. In this paper, we introduce a generic scheme for such data mining algorithms and develop a method to transform them so that they can use multiple similarity queries, i.e. sets of queries issued simultaneously. We investigate two orthogonal approaches, reducing I/O cost as well as CPU cost, to speed up the processing of multiple similarity queries. The proposed techniques apply to any type of similarity query and to implementations based on an index or on a sequential scan. Parallelization yields an additional impressive speed-up.
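The I/O-sharing idea behind multiple similarity queries can be illustrated in its simplest form (a sketch only, not the paper's optimized algorithms): a batch of range queries is answered in one sequential scan, so each object is read from disk once instead of once per query.

```python
import math

def multi_range_scan(database, queries):
    """Answer several range queries in ONE pass over the data.

    database : list of points (tuples of floats)
    queries  : list of (center, radius) pairs
    Returns one result list per query.  Reading each object once and
    testing it against every pending query is the basic form of I/O
    sharing for multiple similarity queries.
    """
    results = [[] for _ in queries]
    for obj in database:                      # single sequential scan
        for qi, (center, radius) in enumerate(queries):
            if math.dist(obj, center) <= radius:   # Euclidean metric
                results[qi].append(obj)
    return results
```

CPU-side savings (e.g. reusing distance computations across queries) would sit on top of this same loop structure.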
Full Paper

IJCST/43/2/C-1635
44 Capacity Optimization in Wireless Ad-Hoc Network

L Ruth Priya, B Bangar Naidu

Abstract

Cooperative communication has become an important research topic in wireless networks. Most existing work focuses on the data link and physical layers, while the other layers and the network topology are ignored. In this paper, we increase the network capacity of MANETs by considering both the network and physical layers, proposing a new scheme called Capacity-Optimized Cooperative (COCO) topology control. Through simulations, we show that physical-layer cooperative communications have a significant impact on network capacity and substantially improve the capacity of MANETs.
Full Paper

IJCST/43/2/C-1636
45 Lossy Difference Aggregator in Routers for Fine-Grained Latency

G.Samrat Krishna, Dr. P. Bala Krishna Prasad, G.Guru Kesava Dasu

Abstract

Detecting and localizing latency-related problems at the router and switch level is an important task for network operators, as latency-critical applications in data center networks become popular. The resulting fine-grained measurement demands cannot be met effectively by existing technologies such as SNMP, NetFlow or active probing. Instrumenting routers with a hash-based primitive called the Lossy Difference Aggregator (LDA) has been proposed to measure latencies down to tens of microseconds, even in the presence of packet loss; the LDA accurately measures loss and delay over short timescales while providing strong bounds. Because the LDA does not modify or encapsulate the packet, it can be deployed incrementally without changes along the forwarding path. Compared to Poisson-spaced active probing with similar overheads, the LDA mechanism delivers orders of magnitude smaller relative error.
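A minimal single-bank LDA sketch, under simplifying assumptions (one bucket bank, 16 buckets, packet ids hashed directly): sender and receiver each hash every packet id into a bucket, accumulating a timestamp sum and a packet count. After the measurement interval, only buckets whose counts match on both sides (no lost packet fell into them) are usable, and the average delay is the timestamp-sum difference divided by the usable packet count.

```python
B = 16   # number of buckets (illustrative; real LDAs use more and several banks)

def empty_bank():
    return [[0, 0] for _ in range(B)]   # [timestamp_sum, count] per bucket

def record(bank, pkt_id, timestamp):
    b = hash(pkt_id) % B                # same hash on sender and receiver
    bank[b][0] += timestamp
    bank[b][1] += 1

def estimate_delay(send_bank, recv_bank):
    tsum = csum = 0
    for (s_sum, s_cnt), (r_sum, r_cnt) in zip(send_bank, recv_bank):
        if s_cnt == r_cnt and s_cnt > 0:     # bucket untouched by loss
            tsum += r_sum - s_sum
            csum += s_cnt
    return tsum / csum
```

Lost packets corrupt only the buckets they hash into; the remaining buckets still yield an unbiased delay estimate, which is why the structure tolerates loss without per-packet state.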
Full Paper

IJCST/43/2/C-1637
46 Mining of XML Queries With TAR approach

P S L Sravani, Venkata Ramana Adari

Abstract

Information extraction from semi-structured documents has become a very tough task because of the huge amount of digital information available on the Internet. Due to the large size of documents, a query answer may be too big to convey interpretable knowledge. In this paper, we describe a new approach called Tree-Based Association Rules (TARs), which provide information on both the structure and the contents of Extensible Markup Language (XML) documents, and can themselves be stored in XML format. This provides quick, approximate answers to queries. Furthermore, this paper presents a prototype system and discusses experimental results.
Full Paper

IJCST/43/2/C-1638
47 Secure and Reliable Quality of Data Storage in Cloud Computing

Uma Sankar. Yabaji, Chinna Babu Galinki

Abstract

Cloud computing has gained a lot of hype in the current IT world and is said to be the next big thing in computing after the Internet. Cloud computing is the use of the Internet for tasks performed on the computer, and it is visualized as the next-generation architecture of the IT enterprise; the 'cloud' represents the Internet. Cloud computing is related to several technologies, and the convergence of these technologies has come to be called cloud computing. In comparison to conventional approaches, cloud computing moves application software and databases to large data centers, where the data and services may not be fully trustworthy. In this article, I focus on secure data storage in the cloud, an important aspect of quality of service. To ensure the correctness of users' data in the cloud, I propose an effectual and adaptable scheme with salient qualities. This scheme achieves storage correctness, allows only authenticated users to access the data, and provides data error localization, i.e., the identification of misbehaving servers.
Full Paper

IJCST/43/2/C-1639
48 Packet Logging Schemes for IP Tracking

Sri Roja Rani Thota, Darapu Uma, Phani Ratna Sri Redipalli

Abstract

Nowadays the Internet has become popular and is applied in various fields, which raises the importance of security issues and has caught people's attention. However, attackers hide their own IP addresses by spoofing legitimate ones before launching attacks. For this reason, researchers concentrate on schemes to trace the source of such attacks. This paper proposes a new hybrid IP traceback system with efficient packet logging, whose aim is a fixed storage requirement for each router, without refreshing the logged tracking information, while still being able to reconstruct the attack path. It uses a 16-bit marking field, which avoids the packet fragmentation problem. We simulate and analyze our scheme through many experiments and compare the results with existing schemes in aspects such as storage requirement and accuracy.
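The abstract does not spell out its marking or logging format, so the sketch below illustrates only the generic packet-logging half of hybrid traceback, using a fixed-size Bloom filter of packet digests (a standard device in logging-based traceback, not this paper's exact scheme; all names are invented). A router records each forwarded packet's digest; during path reconstruction it can answer "did this packet pass through you?" with bounded storage and a small false-positive rate.

```python
import hashlib

class PacketLog:
    """Fixed-size Bloom-filter log of packet digests (illustrative sketch)."""

    def __init__(self, bits=4096, hashes=3):
        self.bits = bits
        self.hashes = hashes
        self.array = bytearray(bits // 8)   # storage is fixed regardless of traffic

    def _positions(self, digest):
        # derive `hashes` bit positions from the digest
        for k in range(self.hashes):
            h = hashlib.sha256(digest + bytes([k])).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def log(self, packet_digest: bytes):
        for pos in self._positions(packet_digest):
            self.array[pos // 8] |= 1 << (pos % 8)

    def seen(self, packet_digest: bytes) -> bool:
        # "no" is definite; "yes" may be a false positive
        return all(self.array[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(packet_digest))
```

The fixed `bits` budget is what gives each router the constant storage requirement the abstract emphasizes.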
Full Paper

IJCST/43/2/C-1640
49 Protected Scalable TPA in Cloud Computing

Raja Rajeswari Jetti, G.Padmarao

Abstract

Cloud computing is the newest term for the long-dreamed vision of computing as a utility. The cloud provides convenient, on-demand network access to a centralized pool of configurable computing resources that can be rapidly deployed with great efficiency and minimal management overhead. Industry leaders and customers have wide-ranging expectations for cloud computing, in which security concerns remain a major aspect. Application software and databases move to centralized large data centers in the cloud, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges which have not been well understood. This work studies the problem of ensuring the integrity of data storage in cloud computing. In particular, we consider the task of allowing a Third Party Auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of a TPA relieves the client of auditing whether his data stored in the cloud is indeed intact, which can be important in achieving economies of scale for cloud computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion and deletion, is also a significant step toward practicality, since services in cloud computing are not limited to archive or backup data only. While prior work on ensuring remote data integrity often lacks support for either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions of prior work with fully dynamic data updates, and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design.
In particular, to achieve efficient data dynamics, we improve the existing proof-of-storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously.
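The Merkle Hash Tree used for block tag authentication can be sketched as follows (a generic MHT, not the paper's full protocol; a power-of-two block count is assumed for brevity). The auditor holds only the root; the server returns a block plus the sibling hashes on its path, and the auditor recomputes the root to verify the block is intact.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Levels of the Merkle tree, leaves first, root last."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Sibling hashes (with a left/right flag) from leaf `index` to the root."""
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify(root, block, path):
    """Recompute the root from the block and its authentication path."""
    h = H(block)
    for sibling, sibling_is_left in path:
        h = H(sibling + h) if sibling_is_left else H(h + sibling)
    return h == root
```

Updating, inserting or deleting a block only changes the hashes on one root-to-leaf path, which is what makes the MHT attractive for the dynamic data operations the abstract describes.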
Full Paper

IJCST/43/2/C-1641
50 Fast Converge Cast in Wireless Sensor Networks

M.V.B.Murali Krishna.M, Darapu Uma, Phani Ratna Sri Redipalli

Abstract

In this advanced and fast-paced world, people do not want to wait long for information, so collecting information quickly has become a challenge for researchers. This work discusses fast data collection in WSNs with a tree-based routing protocol. First we consider time scheduling on a single frequency channel to minimize the number of time slots, and then we combine scheduling with transmission power control to overcome the effects of interference. On a single frequency channel this is limited, but with multiple frequencies it is more efficient. We also evaluate the performance of various channel assignment methods; the use of multi-frequency scheduling can be sufficient to eliminate most of the interference. The data collection rate then increases further as interference is reduced through the topology of the routing tree.
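The single-channel scheduling step can be illustrated with a toy TDMA slot assignment on the routing tree (a greedy sketch under a simplified interference model, where only edges sharing an endpoint conflict because a node cannot transmit and receive at once; this is not the paper's algorithm).

```python
def assign_slots(parent):
    """parent: dict child -> parent on the routing tree.
    Greedily give each edge (child -> parent) the smallest TDMA slot
    not used by any already-scheduled edge sharing an endpoint with it."""
    slot = {}
    for child, par in parent.items():
        used = {slot[c] for c, p in parent.items()
                if c in slot and {c, p} & {child, par}}
        s = 0
        while s in used:
            s += 1
        slot[child] = s
    return slot
```

On a tree this greedy pass needs at most as many slots as the maximum node degree; power control and multiple frequencies, as in the abstract, then relax the conflict set and shorten the schedule further.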
Full Paper

IJCST/43/2/C-1642
51 SPOT: Detecting Compromised Machines in Internet

M.Janakiramarao, Darapuuma, R.Phaniratnasri

Abstract

Machines compromised by attackers are among the main security threats on the web; they launch various attacks to spread malware and mount DDoS attacks. Such compromised systems involved in spamming activities are commonly known as spam zombies. In this paper, we discuss a system called SPOT, which detects spam zombies by monitoring the outgoing messages of a network. SPOT is designed using the Sequential Probability Ratio Test (SPRT), a statistical tool with bounded false positive and false negative error rates. FireCol acts as a set of Intrusion Prevention Systems (IPSs) located at Internet service providers; the IPSs create virtual protection rings to protect selected traffic information. We also evaluate the performance of the SPOT system using an e-mail trace. Our experiments show that SPOT is an effective and efficient system for detecting spam zombies in a network, and a comparison with other methods shows that SPOT is more efficient than existing ones.
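The SPRT underlying SPOT can be sketched as follows (the spam probabilities and error rates below are illustrative values, not those from the paper). For each machine, a log-likelihood ratio is accumulated over its outgoing messages; crossing the upper threshold declares the machine compromised, crossing the lower one declares it normal, and otherwise observation continues.

```python
import math

def sprt_classify(observations, theta0=0.2, theta1=0.9, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test sketch.

    observations: iterable of 1 (spam) / 0 (non-spam) for one machine.
    H1: machine compromised (spam prob theta1); H0: normal (theta0).
    alpha/beta bound the false positive / false negative rates.
    Returns "compromised", "normal", or "undecided".
    """
    upper = math.log((1 - beta) / alpha)     # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))     # accept H0 at or below this
    llr = 0.0
    for x in observations:
        if x:
            llr += math.log(theta1 / theta0)
        else:
            llr += math.log((1 - theta1) / (1 - theta0))
        if llr >= upper:
            return "compromised"
        if llr <= lower:
            return "normal"
    return "undecided"
```

The appeal of the SPRT here is that it reaches a decision after only a handful of messages on average while still honouring the configured error bounds.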
Full Paper

IJCST/43/2/C-1643
52 Scalable and Robust Searching of Content Based Image Retrieval System

D.Shiva Shanker, T.Srikanth

Abstract

One key challenge in Content-Based Image Retrieval (CBIR) is to develop a fast solution for indexing high-dimensional image content, which is crucial to building large-scale CBIR systems. This paper introduces the problems and challenges concerned with the design and creation of CBIR systems based on a free-hand sketch (Sketch-Based Image Retrieval, SBIR). Building on existing methods, we describe a possible way to design and implement a task-specific descriptor that can handle the informational gap between a sketch and a colored image, thereby enabling efficient search. The descriptor is constructed through a special sequence of preprocessing steps so that the transformed full-color image and the sketch can be compared. We have studied EHD, HOG and SIFT. Experimental results on two sample databases are good. Overall, the results show that the sketch-based system gives users intuitive access to search tools.
Full Paper

IJCST/43/2/C-1644
53 User Search Optimization

Ravindra Reddy Koyya, Phani Ratna Sri Redipalli

Abstract

People increasingly use the Internet for different purposes and give complex tasks to the web. During travel planning or online purchasing, for example, we give the web a complex job, issuing multiple queries repeatedly over a long period of time. To better support users at this stage, search engines keep track of the queries a user raises while searching online. This paper analyzes and studies the problem of organizing a user's queries into different groups dynamically. In our approach, we propose a method which not only takes care of textual similarity and time thresholds, but also leverages search query logs. We perform different experiments and show that this technique works better than previous ones.
Full Paper

IJCST/43/2/C-1645
54 An Efficient Mechanism for Detecting the Human Interaction in Meetings

Sadhana Sivaji, V Ganesh Dutt, B.Suryanarayana Murthy

Abstract

Data mining is widely applied in the database research area to extract frequent correlations of values from both structured and semi-structured datasets. In this work we describe an approach that uses decision tree learning to mine association rules from how people interact in a meeting. We propose a mining method to extract frequent patterns of human interaction based on the captured content of face-to-face meetings. Human interactions, such as proposing an idea, giving comments, and expressing a positive opinion, indicate a user's intention toward a topic or role in a discussion. The human interaction flow in a discussion session is represented as a tree, and decision tree interaction mining algorithms are designed to analyze the structure of the trees and extract interaction flow patterns. The experimental results show that we can successfully extract several interesting patterns that are useful for interpreting human behaviour in meeting discussions, such as frequent interactions, typical interaction flows, and relationships between different types of interactions.
Full Paper

IJCST/43/2/C-1646
55 A Distributed Algorithm For DOS and CCOS Events in Wireless Sensor Networks

P.Subhakar, R.Phani Rantna Sri

Abstract

A cut is a partition of a wireless sensor network into separate connected components caused by the failure of some nodes. This paper proposes a new algorithm that allows the remaining nodes to detect such cuts. The algorithm works in two ways: 1) every node detects when a specially designated node has become unreachable, and 2) one or more nodes that remain connected to the special node after the cut detect that a cut has occurred. The algorithm is based on a fictitious "electrical potential" of the nodes; it is an iterative method, and the convergence rate of the scheme is independent of the size of the network.
Full Paper

IJCST/43/2/C-1647
56 A Novel Approach for Measuring Similarities Between Objects

G.Kiran Varma, K.N.V.Devendra Kumar

Abstract

Clustering is one of the most interesting and important topics in data mining. The aim of clustering is to find intrinsic structures in data and organize them into meaningful subgroups. The main concept here is measuring similarities/dissimilarities from multiple viewpoints. Existing algorithms for text mining use a single viewpoint for measuring similarity between objects; their drawback is that the resulting clusters cannot exhibit the complete set of relationships among objects. To overcome this drawback, we propose a new similarity measure, a hierarchical multi-viewpoint based similarity measure, to ensure that the clusters show all relationships among objects. We also propose two clustering methods. The empirical study revealed that the hypothesis "multi-viewpoint similarity can bring about more informative relationships among objects and thus more meaningful clusters" holds, and that the measure can be used in real-time applications where text documents are searched or processed frequently.
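The multi-viewpoint idea can be sketched numerically (a simplified illustration, not the paper's hierarchical measure): instead of computing a single cosine between two documents from the origin, their similarity is averaged over the remaining documents, each acting in turn as the viewpoint from which the two are compared.

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    return num / (math.hypot(*u) * math.hypot(*v))

def mvs(i, j, docs):
    """Multi-viewpoint similarity sketch: the cosine between docs[i]
    and docs[j] is averaged with every other document h as the origin."""
    total, n = 0.0, 0
    for h, dh in enumerate(docs):
        if h in (i, j):
            continue
        u = [a - b for a, b in zip(docs[i], dh)]   # docs[i] seen from dh
        v = [a - b for a, b in zip(docs[j], dh)]   # docs[j] seen from dh
        total += cosine(u, v)
        n += 1
    return total / n
```

Documents that look alike from every other document's vantage point score near 1, while a single-viewpoint cosine only captures their angle from one fixed origin.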
Full Paper

IJCST/43/2/C-1648
57 A Novel Approach for Secure Multi Cloud System

Venkanna Vegirouthu, K.N.V.Devendra Kumar

Abstract

The end of this decade is marked by a paradigm shift of industrial information technology towards a subscription-based or pay-per-use service business model known as cloud computing. Cloud computing is a very broad concept requiring efficient and effective security services. Cloud data storage redefines the security issues targeted at a customer's outsourced data (data that is not stored on or retrieved from the customer's own servers). In this work we observe that, from a customer's point of view, relying upon a single SP for outsourced data is not very promising. In addition, better privacy as well as data availability can be achieved by dividing the user's data block into pieces and distributing them among the available SPs in such a way that no fewer than a threshold number of SPs must take part in successful retrieval of the whole data block. In this paper, we propose a Secured Multi-Cloud Storage (SMCS) model in cloud computing which distributes data economically among the available SPs in the market, providing customers with data availability as well as secure storage.
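The abstract does not name its splitting scheme, but the threshold property it describes ("no fewer than a threshold number of SPs can retrieve the block") is exactly what Shamir's (k, n) secret sharing provides, so a sketch of that standard technique may clarify the idea; it is offered as one possible realization, not the paper's construction.

```python
import random

PRIME = 2**61 - 1   # prime field; a data block is a number below this

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it,
    while k-1 or fewer reveal nothing about it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):          # one share per storage provider
        y = 0
        for c in reversed(coeffs):     # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Distributing one share per SP gives both goals in the abstract at once: availability (any k of the n providers suffice) and privacy (a colluding minority learns nothing).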
Full Paper

IJCST/43/2/C-1649
58 Efficient Channel Assessment for Wireless Area-Based Disseminate Service

Mastan Vali Patan, Rambabu Pemula

Abstract

In a wireless Disseminate Service (MBS), the common channel is used to multicast the MBS content to the Mobile Stations (MSs) on MBS calls within the coverage area of a Base Station (BS); this causes interference to the dedicated channels serving traditional calls and degrades system capacity. The MBS zone technology is proposed in Mobile Communications Network (MCN) standards to improve system capacity and reduce the handoff delay for wireless MBS calls. In the MBS zone technology, a group of BSs forms an MBS zone, where macro diversity is applied in the MS: the BSs synchronize to transmit the MBS content on the same common channel, interference caused by the common channel is reduced, and MBS MSs need not perform handoff while moving between BSs in the same MBS zone. However, when there is no MBS MS in a BS using the MBS zone technology, the transmission on the common channel wastes the bandwidth of that BS. It is therefore an important issue to determine the condition under which the MBS Controller (MBSC) should enable the MBS zone technology, considering the QoS of both traditional and MBS calls. In this paper, we propose two efficient channel assessment schemes, DCA and EDCA, which consider the condition for enabling the MBS zone technology. Analysis and simulation experiments are conducted to investigate the performance of DCA and EDCA.
Full Paper

IJCST/43/2/C-1650
59 Information Privacy in Cloud Computing

Syed Shareef, Shaik Abdul

Abstract

Offering strong Information Privacy to cloud users while enabling rich applications is a challenging task. We explore a new cloud platform architecture called Data Protection as a Service, which dramatically reduces the per-application development effort required to offer information privacy, while still allowing rapid development and maintenance.
Full Paper

IJCST/43/2/C-1651
60 Data Dissemination for High Quality Switch Routing

Shaik Gouse Basha, Syed Abdul Haq

Abstract

High-speed routers rely on well-designed packet buffers that support multiple queues, provide large capacity, and offer short response times. Some researchers have suggested combined SRAM/DRAM hierarchical buffer architectures to meet these challenges. However, these architectures suffer from either a large SRAM requirement or high time-complexity in memory management. In this paper, we present a scalable, efficient, and novel Data Dissemination architecture. Two fundamental issues need to be addressed to make this architecture feasible: (1) how to minimize the overhead of an individual packet buffer; and (2) how to design scalable packet buffers using independent buffer subsystems. We address these issues by first designing an efficient compact buffer that reduces the SRAM size requirement by (k − 1)/k. Then, we introduce a feasible way of coordinating multiple subsystems with a load-balancing algorithm that maximizes overall system performance. Both theoretical analysis and experimental results demonstrate that our load-balancing algorithm and the Data Dissemination architecture can easily scale to meet the buffering needs of high-bandwidth links and satisfy the requirements of scale and support for multiple queues.
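The coordination of independent buffer subsystems can be illustrated with a minimal sketch. The class name and the least-loaded policy are assumptions for illustration, not the paper's algorithm:

```python
# Illustrative sketch only: distribute incoming packets over k independent
# buffer subsystems, always appending to the least-loaded one so that no
# single subsystem's small SRAM head/tail cache becomes a bottleneck.

class BalancedBuffer:
    def __init__(self, k):
        self.subsystems = [[] for _ in range(k)]  # each acts as a FIFO queue

    def enqueue(self, packet):
        # pick the subsystem currently holding the fewest packets
        target = min(self.subsystems, key=len)
        target.append(packet)

    def occupancy(self):
        return [len(s) for s in self.subsystems]

buf = BalancedBuffer(k=4)
for p in range(10):
    buf.enqueue(p)
print(buf.occupancy())  # occupancy stays within one packet of perfectly even
```

A production buffer would balance on byte counts and per-queue state rather than packet counts, but the invariant is the same: no subsystem drifts far from the mean load.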
Full Paper

IJCST/43/2/C-1652
61 Efficient and Scalable Replication Discovery in XML Data

M.Bhargavi, R.Naveen, Madhira.Srinivas

Abstract

Duplicate detection is the problem of detecting different entries in a data source representing the same real-world entity. While research abounds in the realm of duplicate detection in relational data, there is yet little work for duplicates in other, more complex data models, such as XML. Our research in XML duplicate detection addresses four major challenges. First, we investigate how object descriptions can be selected automatically, a difficult task in XML where objects and object descriptions are both represented by XML elements. Second, we define new domain-independent duplicate classifiers that take into account not only the data, but also the structural diversity of XML objects. Third, we define comparison strategies that make use of element dependencies to improve efficiency without jeopardizing effectiveness. Finally, we consider scalability by investigating how relational and XML databases can support the duplicate detection process. By considering the problem of XML duplicate detection under the aspects of effectiveness, efficiency and scalability, we believe that our insights and solutions will significantly contribute to solving XML duplicate detection for a wide range of applications.
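The idea of a classifier that weighs both data and structure can be sketched as below. The scoring function and weights are our illustration, not the paper's classifiers:

```python
# Minimal sketch: score two XML elements as duplicates by combining the
# overlap of their text data with the overlap of their structure (tag names),
# so structurally different encodings of the same entity can still match.

import xml.etree.ElementTree as ET

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def duplicate_score(e1, e2, w_data=0.7, w_struct=0.3):
    data1 = [t.strip() for t in e1.itertext() if t.strip()]
    data2 = [t.strip() for t in e2.itertext() if t.strip()]
    tags1 = [c.tag for c in e1.iter()]
    tags2 = [c.tag for c in e2.iter()]
    return w_data * jaccard(data1, data2) + w_struct * jaccard(tags1, tags2)

a = ET.fromstring("<movie><title>Alien</title><year>1979</year></movie>")
b = ET.fromstring("<film><title>Alien</title><released>1979</released></film>")
c = ET.fromstring("<movie><title>Blade Runner</title><year>1982</year></movie>")
print(duplicate_score(a, b) > duplicate_score(a, c))  # same entity wins
```

Here `a` and `b` describe the same movie with different schemas, yet score higher than `a` against a structurally identical but distinct movie `c`.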
Full Paper

IJCST/43/2/C-1653
62 Different Sort of Layouts to Prepare Datasets & Reports in SQL

K.Srinivas Reddy, R.Naveen, Madhira.Srinivas

Abstract

In a data mining project, a significant portion of time is devoted to building a data set suitable for analysis. In a relational database environment, building such a data set usually requires joining tables and aggregating columns with SQL queries; preparing reports and data sets is therefore a difficult task in data mining. Our proposed horizontal aggregations provide several unique features and advantages. First, they represent a template to generate SQL code from a data mining tool. Such SQL code automates writing SQL queries, optimizing them, and testing them for correctness, reducing manual work in the data preparation phase of a data mining project. Second, since the SQL code is automatically generated, it is likely to be more efficient than SQL code written by an end user, for instance a person who does not know SQL well or who is not familiar with the database schema (e.g. a data mining practitioner). Therefore, data sets can be created in less time. Third, the data set can be created entirely inside the DBMS. In modern database environments it is common to export denormalized data sets to be further cleaned and transformed outside the DBMS in external tools (e.g. statistical packages). Unfortunately, exporting large tables outside a DBMS is slow, creates inconsistent copies of the same data, and compromises database security. Therefore, we provide a more efficient, better integrated and more secure solution compared to external data mining tools. Horizontal aggregations require only a small syntax extension to the aggregate functions called in a SELECT statement. Alternatively, horizontal aggregations can be used to generate SQL code from a data mining tool to build data sets for data mining analysis. We propose three fundamental methods to evaluate horizontal aggregations: CASE, exploiting the programming CASE construct; SPJ, based on standard relational algebra operators (SPJ queries); and PIVOT, using the PIVOT operator, which is offered by some DBMSs. Experiments with large tables compare the proposed query evaluation methods. Our CASE method has speed similar to the PIVOT operator and is much faster than the SPJ method. In general, the CASE and PIVOT methods exhibit linear scalability, whereas the SPJ method does not.
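The CASE-based evaluation method mentioned above can be demonstrated on SQLite (which lacks PIVOT, making CASE the natural choice there). The table and column names are illustrative:

```python
# Sketch of a CASE-based horizontal aggregation: transpose the distinct
# values of the grouping column (year) into result columns, yielding one
# row per store with one sales column per year.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales(store TEXT, year INTEGER, amount REAL);
INSERT INTO sales VALUES
  ('A', 2012, 10), ('A', 2013, 20), ('B', 2012, 5), ('B', 2012, 7);
""")

# One CASE expression per distinct value of the transposed column:
query = """
SELECT store,
       SUM(CASE WHEN year = 2012 THEN amount ELSE 0.0 END) AS sales_2012,
       SUM(CASE WHEN year = 2013 THEN amount ELSE 0.0 END) AS sales_2013
FROM sales
GROUP BY store
ORDER BY store;
"""
rows = list(conn.execute(query))
print(rows)  # [('A', 10.0, 20.0), ('B', 12.0, 0.0)]
```

A data mining tool would generate one such CASE expression per distinct value automatically, which is exactly the code-generation template the abstract describes.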
Full Paper

IJCST/43/2/C-1654
63 A Novel Approach for Graphical Password Authentication System

M.Surendhar, M. Srinivas, Madhira.Srinivas

Abstract

The use of text-based passwords is the most common authentication system. This conventional scheme faces usability and security drawbacks that cause trouble for users. For example, users tend to pick passwords that can be easily guessed; conversely, if a password is hard to guess, it is often hard to remember. An alternative system is required to overcome these problems, so authentication schemes that use images as passwords have been proposed. In this paper, we combine the features of these methods. Our proposed system is mainly the combination of Persuasive Cued Click-Points (PCCP) and the click-draw based graphical password scheme (CD-GPS). Users first choose an ordered sequence of five images and then select a single image on which to click-draw their secret. On the remaining four images, they select click points using PCCP features (viewport and shuffle button). At login time, the images appear in the chosen sequence: the user clicks on the images for which PCCP-based password creation was used, and draws the secret on the previously selected image. By adding the secret-drawing feature to PCCP, attackers cannot tell that a secret-drawing technique is used on one of the images; even if they learn that secret drawing is used, they do not know on which image the secret must be drawn. Our proposed system therefore provides higher security than other techniques.
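The click-point verification step common to cued click-point schemes can be sketched as follows. Only the tolerance check is modeled; PCCP's viewport/shuffle persuasion and the CD-GPS drawing step are not, and the tolerance value is an assumption:

```python
# Hedged sketch of the click-point check only: a login click is accepted
# when it falls within a tolerance square around the enrolled point on
# that image, and every image in the ordered sequence must match.

def click_matches(enrolled, attempt, tolerance=9):
    """enrolled/attempt: (x, y) pixel coordinates on one image."""
    return (abs(enrolled[0] - attempt[0]) <= tolerance and
            abs(enrolled[1] - attempt[1]) <= tolerance)

def verify_sequence(enrolled_points, attempt_points, tolerance=9):
    """All images in the ordered sequence must match for login to succeed."""
    return (len(enrolled_points) == len(attempt_points) and
            all(click_matches(e, a, tolerance)
                for e, a in zip(enrolled_points, attempt_points)))

secret = [(120, 88), (40, 200), (310, 150), (75, 60)]
print(verify_sequence(secret, [(118, 90), (41, 198), (305, 155), (75, 60)]))
print(verify_sequence(secret, [(118, 90), (41, 198), (260, 155), (75, 60)]))
```

Real systems store discretized, hashed grid cells rather than raw coordinates, precisely so that a tolerance check is possible without keeping the click points in the clear.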
Full Paper

IJCST/43/2/C-1655
64 Maintaining Quality and Distribution of Multimedia Data over Wireless Networks

Shaik Khasim, Shaik Abdul

Abstract

For real-life multimedia data (video) transmission where many users are interested in the same content, mobile-to-mobile collaboration can be utilized to improve delivery efficiency and decrease infrastructure utilization. Under such collaboration, however, real-life multimedia data transmission requires end-to-end delay bounds. Due to the inherently stochastic nature of wireless fading channels, deterministic delay bounds are prohibitively difficult to guarantee. For a scalable multimedia data (video) structure, an alternative is to provide statistical guarantees using the concept of effective capacity/bandwidth by deriving quality-of-service exponents for each video layer. Using this concept, we formulate the resource allocation problem for general multi-agent multicast network streams and derive the optimal solution that minimizes total energy consumption while guaranteeing a statistical end-to-end delay bound on each network path. A method is described to compute the optimal resource allocation at each node in a distributed fashion. Furthermore, we propose low-complexity approximation algorithms for flow selection from the set of directed acyclic graphs forming the candidate network flows. The flow selection and resource allocation process is adapted for each video frame according to the channel conditions on the network links. Considering different network topologies, results demonstrate that the proposed resource allocation and flow selection algorithms provide notable performance gains with small optimality gaps at a low computational cost.
Full Paper

IJCST/43/2/C-1656
65 Optimization of Video Data Dissemination Over Wireless Networks

Syed Asif, Syed Abdul Haq

Abstract

For real-time video data broadcast where multiple users are interested in the same content, mobile-to-mobile cooperation can be utilized to improve delivery efficiency and reduce network utilization. Under such cooperation, however, real-time video data transmission requires end-to-end delay bounds. Due to the inherently stochastic nature of wireless fading channels, deterministic delay bounds are prohibitively difficult to guarantee. For a scalable video structure, an alternative is to provide statistical guarantees using the concept of effective capacity/bandwidth by deriving quality-of-service exponents for each video layer. Using this concept, we formulate the resource allocation problem for general multihop multicast network flows and derive the optimal solution that minimizes the total energy consumption while guaranteeing a statistical end-to-end delay bound on each network path. A method is described to compute the optimal resource allocation at each node in a distributed fashion. Furthermore, we propose low-complexity approximation algorithms for flow selection from the set of directed acyclic graphs forming the candidate network flows. The flow selection and resource allocation process is adapted for each video frame according to the channel conditions on the network links. Considering different network topologies, results demonstrate that the proposed resource allocation and flow selection algorithms provide notable performance gains with small optimality gaps at a low computational cost.
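The statistical delay guarantee via effective capacity that this abstract relies on can be stated explicitly. This is the standard formulation from the effective-capacity literature, with symbols chosen here for illustration:

```latex
% Effective capacity of the service process S(t) (cumulative bits served
% by time t) for a QoS exponent \theta > 0:
E_C(\theta) \;=\; -\lim_{t \to \infty} \frac{1}{\theta\, t}
  \ln \mathbb{E}\!\left[ e^{-\theta S(t)} \right]

% If a video layer with constant arrival rate \mu is served on a link
% whose effective capacity satisfies E_C(\theta) = \mu, the steady-state
% delay D obeys the statistical bound
\Pr\{ D > D_{\max} \} \;\approx\; e^{-\theta\, \mu\, D_{\max}}
```

A larger QoS exponent θ thus trades off a tighter delay-violation probability against a lower sustainable rate, which is why each scalable video layer is assigned its own exponent.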
Full Paper

IJCST/43/2/C-1657
66 Data Replica Allocation to Overcome Selfish Nodes in Mobile Ad-Hoc Networks

D.Divya Priya, Dr.P.Harini

Abstract

Mobile Ad-Hoc Networks (MANETs) have attracted a lot of attention due to the popularity of mobile devices and the advances in wireless communication technologies. Data accessibility is often an important metric in a MANET. The mobility and resource constraints of mobile nodes may lead to network partitioning or performance degradation. Several data replication techniques have been proposed to minimize performance degradation. Most of them assume that all mobile nodes collaborate fully in terms of sharing their memory space. In reality, however, some nodes may selfishly decide only to cooperate partially, or not at all, with other nodes. These selfish nodes could then reduce the overall data accessibility in the network. Here, we examine the impact of selfish nodes in a mobile ad-hoc network from the perspective of replica allocation, which we term selfish replica allocation. In particular, we develop a selfish node detection algorithm that considers partial selfishness, and novel replica allocation techniques to properly cope with selfish replica allocation. The conducted simulations demonstrate that the proposed approach outperforms traditional cooperative replica allocation techniques in terms of data accessibility, communication cost, and average query delay.
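A degree-of-selfishness measure of the kind the detection algorithm needs can be sketched minimally. The metric and thresholds below are our illustration, not the paper's formulation:

```python
# Illustrative only: rate a node's selfishness from the fraction of
# replica-allocation requests it actually served, then classify it as
# cooperative, partially selfish, or fully selfish.

def selfishness_degree(served, requested):
    """0.0 = fully cooperative, 1.0 = fully selfish."""
    if requested == 0:
        return 0.0          # no evidence yet -> assume cooperative
    return 1.0 - served / requested

def classify(served, requested, partial=0.3, full=0.8):
    d = selfishness_degree(served, requested)
    if d >= full:
        return "selfish"
    if d >= partial:
        return "partially selfish"
    return "cooperative"

print(classify(served=9, requested=10))   # serves almost everything
print(classify(served=5, requested=10))   # shares only part of its memory
print(classify(served=1, requested=10))   # refuses nearly all requests
```

The middle class is the interesting one: a partially selfish node still answers some requests, so a binary cooperative/selfish split would misallocate replicas to it.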
Full Paper

IJCST/43/2/C-1658
67 An Efficient and Effective Mechanism for the Social-Oriented Structure of User Created Multimedia Content

P. Narmada, B.Nagendra Reddy

Abstract

Tagging in online social networks is very popular these days, as it facilitates search and retrieval of the diverse resources available online. However, noisy and spam annotations often make it difficult to perform an efficient search: users may make mistakes in tagging, and irrelevant tags and resources may be maliciously added for advertisement or self-promotion. Since filtering spam annotations and spammers manually is time-consuming, machine learning approaches can be employed to facilitate this process. Content is constantly accumulated by community members on content-sharing platforms. This position paper presents research work towards exploiting social software applications for understanding, semantically enriching, and delivering multimedia content, our ultimate goal being to support online communities. Thus, we propose a system able to search within multimedia data that can further be extended to search within any kind of (partial) document, to achieve a more tightly focused and personalized search. We present a tag-based profiling approach that exploits social tags to identify relevant resources from folksonomies according to the interests of individual users. One-class classifiers are used to learn user interests from resources in the user's personomy and the tags collectively assigned to them.
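A toy one-class relevance filter over tags can illustrate the profiling idea. The centroid-style profile and cosine threshold below are a minimal stand-in, not the paper's one-class classifiers:

```python
# Minimal one-class sketch: learn a user's interest profile as a bag of
# tags over resources in the personomy, then accept a new resource only
# if its tags are similar enough to that profile (spam rarely is).

from collections import Counter
import math

def profile(tagged_resources):
    """tagged_resources: list of tag lists the user has bookmarked."""
    return Counter(tag for tags in tagged_resources for tag in tags)

def cosine(counter_a, counter_b):
    dot = sum(counter_a[t] * counter_b[t] for t in counter_a)
    na = math.sqrt(sum(v * v for v in counter_a.values()))
    nb = math.sqrt(sum(v * v for v in counter_b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_relevant(user_profile, resource_tags, threshold=0.2):
    return cosine(user_profile, Counter(resource_tags)) >= threshold

p = profile([["python", "ml"], ["python", "data"], ["ml", "stats"]])
print(is_relevant(p, ["python", "stats"]))   # overlaps the profile
print(is_relevant(p, ["viagra", "casino"]))  # spam-like, no overlap
```

Only positive examples (the user's own bookmarks) are needed, which is exactly the setting that motivates one-class learning here.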
Full Paper

IJCST/43/2/C-1659
68 Multiparty Access Control for Online Social Networks Model and Mechanisms

Voruganti Santhosh, Guntapally Minni

Abstract

Online Social Network (OSN) portals are among the most visited sites on the Internet. Millions of users interact through them, so these portals play a major role in how people communicate. OSNs offer digital interaction and information sharing, but they also raise many security and privacy issues. In particular, there is at present no mechanism to restrict access to shared data by enforcing the privacy concerns of the multiple users associated with it. This paper proposes a methodology to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. Besides, we present a logical representation that allows us to leverage existing logic solvers to perform various analysis tasks on our model. We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook, and provide a usability study and system evaluation of our approach.
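The core multiparty decision can be sketched very simply. The voting scheme and the sensitivity parameter here are assumptions for illustration; the paper's policy specification is richer:

```python
# Simplified sketch of multiparty authorization: every controller of a
# shared item (owner, tagged users, ...) states permit/deny, and a
# sensitivity-weighted majority resolves conflicts.

def multiparty_decide(decisions, sensitivity=0.5):
    """decisions: dict mapping controller -> True (permit) / False (deny).
    sensitivity in [0, 1]: higher values demand broader agreement."""
    if not decisions:
        return False
    permits = sum(1 for d in decisions.values() if d)
    return permits / len(decisions) >= sensitivity

photo_policy = {"owner": True, "tagged_alice": True, "tagged_bob": False}
print(multiparty_decide(photo_policy, sensitivity=0.5))  # 2 of 3 permit
print(multiparty_decide(photo_policy, sensitivity=0.9))  # stricter setting
```

Raising the sensitivity lets a single tagged user veto exposure of a photo, which is the conflict-resolution trade-off multiparty access control has to make explicit.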
Full Paper

IJCST/43/2/C-1660
69 Towards Data Integrity in Clouds Through Outsourcing Audit Service

B.Sandeep Reddy, K.Praveen, Dr. R.V.Krishnaiah

Abstract

Cloud computing has enabled clients to outsource their data to the cloud and reap advantages such as low cost, location independence, and scalability. After outsourcing data, clients may no longer hold up-to-date copies in local storage, which means they have to trust the cloud service provider for the security of their data. Auditing services can help clients verify that their data has not been tampered with by third parties and can ensure its availability. One such cryptographic technique, Provable Data Possession (PDP), is used to verify the integrity of outsourced data on a cloud server. Zhu et al. presented an interactive protocol for achieving a zero-knowledge proof system; their mechanism for efficient probabilistic query processing and periodic verification reduces the cost of verification. In this paper, we implement the PDP protocol by building a prototype application that ensures efficient audit services.
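The possession-checking idea can be illustrated with a toy challenge-response. This is emphatically not the Zhu et al. protocol: real PDP schemes use homomorphic verification tags and constant-size proofs, which this hash-based sketch does not attempt to reproduce; the block size and challenge layout are our assumptions:

```python
# Toy challenge-response sketch of possession checking: the client keeps a
# few (nonce, block index, digest) triples before outsourcing, then later
# asks the server to recompute a digest over the challenged block.

import hashlib
import random

BLOCK = 256  # bytes per file block (illustrative choice)

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def precompute_challenges(data, count=4, seed=7):
    """Client side, before outsourcing."""
    rng = random.Random(seed)
    file_blocks = blocks(data)
    challenges = []
    for _ in range(count):
        nonce = rng.randbytes(16)
        idx = rng.randrange(len(file_blocks))
        digest = hashlib.sha256(nonce + file_blocks[idx]).hexdigest()
        challenges.append((nonce, idx, digest))
    return challenges

def server_respond(stored_data, nonce, idx):
    """Server side: prove possession of block idx under the given nonce."""
    return hashlib.sha256(nonce + blocks(stored_data)[idx]).hexdigest()

data = bytes(range(256)) * 8                     # the outsourced file
nonce, idx, expected = precompute_challenges(data)[0]
print(server_respond(data, nonce, idx) == expected)   # honest server passes

# Tampering with the challenged block changes the server's response:
tampered = data[:idx * BLOCK] + b"X" + data[idx * BLOCK + 1:]
print(server_respond(tampered, nonce, idx) == expected)
```

The weakness of this naive scheme (a fixed, finite stock of challenges) is exactly what homomorphic tags in real PDP remove, enabling unbounded probabilistic audits.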
Full Paper

IJCST/43/2/C-1661
70 Improved Reversible Watermarking Method using Enhancement and Clustering

B. Nagendra, M. Anitha, I.Suneetha

Abstract

Reversible watermarking methods are now common in multimedia copyright protection, providing robustness while leaving the cover image undamaged. However, the usual methods have problems with reversibility, robustness, and invisibility. To overcome these problems, this paper proposes an improved reversible watermarking method using wavelet-domain statistical quantity histogram enhancement and clustering (WDSQH-EC). The proposed method builds new embedding and extraction processes, and security is provided through key generation. The results are satisfactory in terms of invisibility, robustness, and reversibility. During watermark embedding, pixel-wise enhancement is performed to increase practical applicability and to balance the trade-off between robustness and invisibility. The method shows its effectiveness once enhancement has been applied, though it must be made more robust against stronger compression and unintentional attacks.
Full Paper

IJCST/43/2/C-1662
71 An Effective Method for Detection of Moving Objects From Video

M. Nagaraju, M. Bala Tripura Sundary, B. Rakeshchandra

Abstract

Moving object detection is a low-level but important task for any visual surveillance system. One aim of this paper is to describe various approaches to moving object detection, such as background subtraction and temporal differencing, along with the pros and cons of these techniques. A statistical mean technique [10] has been used to overcome the problems of the earlier techniques, but even the statistical mean method suffers from the superfluous effects of foreground objects. In this paper, our new method tries to overcome this superfluous effect and also reduces the computational complexity to some extent. A robust algorithm for automatic noise detection and removal from moving objects in video sequences is presented. The algorithm assumes a static camera.
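The statistical-mean background model mentioned above can be sketched on a toy per-pixel frame. The learning rate and threshold are illustrative parameters, not the paper's values:

```python
# Sketch of the statistical-mean idea: keep a running mean per pixel as
# the background model, and mark pixels that deviate from it by more
# than a threshold as foreground (a static camera is assumed).

def update_background(bg, frame, alpha=0.1):
    """Running mean: bg <- (1 - alpha) * bg + alpha * frame, per pixel."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30):
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

frames = [
    [10, 10, 10, 10],          # empty scene
    [10, 10, 10, 10],
    [10, 200, 10, 10],         # a bright object enters at pixel 1
]

bg = frames[0][:]
for frame in frames[1:]:
    mask = foreground_mask(bg, frame, threshold=30)
    bg = update_background(bg, frame)   # slowly absorb scene changes
print(mask)   # only the moving-object pixel is flagged
```

The small alpha is what causes the superfluous-foreground problem the paper targets: an object that lingers is gradually absorbed into the mean and leaves a ghost when it departs.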
Full Paper

IJCST/43/2/C-1663