INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST)-VOL IV ISSUE IV, VER. 1, OCT. TO DEC, 2013


  S.No. Research Topic Paper ID
    1 Dynamic Topology Control in Mobile Ad-Hoc Networks
Ajay Bhaskar Dasari, G.Krishna Kishore

Abstract

Many algorithms have been proposed for wireless networks to reduce the level of interference between nodes and to increase network capacity. In this paper we propose a Critical Neighbor Algorithm that reduces the level of interference by calculating the energy of each node in the network. Mobile Ad-Hoc Networks (MANETs) provide multi-hop connectivity between self-configuring and self-managing mobile hosts. These networks are characterized by a dynamic network topology, and topology control increases the performance of the network. Using the Critical Neighbor algorithm, the topology can be controlled in wireless networks.
Full Paper

IJCST/44/1/D-1732
    2 A Novel Approach for Preventing Jamming Attacks in Wireless Sensor Networks
CH Hima Bindu, G Surekha

Abstract

Understanding and defending against jamming attacks has long been a problem of interest in wireless communication and radar systems. In wireless ad hoc and sensor networks using multihop communication, the effects of jamming at the physical layer resonate into the higher-layer protocols, for example by increasing collisions and contention at the MAC layer, interfering with route discovery at the network layer, increasing latency and impacting rate control at the transport layer, and halting or freezing at the application layer. Adversaries that are aware of higher-layer functionality can leverage any available information to improve the impact or reduce the resource requirement for attack success. For example, jammers can synchronize their attacks with MAC protocol steps, focus attacks on specific geographic locations, or target packets from specific applications. In this work, we address the problem of selective jamming attacks in wireless networks. In these attacks, the adversary is active only for a short period of time, selectively targeting messages of high importance. We illustrate the advantages of selective jamming in terms of network performance degradation and adversary effort by presenting two case studies: a selective attack on TCP and one on routing. We show that selective jamming attacks can be launched by performing real-time packet classification at the physical layer. To mitigate these attacks, we develop three schemes that prevent real-time packet classification by combining cryptographic primitives with physical-layer attributes. We analyse the security of our methods, augmented with public-key encryption algorithms (e.g., RSA), and evaluate their computational and communication overhead.
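As an illustration of the general idea behind preventing real-time packet classification (not the paper's exact schemes), the sketch below shows a commit-then-reveal style of packet hiding: the sender transmits the packet encrypted under a fresh random key together with a commitment, and discloses the key only after the transmission completes, so a reactive jammer cannot classify the packet while it is still on the air. The keystream construction and function names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact scheme): hide packet contents until
# transmission completes by sending E_k(packet) plus a commitment first and revealing
# k afterwards, so a reactive jammer cannot classify the packet in real time.
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Simple SHA-256 counter-mode keystream (illustrative only)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def hide_packet(packet: bytes):
    """Sender side: returns (ciphertext, commitment) to transmit, plus the key."""
    key = os.urandom(16)
    ciphertext = bytes(p ^ s for p, s in zip(packet, keystream(key, len(packet))))
    commitment = hashlib.sha256(key).digest()   # binds the sender to the key
    return ciphertext, commitment, key          # key is disclosed only at the end

def reveal_packet(ciphertext: bytes, commitment: bytes, key: bytes) -> bytes:
    """Receiver side: verify the commitment, then decrypt."""
    assert hashlib.sha256(key).digest() == commitment, "commitment mismatch"
    return bytes(c ^ s for c, s in zip(ciphertext, keystream(key, len(ciphertext))))

if __name__ == "__main__":
    ct, com, k = hide_packet(b"ROUTE-REQUEST dst=10.0.0.7 seq=42")
    print(reveal_packet(ct, com, k))
```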
Full Paper

IJCST/44/1/D-1733
    3 Personalized Image Search from Image Sharing Sites: Ontological Search and Hybrid Ranking Algorithm
K. Rambabu, T Sudha Rani

Abstract

Nowadays internet use has increased to a great extent; as a result, large amounts of data are uploaded and downloaded by internet users and a large volume of data is processed. Users can create, share and comment on media using social sharing websites like Flickr and YouTube, so with such a huge amount of data on the internet it becomes hard to find the right information we are searching for. The metadata generated by users supports sharing and organizing multimedia content and provides useful information for multimedia management. For this reason we propose a personalized image search engine that uses ontological search and hybrid ranking algorithms. Using these algorithms, the search can be personalized within a specific limit. This system has an advantage over Google, which returns non-personalized information that may not match the user's desired requirements. The basic premise of our system is to combine user preferences and query-related search intent into user-specific topics. The proposed system consists of two parts: the first is a hybrid re-ranking algorithm based on an ontology of images as well as text, and the second builds ontological user profiles for personalized image search.
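To make the combination concrete, here is a hedged sketch of a hybrid re-ranking pass that mixes a query-relevance score with an ontology-based user-profile affinity. The weighting factor, scoring functions and data layout are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of hybrid re-ranking: combine query relevance (e.g., text/tag match)
# with an ontological user-profile affinity; alpha and both scoring functions are
# illustrative assumptions, not the paper's exact formulation.
def hybrid_rerank(results, user_profile, alpha=0.6):
    """results: list of dicts with 'id', 'relevance' in [0,1] and 'concepts' (set).
    user_profile: dict mapping ontology concept -> preference weight in [0,1]."""
    def profile_affinity(concepts):
        if not concepts:
            return 0.0
        return sum(user_profile.get(c, 0.0) for c in concepts) / len(concepts)

    scored = [(alpha * r["relevance"] + (1 - alpha) * profile_affinity(r["concepts"]), r)
              for r in results]
    return [r for _, r in sorted(scored, key=lambda x: x[0], reverse=True)]

results = [
    {"id": "img1", "relevance": 0.9, "concepts": {"animal", "dog"}},
    {"id": "img2", "relevance": 0.7, "concepts": {"sport", "football"}},
]
profile = {"sport": 0.9, "football": 0.8, "dog": 0.1}
print([r["id"] for r in hybrid_rerank(results, profile)])
```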
Full Paper

IJCST/44/1/D-1734
    4 Exploring Frequent Patterns of Human Interaction Using Tree Based Mining
K.Surya Ram Prasad, K.Devi Priya

Abstract

Data mining is widely applied in the database research area in order to extract frequent correlations of values from both structured and semi-structured datasets. In this work we describe an approach that uses decision-tree learning to mine association rules from the way people interact in a meeting. We propose a mining method to extract frequent patterns of human interaction based on the captured content of face-to-face meetings. Human interactions, such as proposing an idea, giving comments, and expressing a positive opinion, indicate user intention toward a topic or role in a discussion. The human interaction flow in a discussion session is represented as a tree, and decision-tree interaction mining algorithms are designed to analyze the structures of the trees and to extract interaction flow patterns. The experimental results show that we can successfully extract several interesting patterns that are useful for the interpretation of human behaviour in meeting discussions, such as determining frequent interactions, typical interaction flows, and relationships between different types of interactions.
Full Paper

IJCST/44/1/D-1735
    5 Effective Clustering With Multiviewpoint Based Similarity Measure
Manjeet Singh, M. Rambhupal

Abstract

The aim of clustering is to find intrinsic structures in data and organize them into meaningful subgroups. The main concept is a similarity/dissimilarity measure taken from multiple viewpoints. Existing algorithms for text mining use a single viewpoint for measuring similarity between objects; their drawback is that the resulting clusters do not exhibit the complete set of relationships among objects. To overcome this drawback, we propose a new similarity measure, a hierarchical multi-viewpoint based similarity measure, to ensure that the clusters show all relationships among objects. We also propose two clustering methods. The empirical study confirms the hypothesis that multi-viewpoint similarity brings out more informative relationships among objects, and thus forms more meaningful clusters, and that it can be used in real-time applications where text documents are searched or processed frequently.
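As a hedged sketch of the core idea, the snippet below measures the similarity of two documents relative to other documents taken as viewpoints, instead of from a single origin as cosine similarity does. The averaging and normalization shown here are illustrative; the paper's exact measure may differ.

```python
# Hedged sketch of a multi-viewpoint similarity (MVS): the similarity of documents
# di and dj is averaged over viewpoints dh drawn from outside their (candidate)
# cluster, rather than measured from a single origin as in cosine similarity.
import numpy as np

def multi_viewpoint_similarity(di, dj, viewpoints):
    """di, dj: 1-D unit-normalized document vectors; viewpoints: 2-D array with one
    row per document outside the cluster of di and dj."""
    sims = [(di - dh) @ (dj - dh) for dh in viewpoints]
    return float(np.mean(sims))

rng = np.random.default_rng(0)
docs = rng.random((5, 8))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)   # unit-normalize rows
print(multi_viewpoint_similarity(docs[0], docs[1], docs[2:]))
```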
Full Paper

IJCST/44/1/D-1736
    6 Optimistic Query Refresh
Srikanth Pala, T Sudha Rani

Abstract

Processing continuous queries that are updated dynamically, while satisfying the client's requirement with the least number of refresh messages, has always been a concern. To process such queries, multi-queries are used to monitor changes in dynamic data from multiple data sources. Clients desire to obtain aggregated values from multiple data sources, for example the average temperature of a particular location. We present a scalable technique to answer clients' multiple queries using a pull-based mechanism: the multi-query is decomposed into independent sub-queries, the sub-queries are executed in parallel, and each sub-query is dynamically allocated to a data aggregator based on the dimensions of the data items in the query. We investigated various techniques for efficiently executing multi-query workloads and for providing accurate results to the user immediately, without time delay. Performance results using real-world traces show that the time delay is reduced when compared to executing sub-queries on fixed aggregators.
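As a hedged sketch of the decomposition step, the snippet below splits an average-temperature query over many sources into per-aggregator sub-queries (partial sums and counts) that run in parallel and are combined at the client. The aggregator interface, grouping and threading model are assumptions for illustration.

```python
# Hedged sketch: decompose a multi-source AVG query into per-aggregator sub-queries
# (partial sum and count), execute them in parallel, and combine at the client.
# The aggregator API (get_value) and the grouping are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def sub_query(aggregator, items):
    """One sub-query: ask a single data aggregator for its items' partial aggregate."""
    values = [aggregator.get_value(item) for item in items]
    return sum(values), len(values)

def average_over_aggregators(plan):
    """plan: list of (aggregator, items) pairs, one sub-query per aggregator."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda p: sub_query(*p), plan))
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count if count else float("nan")

class FakeAggregator:            # stand-in for a real data aggregator node
    def __init__(self, readings): self.readings = readings
    def get_value(self, item):   return self.readings[item]

a1 = FakeAggregator({"s1": 21.0, "s2": 23.0})
a2 = FakeAggregator({"s3": 25.0})
print(average_over_aggregators([(a1, ["s1", "s2"]), (a2, ["s3"])]))  # 23.0
```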
Full Paper

IJCST/44/1/D-1737
    7 Controlling Wi-Fi Test Setup Through Automation
L.N.S.Aditya Kompella, D.Srinivas

Abstract

The main objective of this project is to perform Wi-Fi testing on a Device Under Test (DUT) through automation scripts. The Wireless Ethernet Compatibility Alliance (WECA) is the industry organization that certifies 802.11 products that are deemed to meet a base standard of interoperability. This paper is targeted at individuals who are already familiar with traditional wired networks and who wish to become familiar with wireless networks and some of the issues concerning their development. Once confined only to specialty functions, wireless networks are now beginning to proliferate in the form of IEEE 802.11b, 802.11a, 802.11g and 802.11n (Wi-Fi™) wireless LANs, allowing users to roam throughout a building with a laptop while still remaining connected to their network. Wi-Fi™ networks are described, as well as how the 802.11 standard is evolving to supply higher data rates and better quality of service.
Full Paper

IJCST/44/1/D-1738
    8 Dynamic Detection of Susceptible Dynamic Component Loadings
G.Syamala, AVS.Sudhakar Rao

Abstract

Dynamic loading of software components (e.g., libraries or modules) is a widely used mechanism for improved system modularity and flexibility. Resolving the correct component is critical for reliable and secure software execution; however, programming mistakes may lead to unintended or even malicious components being resolved and loaded. In particular, dynamic loading can be hijacked by placing an arbitrary file with the specified name in a directory searched before resolving the target component. Although this issue has been known for quite some time, it was not considered serious because exploiting it requires access to the local file system on the vulnerable host. Recently, such vulnerabilities have started to receive considerable attention as their remote exploitation became realistic, and it is now important to detect and fix them. In this paper, we present the first automated technique to detect vulnerable and unsafe dynamic component loadings. Our analysis has two phases: (1) collecting runtime information on component loading by using dynamic binary instrumentation (online phase), and (2) analyzing the collected information to detect vulnerable component loadings (offline phase). For evaluation, we implemented our technique to detect vulnerable and unsafe component loadings in popular software on Microsoft Windows and Linux. Our evaluation results show that unsafe component loading is prevalent in software on both OS platforms and is more severe on Microsoft Windows. In particular, our tool detected more than 4,000 unsafe component loadings in our evaluation, some of which can lead to remote code execution on Microsoft Windows.
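As a hedged sketch of what the offline phase might check, the snippet below flags a loading as hijackable when a directory probed before the resolving directory does not already contain the target file and is writable by the current (non-privileged) user. The record format (component, probed directories, resolving directory) is an assumption about what the online instrumentation phase would log.

```python
# Hedged sketch of the offline analysis phase: a component loading is flagged as
# unsafe/hijackable if a directory searched *before* the resolving directory does
# not already hold the target file and is writable by an unprivileged user.
import os
import tempfile

def unsafe_loadings(load_records):
    findings = []
    for component, probed_dirs, resolved_dir in load_records:
        for d in probed_dirs:
            if d == resolved_dir:
                break                      # directories probed after resolution are irrelevant
            target = os.path.join(d, component)
            if not os.path.exists(target) and os.access(d, os.W_OK):
                findings.append((component, d, resolved_dir))
    return findings

if __name__ == "__main__":
    hijackable = tempfile.mkdtemp()        # stands in for a writable directory on the search path
    records = [("libfoo.so", [hijackable, "/usr/lib"], "/usr/lib")]
    for comp, hijack_dir, real_dir in unsafe_loadings(records):
        print(f"{comp}: resolved from {real_dir}, but hijackable via {hijack_dir}")
```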
Full Paper

IJCST/44/1/D-1739
    9 Simulation of POR in Unfailing Data Delivery for Highly Self-Motivated MANETs
G.Venkateswarlu, D.Srinivas, K.Ravikumar

Abstract

This paper deals with the problem of delivering data packets in highly self-motivated MANETs in an unfailing and timely manner. Most existing ad hoc routing protocols are vulnerable to node mobility, mainly for large-scale networks. Driven by this issue, we propose a proficient Position-based Opportunistic Routing (POR) protocol which takes advantage of the stateless property of geographic routing and the broadcast nature of the wireless medium. When a data packet is sent out, some of the neighbor nodes that have overheard the transmission serve as forwarding candidates and take turns forwarding the packet if it is not transmitted by the specific best forwarder within a certain period of time. By using such in-the-air backup, communication is maintained without being interrupted: the extra latency incurred by local route recovery is greatly reduced and the duplicate transmission caused by packet reroute is also decreased. For the case of a communication gap, a Virtual Destination-based Void Handling (VDVH) scheme is additionally proposed to work together with POR. Both theoretical analysis and simulation results show that POR achieves excellent performance even under high node mobility with acceptable overhead, and that the new void handling scheme also works well.
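To make the forwarding-candidate idea concrete, here is a hedged sketch: neighbors that overheard the packet and are geographically closer to the destination than the sender are ranked by progress, and each waits a back-off proportional to its rank before forwarding, suppressing its copy if a better-ranked candidate forwards first. The ranking rule and slot length are illustrative assumptions.

```python
# Hedged sketch of POR-style candidate selection: neighbors closer to the destination
# than the sender become forwarding candidates, ranked by remaining distance; lower
# rank means shorter back-off, so the best forwarder normally sends first and the
# rest cancel. The back-off slot length is an illustrative assumption.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def forwarding_candidates(sender, neighbors, destination, slot_ms=5):
    """neighbors: dict node_id -> (x, y). Returns [(node_id, backoff_ms)] by rank."""
    d_sender = dist(sender, destination)
    candidates = [(nid, dist(pos, destination))
                  for nid, pos in neighbors.items()
                  if dist(pos, destination) < d_sender]       # positive progress only
    candidates.sort(key=lambda c: c[1])                       # best progress first
    return [(nid, rank * slot_ms) for rank, (nid, _) in enumerate(candidates)]

sender, destination = (0, 0), (100, 0)
neighbors = {"A": (30, 5), "B": (20, -10), "C": (-15, 0)}     # C makes no progress
print(forwarding_candidates(sender, neighbors, destination))
# [('A', 0), ('B', 5)] -> A forwards immediately; B forwards only if it never hears A
```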
Full Paper

IJCST/44/1/D-1740
    10 An Efficiently Detecting Most Frequent Patterns in People Interaction in Meeting
K.S N Sushma, T. Raghunatha Reddy

Abstract

Human interaction is one of the most important characteristics of group social dynamics in meetings. Mining human interaction in meetings is useful to identify how a person reacts in different situations: behavior represents the nature of the person, and mining helps to analyze how the person expresses his/her opinion. Detecting such semantic knowledge is significant. Meeting interactions are categorized as propose, comment, acknowledgement, request-information, ask-opinion, positive opinion and negative opinion. Semantic knowledge of meetings can be revealed by discovering interaction patterns from these meetings. An existing method mines interaction patterns from meetings using tree structures. However, such a tree-based method may not capture all kinds of triggering relations between interactions, and it may not distinguish a participant of a certain rank from another participant of a different rank in a meeting. Hence, the tree-based method may not be able to find all interaction patterns, such as those about correlated interaction. In this paper, we propose to mine interaction patterns from meetings using an alternative data structure, namely a Directed Acyclic Graph (DAG). Specifically, a DAG captures both temporal and triggering relations between interactions in meetings. Moreover, to distinguish one participant of a certain rank from another, we assign weights to nodes in the DAG. As such, a meeting can be modeled as a weighted DAG, from which weighted frequent interaction patterns can be discovered. Experimental results showed the effectiveness of our proposed DAG-based method for mining interaction patterns from meetings.
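As a hedged sketch of the representation, the snippet below models one meeting as a weighted DAG whose nodes are interactions (weighted by the participant's rank) and whose edges are triggering relations, then counts weighted triggering patterns across meetings. The node identifiers, rank weights and the simple edge-pattern counting are illustrative assumptions, not the paper's full mining algorithm.

```python
# Hedged sketch: model one meeting as a weighted DAG whose nodes are interactions
# (type + participant rank weight) and whose edges are triggering relations, then
# count weighted (type -> type) triggering patterns across meetings.
from collections import Counter

RANK_WEIGHT = {"manager": 1.0, "engineer": 0.6, "intern": 0.3}   # assumed weights

def build_meeting_dag(interactions, triggers):
    """interactions: list of (node_id, type, participant_rank);
    triggers: list of (from_id, to_id) triggering relations (assumed acyclic)."""
    nodes = {nid: {"type": t, "weight": RANK_WEIGHT[rank]} for nid, t, rank in interactions}
    edges = [(u, v) for u, v in triggers if u in nodes and v in nodes]
    return nodes, edges

def weighted_edge_patterns(dags):
    """Count (type -> type) triggering patterns, weighted by the triggering node."""
    counts = Counter()
    for nodes, edges in dags:
        for u, v in edges:
            counts[(nodes[u]["type"], nodes[v]["type"])] += nodes[u]["weight"]
    return counts

meeting = build_meeting_dag(
    [(1, "propose", "manager"), (2, "comment", "engineer"), (3, "ask-opinion", "intern")],
    [(1, 2), (1, 3)],
)
print(weighted_edge_patterns([meeting]).most_common())
```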
Full Paper

IJCST/44/1/D-1741
    11 Efficiently Identifying and Resolving Firewall Conflicting Policies
A.Mohan Balaji, D.Gopi Charan Tej

Abstract

Firewalls are core elements in network security. However, managing firewall rules, particularly in multi-firewall enterprise networks, has become a complex and error-prone task. Firewall filtering rules have to be written, ordered, and distributed carefully in order to avoid firewall policy anomalies that might cause network vulnerability. Therefore, inserting or modifying filtering rules in any firewall requires thorough intra-firewall and inter-firewall analysis to determine the proper rule placement and ordering in the firewalls. Firewalls are a widely deployed security mechanism to ensure the security of private networks in most businesses and institutions, and the effectiveness of the protection provided by a firewall mainly depends on the quality of the policy configured in it. However, designing and managing firewall policies are often error-prone due to the complex nature of firewall configurations as well as the lack of systematic analysis mechanisms and tools. This paper presents an innovative anomaly management framework for firewalls that adopts a rule-based segmentation technique to identify policy anomalies and derive effective anomaly resolutions. The framework also provides visual views of firewall policies and rules, which gives users a powerful means of inspecting firewall policies.
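As a hedged illustration of one anomaly such analysis uncovers, the sketch below detects rule shadowing: a later rule is shadowed when an earlier rule covers every packet it matches but prescribes a different action. The range-based rule representation and the pairwise check are illustrative assumptions, not the paper's full segmentation technique.

```python
# Hedged sketch of detecting one classic firewall anomaly: rule shadowing, where an
# earlier rule covers all packets of a later rule but has a different action. Rules
# are modeled as inclusive (low, high) ranges over src, dst and port for illustration.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    src: tuple      # (low, high) integer-encoded source addresses
    dst: tuple      # (low, high) integer-encoded destination addresses
    port: tuple     # (low, high) destination ports
    action: str     # "allow" or "deny"

def covers(outer, inner):
    return all(o[0] <= i[0] and i[1] <= o[1]
               for o, i in [(outer.src, inner.src), (outer.dst, inner.dst), (outer.port, inner.port)])

def shadowing_anomalies(rules):
    """Return (earlier_rule, shadowed_rule) pairs with conflicting actions."""
    anomalies = []
    for i, earlier in enumerate(rules):
        for later in rules[i + 1:]:
            if covers(earlier, later) and earlier.action != later.action:
                anomalies.append((earlier.name, later.name))
    return anomalies

rules = [
    Rule("r1", (0, 2**32 - 1), (10, 10), (0, 65535), "deny"),
    Rule("r2", (100, 200),     (10, 10), (80, 80),   "allow"),   # never matched: shadowed by r1
]
print(shadowing_anomalies(rules))   # [('r1', 'r2')]
```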
Full Paper

IJCST/44/1/D-1742
    12 Cancer Classification Using Transductive Extreme Learning Machine
Dr. K.Ananda Kumar, S.Senthil Kumar

Abstract

Cancer classification using gene expression data stands out from previous classification problems due to its unique nature and application domain. We hope to gain some insight into the problem of cancer classification to aid the further development of more effective and efficient classification algorithms. With the development of microarray technology, a number of cancers can be identified. Previously, a number of techniques have been used, but they did not show good accuracy. A novel approach that combines feature (gene) selection and a Transductive Extreme Learning Machine (TELM) is proposed; the selected genes of the microarray data are then exploited to design the TELM. In this paper two types of feature selection are used for cancer classification, Consistency-Based Feature Selection (CBFS) and Signal-to-Noise Ratio (SNR), with the Leukemia and Lymphoma data sets. Experimental results confirm the effectiveness of the proposed technique, for cancer classification as well as gene-marker identification, compared to the TSVM (Transductive Support Vector Machine).
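As a hedged sketch of the signal-to-noise-ratio selection step, each gene is scored by the separation of its class-conditional means relative to the class standard deviations, and the top-scoring genes are passed to the downstream classifier. The epsilon guard and the choice of k are assumptions for illustration.

```python
# Hedged sketch of Signal-to-Noise Ratio (SNR) gene selection: score each gene by
# |mu_class1 - mu_class0| / (sigma_class0 + sigma_class1) and keep the top-k genes
# for the downstream (T)ELM classifier.
import numpy as np

def snr_scores(X, y):
    """X: (samples, genes) expression matrix; y: binary labels (0/1)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    sd0, sd1 = X[y == 0].std(axis=0), X[y == 1].std(axis=0)
    return np.abs(mu1 - mu0) / (sd0 + sd1 + 1e-12)

def select_top_genes(X, y, k=50):
    order = np.argsort(snr_scores(X, y))[::-1]
    return order[:k]                       # indices of the k most discriminative genes

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 200))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :5] += 2.0                       # make the first 5 genes informative
print(select_top_genes(X, y, k=5))
```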
Full Paper

IJCST/44/1/D-1743
    13 Scalable and Reliable Learning of the Similar Behaviours in Social Media Network
T.Srikanth, A.K.Mohana Rao

Abstract

Nowadays social networks play a vital role in the world. These networks have various elements that people use to obtain content as well as services. Given information about some individuals in a network, how can we infer the behavior of unobserved individuals in the same network from the information of those actors? Many social media tasks can be connected to the problem of cooperative behavior forecasting. Since connections in a social network represent various kinds of relations, a learning framework based on social elements is introduced. With sparse social elements, the proposed approach can efficiently handle networks of millions of actors while demonstrating forecast performance comparable to other, non-scalable methods. The aim of this study of cooperative behaviour is to understand how individuals behave in a social networking environment. Online social networks play an important role in everyday life for many people, and social media has reshaped the way in which people interact with each other. A social-element-based approach has been shown to be effective in addressing the heterogeneity of connections present in social media. However, the networks in social media are normally of colossal size, involving hundreds of thousands of actors.
Full Paper

IJCST/44/1/D-1744
    14 RMTF: Personalized Image Search from Photo Sharing Websites
Bestha Lava Kumar, GVNKV Subba Rao

Abstract

Social sharing websites like Flickr and YouTube allow users to create, share, tag, annotate, and comment on media. The large amount of user-generated metadata facilitates sharing and organizing multimedia content and provides useful information to improve media retrieval and management. The web search experience is improved by generating the returned list according to the user's search intent using personalized search. In this paper, we propose a model that simultaneously considers the user and query relevance to learn personalized image search. The basic idea of this work is to embed user preferences and query-related search intent into user-specific topic spaces.
Full Paper

IJCST/44/1/D-1745
    15 MPAC: An Efficient Model and Mechanisms for OSN
Voruganti Rajesh, GVNKV SUBBA RAO

Abstract

Online Social Networks (OSNs) have experienced tremendous growth in recent years and have become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interactions and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. In addition, we present a logical representation of our access control model which allows us to leverage the features of existing logic solvers to perform various analysis tasks on our model.
Full Paper

IJCST/44/1/D-1746
    16 Privacy-Preserving Data Aggregation in Wireless Sensor Networks
Y.Kethan Harish, Suneel Kumar Badugu, A. N. V. V.Bhadram

Abstract

Security issues such as confidentiality, data integrity, and freshness in data aggregation become crucial when a WSN is deployed in a remote or hostile environment where sensors are prone to node failures and compromises. There is currently active research interest in securing data aggregation in WSNs, and with this in mind the security issues in data aggregation for WSNs are discussed in this paper. Recent advances in Wireless Sensor Networks (WSNs) have led to several new promising applications, including habitat monitoring and target tracking. However, data communication between nodes consumes a large portion of the entire energy consumption of a WSN. Consequently, data aggregation techniques can significantly help to reduce energy consumption by eliminating redundant data travelling back to the base station. We then explain the adversarial model that can exist in any aggregation protocol. After that, the state-of-the-art secure data aggregation schemes are surveyed and classified into two categories based on the number of aggregator nodes and the existence of a verification phase. Finally, a conceptual framework is proposed to provide new designs with the minimum security requirements against a certain type of adversary. This framework provides a better understanding of these schemes and facilitates the evaluation process.
Full Paper

IJCST/44/1/D-1747
    17 Optimizing Query With Incremental Indexing to Prepare Datasets from Database
N. Muralikrishna, K.R.K.Satheesh

Abstract

Web search engines have popularized the keyword-based search paradigm. While traditional database management systems offer strong query languages, they do not enable keyword-based search. In this paper, we discuss HierarchicalIndexer, a system that enables keyword-based search over databases. HierarchicalIndexer is implemented using a commercial online database and web server and enables users to work through a browser front-end. We identify the difficulties and discuss the implementation of our system, as well as the results of an extensive experimental analysis.
Full Paper

IJCST/44/1/D-1748
    18 Efficiently Controlling the Topology and Communicate with the Mobile Ad-Hoc Networks
Soundarrajan D S, G.Padmarao

Abstract

Joint Communication (JC) allows nodes to save power and extend transmission coverage by letting multiple nodes simultaneously transmit the same packet to a receiver so that the combined signal at the receiver can be correctly decoded. However, prior research work on topology control considers JC only in the aspect of energy saving, not that of coverage extension. We identify the challenges in the development of a centralized topology control scheme, named Joint Bridges, which reduces the transmission power of nodes as well as increases system connectivity. Since JC can reduce transmission power and extend transmission coverage, it has been considered in topology control protocols; however, prior work on topology control with JC focuses only on maintaining system connectivity and minimizing the transmission power of each node, while ignoring the energy efficiency of paths in the constructed topologies. This may cause inefficient routes and hurt overall system performance. In this paper, to address this problem, we introduce a new topology control problem, energy-efficient topology control with joint communication, and propose two topology control algorithms to build joint energy spanners in which the energy efficiency of individual paths is guaranteed. Simulation results confirm the good performance of the proposed algorithms.
Full Paper

IJCST/44/1/D-1749
    19 Design of Solid Security Routing Protocol for MANET in Military Application
M.Kavitha, S.Surekha

Abstract

In this paper, we define solid secrecy requirements for privacy-preserving routing in MANETs. We propose an unobservable secure routing scheme, USOR, to provide complete unlinkability and content unobservability. USOR is efficient, as it uses a novel combination of group signature and ID-based encryption for route discovery. Security analysis demonstrates that USOR can well protect user privacy against both inside and outside attackers.
Full Paper

IJCST/44/1/D-1750
    20 Efficient and Scalable Distributed Packet Buffers Architecture and Load-Balancing Algorithm for High Bandwidth Links
K. Rama Krishna, M. Srinivas, Madhira.Srinivas

Abstract

High-speed routers rely on well-designed packet buffers that support multiple queues and provide large capacity and short response times. Some researchers have suggested combined SRAM/DRAM hierarchical buffer architectures; however, these architectures suffer from either a large SRAM requirement or high time complexity in memory management. In our research, we introduce an efficient, novel and scalable distributed packet buffer architecture. Two fundamental issues need to be addressed to make this architecture feasible: (a) how to minimize the overhead of an individual packet buffer, and (b) how to design scalable packet buffers using independent buffer subsystems. We address these problems by first designing an efficient compact buffer that reduces the SRAM size requirement by (k-1)/k. Then, we present a feasible way of coordinating multiple subsystems with a load-balancing algorithm that maximizes the overall system performance. Our load-balancing algorithm and the distributed packet buffer architecture can easily scale to meet the buffering needs of high-bandwidth links and satisfy the requirements of scale and support for multiple queues.
Full Paper

IJCST/44/1/D-1751
    21 Optimal Overlay Planning for Continuous Data Aggregation Queries
Divya.Nakkala, T.Sunitha

Abstract

In many applications, including location-based services, queries may not be precise. In this paper, we study the problem of efficiently computing range aggregates in a multidimensional space when the query location is uncertain. Typically a user desires to obtain the value of some aggregation function over distributed data items. We present a low-cost, scalable technique to answer continuous aggregation queries using a network of aggregators of dynamic data items; in such a network, each data aggregator serves a set of data items at specific coherencies. Just as various fragments of a dynamic webpage are served by one or more nodes of a content distribution network, our technique involves decomposing a client query into sub-queries and executing the sub-queries on judiciously chosen data aggregators with their individual sub-query incoherency bounds. We provide a technique for obtaining the optimal set of sub-queries, with their incoherency bounds, that satisfies the client query's coherency requirement with the least number of refresh messages sent from the aggregators to the client. For estimating the number of refresh messages, we build a query cost model which can be used to estimate the number of messages required to satisfy the client-specified incoherency bound. Performance results using real-world traces show that our cost-based query planning leads to queries being executed using less than one third the number of messages required by existing schemes.
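As a hedged sketch of the cost intuition, the refresh-message rate of a sub-query grows with the data dynamics of its items and shrinks as its incoherency bound grows, so the client's overall bound should be divided unevenly among sub-queries. The dynamics/bound cost form and the square-root-proportional split below are illustrative assumptions, not the paper's exact cost model.

```python
# Hedged sketch of incoherency-bound allocation: estimate refresh messages per
# sub-query as (data dynamics / incoherency bound) and split the client's overall
# bound so the estimated total is small (sqrt-proportional split minimizes
# sum(d_i / b_i) subject to a fixed total bound).
import math

def allocate_bounds(dynamics, client_bound):
    """dynamics: estimated data dynamics per sub-query; returns per-sub-query bounds
    that sum to client_bound."""
    weights = [math.sqrt(d) for d in dynamics]
    total = sum(weights)
    return [client_bound * w / total for w in weights]

def estimated_messages(dynamics, bounds):
    return sum(d / b for d, b in zip(dynamics, bounds))

dynamics = [4.0, 1.0, 0.25]                 # volatile, moderate, nearly static sub-queries
bounds = allocate_bounds(dynamics, client_bound=3.0)
print([round(b, 2) for b in bounds], round(estimated_messages(dynamics, bounds), 2))
# versus a naive equal split of the bound:
print(round(estimated_messages(dynamics, [1.0, 1.0, 1.0]), 2))
```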
Full Paper

IJCST/44/1/D-1752
    22 A Feedback based Dynamic Load Balancing Approach for Distributed Computing Network
Ayush Singhal, Abhinav Goel, Niraj Singhal

Abstract

The demand for high-performance computing continues to increase every day. With the availability of high-speed networks, a large number of geographically distributed computational elements (CEs) (or computational systems (CSs)) can be interconnected and effectively utilized in order to achieve performance that may not ordinarily be attainable on a single CE or CS. A distributed computing environment of this type demands consideration of heterogeneities in computational and communication resources. In a distributed environment, an incoming workload has to be efficiently allocated to these CEs or CSs so that no single CE or CS is overburdened while others remain idle. Further, task migration from highly loaded to moderately loaded areas in a network may alleviate, to some extent, the network traffic congestion problem. Workstation or system clusters are being recognized as the most promising computing resource of the near future: a large cluster, consisting of locally connected workstations, has power comparable to a supercomputer at a fraction of the cost. Distributing the total computational load across available processors is referred to as load balancing.
Keywords: Load Balancing, Load Balancing Techniques, Centralized Server, File Server, Service Status Count.
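As a hedged sketch of a feedback-driven dispatcher, each computational element periodically reports its load and the dispatcher sends the next task to the least-loaded CE, updating its view optimistically until fresher feedback arrives. The load metric and reporting interface are illustrative assumptions.

```python
# Hedged sketch of feedback-driven dispatching: computational elements (CEs) report
# their current load, and the dispatcher assigns each incoming task to the least
# loaded CE, updating its estimate until the next feedback message.
class FeedbackDispatcher:
    def __init__(self, ce_ids):
        self.load = {ce: 0.0 for ce in ce_ids}   # last known load per CE

    def report(self, ce_id, measured_load):
        """Called when a CE sends its periodic load feedback."""
        self.load[ce_id] = measured_load

    def dispatch(self, task_cost=1.0):
        ce = min(self.load, key=self.load.get)   # least-loaded CE wins
        self.load[ce] += task_cost               # optimistic update until next feedback
        return ce

d = FeedbackDispatcher(["ce1", "ce2", "ce3"])
d.report("ce2", 0.2); d.report("ce1", 2.5); d.report("ce3", 1.0)
print([d.dispatch() for _ in range(4)])          # spreads tasks toward lightly loaded CEs
```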
Full Paper

IJCST/44/1/D-1753
    23 Review of Agile Software Development Methodologies
Ankur Kumar Goel, Manika Goel

Abstract

Looking at software engineering designs and principles, we can see how software development methods have evolved over the past 50 years. Agile is a recently introduced method which has impressed software companies and IT professionals due to the cost effectiveness and quality of software developed using agile methods. Due to the current economic slowdown, many organizations strive to continue the trend towards adopting agile processes in order to achieve the benefits they can offer. These benefits include quick return on investment, good software quality, and higher customer satisfaction. Since there is a dearth of resources that can help organizations and IT professionals integrate agile methods into their day-to-day software development, this paper reviews various agile methodologies and presents the main studies on the implementation of Agile in geographically distributed teams.
Full Paper

IJCST/44/1/D-1754
    24 Simplistic Constructive Routing Protocol to Accomplish Best Throughput and Fairness
Kothuri Parashu Ramulu, Pravin Bhagwatrao Wattamwar

Abstract

Wireless Mesh Networks (WMNs) are an emerging technology in current networks. Current routing protocols use a predefined path for routing, which is vulnerable to attack and does not provide security. WMNs are a desirable communication paradigm because they achieve routing at low cost and are easy to deploy. This paper proposes a routing protocol called the Simplistic Constructive Routing Protocol (SPRP) which supports multiple flows in wireless mesh networks. There are four major components in SPRP which provide fairness and high throughput: (1) constructive forwarding path selection to provide multiple paths while minimizing duplicate transmissions, (2) constructive rate control to determine an appropriate transfer rate according to the current network conditions, (3) inter-node loss recovery to efficiently find and resend lost packets, and (4) clock-based forwarding to let only the priority node forward the traffic. Our results show that SPRP significantly outperforms existing routing and a pivotal time-serving routing protocol.
Full Paper

IJCST/44/1/D-1755
    25 Online Modeling of Proactive Moderation System for Auction Fraud Detection
P.P. DIVYA KUMAR REDDY, GVNKV SUBBA RAO

Abstract

We consider the problem of building online machine-learned models for detecting auction fraud on e-commerce web sites. Since the emergence of the World Wide Web, online shopping and online auctions have gained more and more popularity. While people are enjoying the benefits of online trading, criminals are also taking advantage of it to conduct fraudulent activities against honest parties to obtain illegal profit. Hence, proactive fraud-detection moderation systems are commonly applied in practice to detect and prevent such illegal and fraudulent activities. Machine-learned models, especially those that are learned online, are able to catch frauds more efficiently and quickly than human-tuned rule-based systems. In this paper, we propose an online probit model framework which takes online feature selection, coefficient bounds from human knowledge and multiple-instance learning into account simultaneously. Through empirical experiments on real-world online auction fraud detection data, we show that this model can potentially detect more frauds and significantly reduce customer complaints compared to several baseline models and the human-tuned rule-based system.
Full Paper

IJCST/44/1/D-1756
    26 Confucius: A Tool Supporting Collaborative Scientific Workflow Composition
BADIGA NEERAJA, GVNKV SUBBA RAO

Abstract

Collaboration has become a dominant feature of modern science. Many scientific problems are beyond the realm of an individual discipline or scientist to solve [1] and hence require collaborative efforts. Modern scientific data management and analysis usually rely on multiple scientists with diverse expertise. In recent years, such collaborative effort is often structured and automated by a dataflow-oriented process called a scientific workflow. Such workflows may have to be designed and revised among multiple scientists over a long time period. Existing tools are single-user-oriented and do not support workflow development in a collaborative fashion. Based on a scientific collaboration ontology, we propose a service-oriented collaboration model supported by a set of composable collaboration primitives and patterns. The collaboration protocols are then applied to support effective concurrency control in the process of collaborative workflow composition. We present the design and development of Confucius, a service-oriented collaborative scientific workflow composition tool that extends an open-source, single-user environment.
Full Paper

IJCST/44/1/D-1757
    27 Application of Characteristics Based Incremental Clustering in Mining of Server Log Messages
Garima Chouksey, Prateek Gupta

Abstract

Information Extraction (IE) is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents. In most cases this activity concerns processing human language texts by means of Natural Language Processing (NLP). Recent activities in multimedia document processing, such as automatic annotation and content extraction from images/audio/video, can also be seen as information extraction. It has been found that incremental clustering has not been applied very much to message extraction in the processing of machine event logs. Event logs are data sets that grow over time and require the clusters to be updated in real time as the log grows; because event logs are huge and grow rapidly as system processing goes on, the extraction of data from them must likewise be able to grow over time. In this work, we propose to apply incremental clustering to extract data from the event log according to the characteristics provided by the users of the system. Incremental clustering requires initial clusters to be decided in advance, i.e. they must pre-exist for processing, and if the initial clusters are to be fixed, there are several ways this can be achieved. The algorithm being proposed is a dynamic, novel algorithm for deciding the initial clusters dynamically.
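As a hedged sketch of characteristic-based incremental clustering over a growing log, each incoming message is tokenized, compared to existing cluster representatives, and either merged into the closest cluster or used to start a new one. The Jaccard similarity, threshold and choice of the first message as the cluster representative are illustrative assumptions.

```python
# Hedged sketch of incremental clustering over a growing event log: each new message
# joins the most similar existing cluster (Jaccard similarity over word tokens) or
# starts a new cluster if nothing is similar enough.
def tokens(msg):
    return set(msg.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

class IncrementalLogClusterer:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.clusters = []                       # each: {"rep": token set, "messages": [...]}

    def add(self, message):
        t = tokens(message)
        best, best_sim = None, 0.0
        for cluster in self.clusters:
            sim = jaccard(t, cluster["rep"])
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best is not None and best_sim >= self.threshold:
            best["messages"].append(message)
        else:
            self.clusters.append({"rep": t, "messages": [message]})

c = IncrementalLogClusterer()
for line in ["disk sda failed", "disk sdb failed", "user admin logged in", "user guest logged in"]:
    c.add(line)
print([cl["messages"] for cl in c.clusters])
```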
Full Paper

IJCST/44/1/D-1758
    28 Handwritten Chinese Text Recognition by Integrating Multiple Contexts
M. Sruthi, GVNKV SUBBA RAO

Abstract

This paper presents an effective approach for the offline recognition of unconstrained handwritten Chinese texts. Under the general integrated segmentation-and-recognition framework with character over segmentation, we investigate three important issues: candidate path evaluation, path search, and parameter estimation. For path evaluation, we combine multiple contexts (character recognition scores, geometric and linguistic contexts) from the Bayesian decision view, and convert the classifier outputs to posterior probabilities via confidence transformation. In path search, we use a refined beam search algorithm to improve the search efficiency and, meanwhile, use a candidate character augmentation strategy to improve the recognition accuracy. The combining weights of the path evaluation function are optimized by supervised learning using a Maximum Character Accuracy criterion. We evaluated the recognition performance on a Chinese handwriting database CASIA-HWDB, which contains nearly four million character samples of 7,356 classes and 5,091 pages of unconstrained handwritten texts. The experimental results show that confidence transformation and combining multiple contexts improve the text line recognition performance significantly. On a test set of 1,015 handwritten pages, the proposed approach achieved character-level accurate rate of 90.75 percent and correct rate of 91.39 percent, which are superior by far to the best results reported in the literature.
Full Paper

IJCST/44/1/D-1759
    29 TLWL: Fine Grained Load Balancing for SIP Servers
V. Sangeeta, K. L. Neelima

Abstract

Load-balancing algorithms are introduced for distributing Session Initiation Protocol (SIP) requests to a cluster of SIP servers for different purposes. The algorithms used here improve both throughput and response time for different users. We built a prototype of our system using the Linux operating system. Our Transaction Least-Work-Left (TLWL) algorithm plays a leading role by recognizing the variability in processing costs of different SIP transactions. By combining features of different algorithms it provides good response-time improvements, and we present a detailed analysis to show how our algorithms significantly reduce response time.
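As a hedged sketch of the Transaction Least-Work-Left idea, the dispatcher tracks the estimated outstanding work per server, weighting transaction types differently (e.g., an INVITE costing more than a BYE), and routes each new transaction to the server with the least weighted work left. The 1.75:1 weight, bookkeeping interface and completion callback are illustrative assumptions.

```python
# Hedged sketch of Transaction Least-Work-Left (TLWL): route each new SIP transaction
# to the server with the least outstanding weighted work, where transaction types
# carry different costs (an INVITE weighted more heavily than a BYE).
TRANSACTION_WEIGHT = {"INVITE": 1.75, "BYE": 1.0}   # assumed relative costs

class TLWLBalancer:
    def __init__(self, servers):
        self.work_left = {s: 0.0 for s in servers}
        self.assigned = {}                          # transaction id -> (server, weight)

    def route(self, txn_id, txn_type):
        server = min(self.work_left, key=self.work_left.get)
        weight = TRANSACTION_WEIGHT.get(txn_type, 1.0)
        self.work_left[server] += weight
        self.assigned[txn_id] = (server, weight)
        return server

    def complete(self, txn_id):
        server, weight = self.assigned.pop(txn_id)
        self.work_left[server] -= weight            # work is removed when the transaction ends

lb = TLWLBalancer(["sip1", "sip2"])
print(lb.route("t1", "INVITE"), lb.route("t2", "BYE"), lb.route("t3", "BYE"))
```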
Full Paper

IJCST/44/1/D-1760
    30 A Test Case Minimization in Component Based Software Models Review
Arvind Kumar

Abstract

Test case minimization in component-based software can be very effective in detecting defects in the whole system. In practice, however, failing test cases often comprise long sequences of method calls that are tiresome to reproduce and debug. This paper is about component-based software integration testing and the challenges that developers encounter during the development and testing phases of component-based systems.
Full Paper

IJCST/44/1/D-1761
    31 Reduce the Loss of Data Transfer in Networks Using STLCC
P.Lakshmi Prasad, SK Shafiulilah

Abstract

Presently the Internet accommodates simultaneous audio, video, and data traffic. This requires the Internet to limit packet loss, which in turn depends very much on congestion control. A series of protocols have been introduced to supplement the insufficient TCP mechanism for controlling network congestion. CSFQ was designed as an open-loop controller to provide a fair best-effort service by supervising per-flow bandwidth consumption, but it became helpless when P2P flows started to dominate Internet traffic. Token-Based Congestion Control (TBCC) is based on a closed-loop congestion control principle; it restricts the token resources consumed by an end-user and provides a fair best-effort service with O(1) complexity. Like Self-Verifying CSFQ and Re-feedback, it experiences a heavy load when policing inter-domain traffic, for lack of trust. In this paper, Stable Token-Limited Congestion Control (STLCC) is introduced as a new protocol which appends inter-domain congestion control to TBCC and makes the congestion control system stable. STLCC is able to shape output and input traffic at the inter-domain link with O(1) complexity. STLCC produces a congestion index, pushes packet loss to the network edge and improves network performance. Finally, a simple version of STLCC is introduced; this version is deployable in the Internet without any modifications to IP protocols and also preserves the packet datagram.
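As a hedged illustration of the token-limited idea at the network edge, each end-user holds a token allowance replenished at a fixed rate, and a packet is admitted only if the user still has tokens to spend, which bounds per-user resource consumption with O(1) work per packet. The refill rate, bucket depth and per-packet cost below are illustrative assumptions, not the protocol's actual parameters.

```python
# Hedged sketch of token-limited admission at the network edge (the TBCC/STLCC idea
# of restricting the token resources an end-user may consume): each user has a token
# bucket refilled at a fixed rate, and a packet is admitted only if tokens remain.
import time

class TokenLimiter:
    def __init__(self, rate_tokens_per_s=100.0, bucket_size=200.0):
        self.rate, self.size = rate_tokens_per_s, bucket_size
        self.tokens, self.last = bucket_size, time.monotonic()

    def admit(self, packet_cost=1.0):
        now = time.monotonic()
        self.tokens = min(self.size, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_cost:
            self.tokens -= packet_cost
            return True                     # forward the packet
        return False                        # congestion signal: drop or mark at the edge

edge = {user: TokenLimiter(rate_tokens_per_s=5, bucket_size=10) for user in ["alice", "bob"]}
print([edge["alice"].admit() for _ in range(12)].count(True))   # roughly bounded by bucket size
```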
Full Paper

IJCST/44/1/D-1762
    32 Scalable and Secure Sharing of Client Medical Records in Cloud
DivyaBharathi Neelam, Y.Chitti Babu, Dr. P. Harini

Abstract

Cloud computing has important implications for the privacy of personal information as well as for the confidentiality of business information. The location of information in the cloud may have significant effects on the privacy and confidentiality protections of that information, especially in sensitive areas such as hospital management systems. We provide a design and implementation of self-protecting client medical records (CMRs) using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). However, before cloud-based CMR systems can become a reality, issues of data security, patient privacy, and overall performance must be addressed. As standard encryption techniques (including symmetric-key and public-key) for CMR encryption/decryption cause increased access control and performance overhead, the paper proposes the use of CP-ABE to encrypt CMRs based on healthcare providers' attributes or credentials; to decrypt CMRs, providers must possess the set of attributes needed for proper access. This method is not only inexpensive but also provides the flexible, wide-area mobile access increasingly needed in the modern world.
Full Paper

IJCST/44/1/D-1763
    33 An Efficient Caching Scheme and Consistency Maintenance in Hybrid P2P System
Nagilla Rajalaxmi, GVNKV SUBBA RAO

Abstract

Peer-to-peer overlay networks are widely used in distributed systems. P2P networks can be divided into two categories: structured peer-to-peer networks, in which peers are connected by a regular topology, and unstructured peer-to-peer networks, in which the topology is arbitrary. The objective of this work is to design a hybrid peer-to-peer system for distributed data sharing which combines the advantages of both types of peer-to-peer networks and minimizes their disadvantages. Consistency maintenance is the propagation of updates from a primary file to its replicas. The Adaptive Consistency Maintenance Algorithm (ACMA) periodically polls the file owner to update the file; because of the minimum number of replicas, the consistency overhead is very low. The Top Caching (TC) algorithm helps to boost system performance and to build a fully distributed cache for the most popular information. Our caching scheme can deliver lower query delay, better load balance and higher cache hit ratios, and it effectively relieves the over-caching problem for the most popular objects.
Full Paper

IJCST/44/1/D-1764
    34 Scalable and Robust Detection of the Breakages in Wireless Networks
D.Sravanth, Y.Chitti Babu, Dr. P. Harini

Abstract

In this paper, we propose a distributed algorithm that allows every node to monitor the topology of the (initially connected) graph and detect if a cut occurs. For reasons that will become clear soon, one node of the network is denoted as the "source node". The algorithm consists of every node updating a local state periodically by communicating with its nearest neighbors. The state of a node converges to a positive value in the absence of a cut. If a node is rendered disconnected from the source as a result of a cut, its state converges to 0. By monitoring its state, therefore, a node can determine if it has been separated from the source node. In addition, the nodes that are still connected to the source are able to detect that, one, a cut has occurred somewhere in the network, and two, they are still connected to the source node. Since the algorithm is iterative, a faster convergence rate is desirable for it to be effective. The convergence rate of the proposed algorithm is not only quite fast, but is also independent of the size of the network. As a result, the delay between the occurrence of a cut and its detection by all the nodes can be made independent of the size of the network. This last feature makes the algorithm highly scalable to large sensor networks.
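As a hedged sketch of the iterative state update described above, every node repeatedly averages its neighbors' states while the source node adds a fixed positive input; states converge to positive values for nodes connected to the source and decay toward 0 for nodes separated by a cut. The specific update rule shown is one common form of this idea and is an assumption, not necessarily the paper's exact equations.

```python
# Hedged sketch of the distributed cut-detection iteration: each node repeatedly sets
# its state to the average of its neighbors' states (with the source node adding a
# constant positive input). States stay positive while a node is connected to the
# source and decay toward 0 after a cut.
def iterate_states(adjacency, source, source_strength=100.0, rounds=200):
    """adjacency: dict node -> set of neighbor nodes."""
    state = {v: 0.0 for v in adjacency}
    for _ in range(rounds):
        new = {}
        for v, nbrs in adjacency.items():
            total = sum(state[u] for u in nbrs)
            if v == source:
                total += source_strength
            new[v] = total / (len(nbrs) + 1)
        state = new
    return state

line = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}          # 0 - 1 - 2 - 3, source = 0
print({v: round(s, 2) for v, s in iterate_states(line, source=0).items()})
cut = {0: {1}, 1: {0}, 2: {3}, 3: {2}}                  # edge (1,2) removed: nodes 2,3 cut off
print({v: round(s, 2) for v, s in iterate_states(cut, source=0).items()})
```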
Full Paper

IJCST/44/1/D-1765
    35 Service Oriented Web Data Mining Using XML
Reetesh Rai, Nitin Shukla

Abstract

With the rapid development of science and technology, analyzing large amounts of Internet information in real time and at multiple levels has become a competitive trend of modern society. In view of the characteristics of the Web (openness, dynamic nature, heterogeneity and so on), accurately finding the information you need from scattered, massive amounts of data without unified management is a difficulty addressed by Web mining. However, Web-oriented data mining is more complex than mining a single data warehouse. XML underpins the WWW environment for Web data: XML is compatible with existing Web applications and supports Web information sharing and exchange. The emergence of XML technology provides a standard for data exchange on the Internet and, from the data perspective, a means of representing the content and meaning of data; data mining based on XML technology therefore provides new opportunities for data mining research. The algorithm proposed here is a dynamic, novel algorithm for web data mining using XML. The proposed work provides an opportunity for the user to select a particular domain of web data according to his requirements, and the implementation applies mining to the XML data converted from the web data. The Web is a major resource where data changes rapidly with time and is therefore dynamic in nature. Web data is utilized by users, advertisers, and search engines, and therefore researchers have a responsibility to provide fast and efficient information retrieval algorithms. This work offers to create clusters dynamically from web data and XML.
Full Paper

IJCST/44/1/D-1766
    36 An Efficient & Scalable Multi Frequency Scheduling to Eliminating Interference Links in Wireless Sensor Networks
P. Vasntha, Y.Chitti Babu, Dr. P. Harini

Abstract

A wireless sensor consists of small processor, memory, power, sensing and transceiver units. Additionally, a sensor can have a location-finding system, a mobilizer and a power generator, which are application-dependent sub-units [5]. The size and weight of a sensor limit its processing capability, the amount of memory and the amount of power that it can store. A major part of the power consumed by a sensor is used to run the transceiver circuitry, and as the transmission range of a sensor increases, the power consumed by the transceiver also increases. Since they have limited transmission range, sensors together form a multi-hop radio network to accomplish communication amongst themselves. Collisions occur in a wireless network when multiple nodes simultaneously transmit to the same node over the same channel, or when a receiver is in the transmission range of another communication taking place over the same channel. Such collisions waste resources (e.g., bandwidth and energy) and increase data latency, and hence they are undesirable. For broadcast and convergecast to work in a collision-free manner, we propose CFCSA (Collision-Free Convergecast Scheduling Algorithm), which builds an efficient spanning tree for data collection and transmission and allocates channels for wireless communication. We also evaluate the performance of various channel assignment methods of multi-frequency scheduling to eliminate most of the interference. The constructed spanning tree shows significant improvement in scheduling performance over different deployment densities.
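As a hedged sketch of one way such a schedule can be built, the snippet below constructs a BFS spanning tree rooted at the sink and greedily assigns each child-to-parent link the smallest (time slot, channel) pair that does not conflict with already-scheduled links. The interference model (links conflict if they share a node, or are on the same channel with a receiver adjacent to the other transmitter) and the greedy order are illustrative assumptions, not the CFCSA algorithm itself.

```python
# Hedged sketch of collision-free convergecast scheduling: build a BFS spanning tree
# rooted at the sink, then greedily give every child->parent link the smallest
# (slot, channel) pair not used by a conflicting link.
from collections import deque

def bfs_tree(adjacency, sink):
    parent, order, queue = {sink: None}, [sink], deque([sink])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:
                parent[v] = u
                order.append(v)
                queue.append(v)
    return parent, order

def _conflict(l1, l2, ch1, ch2, adjacency):
    if set(l1) & set(l2):                      # shared node: always a conflict (half-duplex)
        return True
    return ch1 == ch2 and (l1[1] in adjacency[l2[0]] or l2[1] in adjacency[l1[0]])

def schedule_convergecast(adjacency, sink, channels=2):
    parent, order = bfs_tree(adjacency, sink)
    links = [(v, parent[v]) for v in order if parent[v] is not None]
    assignment = {}
    for link in links:
        slot = 0
        while link not in assignment:
            for ch in range(channels):
                if all(not _conflict(link, other, ch, s_ch, adjacency)
                       for other, (s_slot, s_ch) in assignment.items() if s_slot == slot):
                    assignment[link] = (slot, ch)
                    break
            slot += 1
    return assignment

net = {0: {1, 2}, 1: {0, 3, 4}, 2: {0}, 3: {1}, 4: {1}}
print(schedule_convergecast(net, sink=0))      # link (child, parent) -> (slot, channel)
```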
Full Paper

IJCST/44/1/D-1767
    37 Efficiently Prevent the Jamming Attacks in Wireless Sensor Networks
C.P.Shivakrishna, B.V.Praveen Kumar

Abstract

Understanding and defending against jamming attacks has long been a problem of interest in wireless communication and radar systems. In wireless ad hoc and sensor networks using multihop communication, the effects of jamming at the physical layer resonate into the higher-layer protocols, for example by increasing collisions and contention at the MAC layer, interfering with route discovery at the network layer, increasing latency and impacting rate control at the transport layer, and halting or freezing at the application layer. Adversaries that are aware of higher-layer functionality can leverage any available information to improve the impact or reduce the resource requirement for attack success. For example, jammers can synchronize their attacks with MAC protocol steps, focus attacks on specific geographic locations, or target packets from specific applications. In this work, we address the problem of selective jamming attacks in wireless networks. In these attacks, the adversary is active only for a short period of time, selectively targeting messages of high importance. We illustrate the advantages of selective jamming in terms of network performance degradation and adversary effort by presenting two case studies: a selective attack on TCP and one on routing. We show that selective jamming attacks can be launched by performing real-time packet classification at the physical layer. To mitigate these attacks, we develop three schemes that prevent real-time packet classification by combining cryptographic primitives with physical-layer attributes. We analyse the security of our methods, augmented with public-key encryption algorithms (e.g., RSA), and evaluate their computational and communication overhead.
Full Paper

IJCST/44/1/D-1768
    38 A Regularization Algorithm for Building Ranking Model
G Saroja Deepthi, P Suresh Babu

Abstract

Building a ranking model for each domain is a laborious and time-consuming task which directly affects the search process on the web. In this paper, we propose a regularization-based algorithm called Ranking Adaptation SVM (RA-SVM) to overcome this difficulty. It allows us to adapt an existing ranking model to a new domain, so that the amount of labeled data and the training cost are reduced while the desired performance is retained. The algorithm requires only the predictions from the existing ranking models; it does not require their internal representations. Experiments are performed over different datasets to demonstrate the applicability of our method and its performance.
Full Paper

IJCST/44/1/D-1769
    39 An Efficient & Secure Decentralized Key Policy CP-Attribute Based Encryption
Satish Kumar Garapati, Rambabu Pemula, K.Anitha

Abstract

In reality, it is impossible for attributes to be monitored by one authority. In MA-ABE, the universal attribute set is divided into several domains managed by corresponding authorities. A user will submit his attributes to all the authorities to get his decryption key. Decentralized ABE is a special form of MA-ABE and does not require a trusted central authority to conduct the system setup. In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as an ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any Boolean formula over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority. In constructing our system, our largest technical hurdle is to make it collusion resistant. Prior Attribute-Based Encryption systems achieved collusion resistance when the ABE system authority "tied" together different components (representing different attributes) of a user's private key by randomizing the key. However, in our system each component will come from a potentially different authority, where we assume no coordination between such authorities. We create new techniques to tie key components together and prevent collusion attacks between users with different global identifiers.
Full Paper

IJCST/44/1/D-1770
    40 Quality Driven Assurance for Dynamic Reconfiguration of Components
G. Sailaja, P.Harini, K.Eswar

Abstract

In the static approach, a typical software update occurs by stopping the system to be updated, performing the update of the code, and restarting the system. The system is not active during reconfiguration and must reboot whenever an update is made. In this paper a dynamic reconfiguration technique is introduced to maintain Quality of Service, which is meant to reduce application disruption during system transformation. This dynamic reconfiguration technique involves the ability to change the system's functionality or topology while the system is running, and supports safe dynamic reconfiguration such as the insertion, removal and replacement of components. Our major challenge for this dynamic reconfiguration technique is to maintain the Quality of Service during system transformation, which has been achieved. The true benefit of this technique is application consistency and service continuity. The motivation for this dynamic reconfiguration technique is adaptability and high availability, which are both Quality-of-Service-driven characteristics. An adaptive system is capable of runtime reconfiguration and works in unanticipated environments. In order to achieve high reliability and availability [3], distributed component software has to support dynamic reconfiguration in order to avoid downtime caused by rebooting the system.
Full Paper

IJCST/44/1/D-1771