INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST)-VOL III ISSUE III, VER. 2, JULY TO SEPTEMBER, 2012


S.No. Research Topic Paper ID
   44 Monitoring Cryptographic Strength of Wireless Sensor Networks using ZKP

Arunakumari Bellamkonda, K. R. Rammohan Rao

Abstract

The security mechanisms used for wired networks cannot be directly applied to sensor networks: there is no user control of each individual node, the environment is wireless, and, more importantly, energy resources are scarce. Wireless Sensor Networks (WSNs) offer a powerful methodology for monitoring environments and have many interesting applications. In this paper, we address some of the special security threats and attacks in WSNs. We propose a scheme for detecting the distributed sensor cloning attack, together with a zero-knowledge protocol (ZKP) for verifying the authenticity of sender sensor nodes. The cloning attack is addressed by attaching to each node a unique fingerprint that depends on the node itself and its set of neighboring nodes. The fingerprint is attached to every message a sensor node sends. The zero-knowledge protocol ensures that crucial cryptographic information is never transmitted over the wireless network, thereby preventing man-in-the-middle and replay attacks. The paper presents a detailed analysis of various scenarios and also analyzes the performance and cryptographic strength of the scheme.
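The zero-knowledge idea, proving knowledge of a secret without ever sending it, can be illustrated with one honest round of a generic Fiat-Shamir-style identification protocol. This is a toy sketch with an assumed tiny modulus and made-up parameter names, not the paper's exact scheme:

```python
import random

def fiat_shamir_round(n, s, v):
    """One honest round: the prover shows it knows s with v = s^2 mod n,
    without ever transmitting s itself."""
    r = random.randrange(2, n)       # prover's fresh commitment randomness
    x = (r * r) % n                  # commitment sent to the verifier
    e = random.randrange(2)          # verifier's one-bit challenge
    y = (r * pow(s, e, n)) % n       # response; reveals nothing about s alone
    return (y * y) % n == (x * pow(v, e, n)) % n   # verifier's check

n = 3233                 # toy modulus (61 * 53); real schemes use large n
s = 123                  # the sensor node's secret
v = (s * s) % n          # public value registered with the verifier
assert all(fiat_shamir_round(n, s, v) for _ in range(20))
```

A cheating prover without s passes each one-bit challenge only with probability 1/2, so repeating the round drives the forgery probability down exponentially.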
Full Paper

IJCST/33/2/A-896
   45 Efficient Cost and Secure Wireless Sensor Network using Virtual Energy Encryption and Keying (VEEK)

Dr. T. K. Ramakrishna Rao, E. Raveendra Reddy

Abstract

Communication is very costly for Wireless Sensor Networks (WSNs), and for certain WSN applications (e.g., military scenarios), independent of the goal of saving energy, it may be very important to minimize the exchange of messages. To address these concerns, we present a secure communication framework for WSNs called Virtual Energy-Based Encryption and Keying (VEEK). In comparison with other key management schemes, VEEK has the following benefits: 1) it does not exchange control messages for key renewals and is therefore able to save more energy and is less chatty; 2) it uses one key per message, so successive packets of the stream use different keys, making VEEK more resilient to certain attacks (e.g., replay attacks, brute-force attacks, and masquerade attacks); and 3) it unbundles key generation from security services, providing a flexible modular architecture that allows easy adoption of different key-based encryption or hashing schemes.
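Benefit 2, a fresh key per message with no key-renewal traffic, can be sketched as a hash chain that both ends evolve locally from shared state. The function below is my own minimal illustration of the idea (the energy values and initial key are assumptions), not VEEK's actual keying module:

```python
import hashlib

def next_key(key: bytes, virtual_energy: int) -> bytes:
    # Evolve the key locally from the node's virtual-energy value, so
    # successive packets use different keys and no key-renewal control
    # messages are ever exchanged over the radio.
    return hashlib.sha256(key + virtual_energy.to_bytes(4, "big")).digest()

sender = receiver = b"shared-initial-key"
for energy in (100, 97, 94):     # energy bookkeeping after each transmission
    sender = next_key(sender, energy)
    receiver = next_key(receiver, energy)

assert sender == receiver        # both ends stay synchronized silently
```

Because each key depends on the whole history, replaying an old packet under an old key fails verification at the receiver.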
Full Paper

IJCST/33/2/A-897
   46 Security Architecture Implemented in Cloud Computing using Linear Programming

K. Ravindra, K. Saritha

Abstract

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Protecting a customer's confidential data in the cloud has become a major concern. In this paper, we present a secure outsourcing mechanism for solving large-scale systems of linear programming (LP) computations in the cloud. In order to achieve practical efficiency, our mechanism design explicitly decomposes the LP computation outsourcing into public LP solvers running on the cloud and private LP parameters owned by the customer. In particular, by formulating the customer's private data for the LP problem as a set of matrices and vectors, we are able to develop a set of efficient privacy-preserving problem transformation techniques, which allow customers to transform the original LP problem into an arbitrary one while protecting sensitive input/output information. To validate the computation result, we further explore the fundamental duality theorem of LP and derive the necessary and sufficient conditions that a correct result must satisfy. Such a result verification mechanism is extremely efficient and incurs close-to-zero additional cost for both the cloud server and the customers.
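The duality-based result check can be sketched as follows: for an LP of the form max c^T x subject to Ax <= b, x >= 0, a returned primal/dual pair is accepted only if both are feasible and their objective values coincide (strong duality). This is a generic sketch over my own toy data, not the paper's transformed-problem protocol:

```python
def verify_lp_result(A, b, c, x, y, tol=1e-9):
    # Accept the cloud's answer only if x is primal-feasible, y is
    # dual-feasible, and the two objective values coincide.
    m, n = len(A), len(A[0])
    primal_ok = all(sum(A[i][j] * x[j] for j in range(n)) <= b[i] + tol
                    for i in range(m)) and all(v >= -tol for v in x)
    dual_ok = all(sum(A[i][j] * y[i] for i in range(m)) >= c[j] - tol
                  for j in range(n)) and all(v >= -tol for v in y)
    gap = abs(sum(c[j] * x[j] for j in range(n)) -
              sum(b[i] * y[i] for i in range(m)))
    return primal_ok and dual_ok and gap <= tol

A = [[1, 1], [1, 3]]; b = [4, 6]; c = [3, 2]      # toy LP instance
assert verify_lp_result(A, b, c, x=[4, 0], y=[3, 0])      # optimal pair
assert not verify_lp_result(A, b, c, x=[3, 1], y=[3, 0])  # suboptimal x rejected
```

The check costs one pass over A, far cheaper than solving the LP, which is why verification adds close-to-zero overhead for the customer.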
Full Paper

IJCST/33/2/A-898
   47 Evaluating the Working of Blocking Misbehaving Users in Anonymous N/Ws

Lakshmi Jammulamadaka, D. Kumar, P. Pedda Sadhu Naik

Abstract

Several anonymous authentication schemes allow servers to revoke a misbehaving user's ability to make future accesses. Traditionally, these schemes have relied on powerful trusted third parties (TTPs) capable of deanonymizing (or linking) users' connections. Recent schemes such as Blacklistable Anonymous Credentials (BLAC) and Enhanced Privacy ID (EPID) support "privacy-enhanced revocation": servers can revoke misbehaving users without TTP involvement and without learning the revoked users' identities. In BLAC and EPID, however, the computation required for authentication at the server is linear in the size of the revocation list. We propose a new anonymous authentication scheme for which this bottleneck computation is independent of the size of the revocation list. Instead, the time complexity of authentication is linear in the size of a revocation window, the number of subsequent authentications before which a user's misbehavior must be recognized if the user is to be revoked. We prove the security of our construction and have developed a prototype implementation to validate its efficiency experimentally.
Full Paper

IJCST/33/2/A-899
   48 Aliased Detection Mode for Detecting Clone Attacks in Wireless Sensor

M. Nagarani, M. Shirisha

Abstract

A central problem in sensor network security is that sensors are susceptible to physical capture attacks. Once a sensor is compromised, the adversary can easily launch clone attacks by replicating the compromised node, distributing the clones throughout the network, and starting a variety of insider attacks. In this paper, we consider fingerprint generation, with fingerprint verification conducted at both the base station and the neighboring sensors, which ensures a high detection probability. We extend this approach by introducing a new enhancement called Aliased Detection Mode (ADM): our system extends fingerprint generation to capture node behavior and node sensitivity by specifying automatic range values, determined by the network modulation, at node creation. The system expects the range of the node when the node enters the sensor network. Range values are incorporated into the fingerprint of each sensor, which enables detection of clone attacks in the network. The security and performance analysis indicates that our extended fingerprint generation algorithm can identify clone attacks with a high detection probability at the cost of a low computation/communication/storage overhead.
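The core of neighborhood-based fingerprinting can be sketched in a few lines: bind each node's ID to a digest of its neighbor set, so a clone dropped elsewhere in the network, seeing different neighbors, cannot reproduce the original's fingerprint. The node IDs below are hypothetical and the hash construction is my simplification of the idea:

```python
import hashlib

def fingerprint(node_id: str, neighbor_ids) -> str:
    # A node's fingerprint binds its own ID to its neighborhood; a clone
    # deployed elsewhere sees different neighbors and so cannot reproduce
    # the fingerprint that verifiers expect for this ID.
    material = ",".join([node_id] + sorted(neighbor_ids))
    return hashlib.sha256(material.encode()).hexdigest()

original = fingerprint("n7", ["n3", "n9", "n12"])
clone = fingerprint("n7", ["n21", "n30"])   # same ID, different neighborhood
assert original != clone
assert original == fingerprint("n7", ["n12", "n9", "n3"])  # order-independent
```

Sorting the neighbor IDs makes the digest independent of discovery order, so all honest verifiers compute the same value.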
Full Paper

IJCST/33/2/A-900
   49 Data Stream Intrusion Alert Aggregation for Online and Offline

K. V. Nagamani, J. Rajanikiran

Abstract

This work presents an efficient intrusion alert aggregation strategy for distributed heterogeneous sources. The primary objective is to generate meta-alerts using a probabilistic technique with offline and online alert aggregation. The proposed approach has several distinct properties. It is a generative modeling approach using probabilistic methods: we assume that attack instances can be regarded as random processes producing alerts, and we model these processes using approximate maximum likelihood parameter estimation techniques. Thus, the beginning as well as the completion of attack instances can be detected. It is also a data stream approach, i.e., each observed alert is processed only a few times, so it can be applied online and under harsh timing constraints. The main objective is to identify and cluster the different alerts, produced by low-level intrusion detection systems, firewalls, etc., that belong to a specific attack instance initiated by an attacker at a certain point in time. Meta-alerts can then be generated for the clusters, containing all the relevant information while the amount of data (i.e., alerts) is reduced substantially. Meta-alerts are the basis for reporting to security experts or for communication within a distributed intrusion detection system. With benchmark data sets, we demonstrate that it is possible to achieve reduction rates of up to 97% while the number of missing meta-alerts is extremely low. In addition, meta-alerts are generated with a delay of typically only a few seconds after observing the first alert belonging to a new attack instance.
Full Paper

IJCST/33/2/A-901
   50 Wireless Sensor Network Collecting Secure Data using Randomized Routes

Ch. Hemanand, J. Rajakala, M. Satish Kumar

Abstract

Among the various possible threats in WSNs, such as node failure and data security, this paper addresses how to circumvent the 'blackholes' formed by Compromised-Node (CN) and denial-of-service (DoS) attacks using suitable routing mechanisms. The basic idea is to combat the vulnerability of existing systems in handling such attacks, which stems from their deterministic nature: once an adversary acquires the routing algorithm, he can figure out the same routes known to the source and hence intercept all information sent over these routes. We have developed a structure that generates randomized routes. Under this design, the routes taken by the 'shares' of different packets change from time to time, increasing the randomness of the paths selected for transferring the information. Therefore, even if the adversary comes to know the routing algorithm, he still cannot pinpoint the routes, since each packet traverses a random route. Besides randomness, the routes generated by our mechanisms are also energy efficient and dispersive, which makes them capable of circumventing the blackholes at low energy cost. Extensive simulations are conducted to verify the validity of our mechanisms.
Full Paper

IJCST/33/2/A-902
   51 Efficient Energy False Report Detection Algorithm in Wireless Sensor Network

S. Ratalu, T. SudhaRani

Abstract

Intruders can inject bogus reports via compromised nodes and launch DoS attacks against legitimate reports in wireless sensor networks (WSNs). For many WSN applications, users may want to continuously extract data from the networks for later analysis. In this paper, an energy-efficient sleep/awake scheduling algorithm is proposed along with a dynamic en-route filtering scheme. As the sensor nodes are allowed to sleep periodically under certain conditions, the energy consumption of all nodes, including the cluster head, is reduced. The cluster head need not collect data from all the sensor nodes, only from the cluster members that are awake; without such scheduling, the cluster head would have to collect data from every sensor node and would be overloaded all the time. The dynamic en-route filtering scheme addresses both false report injection and DoS attacks. Each node sends its key to the forwarding nodes and later discloses it, allowing the forwarding nodes to verify its reports. A Hill Climbing key dissemination approach ensures that nodes closer to data sources have stronger filtering capacity and also drops fabricated reports en route without symmetric key sharing. Thus, this approach achieves stronger security protection.
Full Paper

IJCST/33/2/A-903
   52 Peer-to-Peer Environment – Indexing of Query on Spatial Data

S. Srinivasa Rao, P. Subba Rao

Abstract

The unprecedented growth and increased importance of geographically distributed spatial data has created a strong need for efficient sharing of such data among users. Interestingly, the ever-increasing popularity of peer-to-peer (P2P) systems has opened exciting possibilities for such sharing. This motivates our investigation into spatial indexing in P2P systems. The proposed framework, SPATIALP2P, maintains the multidimensionality of the spatial data. SPATIALP2P supports dynamic insertion and deletion of spatial information of various sizes and dynamic joining and leaving of peers. The peer-tree indexing structure performs the search process very efficiently, without the need for load balancing, and provides efficient storing, indexing, and searching services by preserving locality and directionality. As a result, SPATIALP2P performs exceptionally well for point and range query operations.
Full Paper

IJCST/33/2/A-904
   53 To Distribute the Total Traffic Among the Available Paths Using Multi-Path Routing Protocol

P. Adinarayana, G. Aparna

Abstract

Multiple-path source routing protocols allow a data source node to distribute the total traffic among available paths. In this article, we consider the problem of jamming-aware source routing in which the source node performs traffic allocation based on empirical jamming statistics at individual network nodes. We formulate this traffic allocation as a lossy network flow optimization problem using portfolio selection theory from financial statistics. We show that in multi-source networks, this centralized optimization problem can be solved using a distributed algorithm based on decomposition in Network Utility Maximization (NUM). We demonstrate the network’s ability to estimate the impact of jamming and incorporate these estimates into the traffic allocation problem. Finally, we simulate the achievable throughput using our proposed traffic allocation method in several scenarios.
Full Paper

IJCST/33/2/A-905
   54 Effective Link Utilization in Data Center Servers using Ficonn

R. Neelima, D. Srinivas

Abstract

A data center network is an interconnection of servers. The tree-hierarchy connection of servers currently in practice uses only one of the two Ethernet ports that each server has. This does not provide high network capacity or robustness to link/node failure. In this paper, we present a novel approach to building the network in which both server ports are actively used for connectivity, instead of keeping one of the ports as backup. This approach provides better scalability and makes the network more effective in handling node crashes or communication link failures. We design such a networking structure, called FiConn [1]. We have further developed a low-overhead traffic-aware routing mechanism to improve effective link utilization based on dynamic traffic state.
Full Paper

IJCST/33/2/A-906
   55 Automatic Camouflaging Worm Detection Over Internet

N. Surya Prakash Raju, M. Anil Kumar

Abstract

Active worms pose major security threats to the Internet, due to their ability to propagate in an automated fashion as they continuously compromise computers. Active worms evolve during their propagation and thus pose great challenges to defense. In this paper, we investigate a new class of active worms, referred to as the Camouflaging Worm (C-Worm for short). The C-Worm differs from traditional worms in its ability to intelligently manipulate its scan traffic volume over time, thereby camouflaging its propagation from existing worm detection systems that analyze the propagation traffic generated by worms. We analyze the characteristics of the C-Worm and conduct a comprehensive comparison between its traffic and non-worm (background) traffic. We observe that the two types of traffic are barely distinguishable in the time domain; however, their distinction is clear in the frequency domain, due to the recurring manipulative nature of the C-Worm. Motivated by these observations, we design automatic detection of the C-Worm, assigning priorities to the different worms over the Internet; the proposed system acts on each worm automatically according to its priority, without disturbing legitimate traffic. Our scheme uses the Power Spectral Density (PSD) distribution of the scan traffic volume and its corresponding Spectral Flatness Measure (SFM) to distinguish C-Worm traffic from background traffic. Using a comprehensive set of detection metrics and real-world traces as background traffic, we conduct extensive performance evaluations of our proposed spectrum-based detection scheme. The performance data clearly demonstrate that our scheme can effectively detect C-Worm propagation.
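The SFM statistic itself is easy to state: it is the ratio of the geometric mean to the arithmetic mean of the PSD samples. The sketch below computes it over two made-up spectra for illustration; the thresholds and sample values are assumptions, not the paper's calibrated detector:

```python
import math

def spectral_flatness(psd):
    # SFM = geometric mean / arithmetic mean of the PSD samples.
    # Values near 1 indicate a flat, noise-like spectrum; recurring
    # scan-rate manipulation concentrates power at a few frequencies,
    # pushing the SFM toward 0.
    geometric = math.exp(sum(math.log(p) for p in psd) / len(psd))
    arithmetic = sum(psd) / len(psd)
    return geometric / arithmetic

flat = [1.0, 1.1, 0.9, 1.0]      # background-like, roughly flat spectrum
peaky = [10.0, 0.1, 0.1, 0.1]    # energy concentrated at one frequency
assert spectral_flatness(flat) > 0.9
assert spectral_flatness(peaky) < 0.3
```

A detector would then flag scan traffic whose SFM falls below a threshold learned from background traces.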
Full Paper

IJCST/33/2/A-907
   56 Horizontal Aggregations in SQL to Prepare Data Sets for Data Mining Analysis

M. Sunil Kumar, N.Surya Prakash Raju

Abstract

Horizontal aggregations are a new class of functions that return aggregated columns in a horizontal layout. Most algorithms require data sets with a horizontal layout as input, with several records and one variable or dimension per column. Managing large data sets without DBMS support can be a difficult task, and trying different subsets of data points and dimensions is more flexible, faster, and easier inside a relational database with SQL queries than outside with alternative tools. Horizontal aggregations can be performed with operators that can easily be implemented inside a query processor, much like select, project, and join. The PIVOT operator on tabular data exchanges rows and columns, enabling data transformations useful in data modeling, data analysis, and data presentation. Horizontal aggregations represent a template to generate SQL code from a data mining tool; this SQL code reduces manual work in the data preparation phase of a data mining project. Conventional RDBMSs usually manage tables in vertical form, whereas aggregated columns in a horizontal tabular layout return a set of numbers instead of one number per row. The system uses one parent table and several child tables; operations are then performed on the data loaded from the multiple tables. The PIVOT operator offered by the RDBMS is used to calculate the aggregate operations; it is a much faster method and offers greater scalability. Partitioning the large data set obtained from the horizontal aggregation into homogeneous clusters is an important task in this system, and a k-means algorithm implemented in SQL is best suited for this operation.
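The row-to-column transposition can be sketched with a CASE-based query (one common way to express horizontal aggregation when, as in SQLite, no PIVOT operator exists). The table and values below are my own toy example:

```python
import sqlite3

# Vertical input: one row per (store, month, amount).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, month TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("A", "Jan", 10), ("A", "Feb", 20), ("B", "Jan", 5), ("B", "Feb", 7),
])

# Horizontal aggregation: one row per store, one column per month.
rows = conn.execute("""
    SELECT store,
           SUM(CASE WHEN month = 'Jan' THEN amount ELSE 0 END) AS jan,
           SUM(CASE WHEN month = 'Feb' THEN amount ELSE 0 END) AS feb
    FROM sales GROUP BY store ORDER BY store
""").fetchall()
assert rows == [("A", 10.0, 20.0), ("B", 5.0, 7.0)]
```

The horizontal result, one record per entity with one column per category, is exactly the layout most data mining algorithms expect as input.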
Full Paper

IJCST/33/2/A-908
   57 A New Technique for Scalable and Efficient Data Operation in Cloud

K. Sri Kishore, M. BalaKrishna

Abstract

Cloud computing refers to the use of and access to multiple server-based computational resources via a digital network (WAN, Internet connection using the World Wide Web, etc.). Cloud users may access the server resources using a computer or another device. In cloud computing, applications are provided and managed by the cloud server, and data is also stored remotely in the cloud configuration. Users do not download and install applications on their own device or computer; all processing and storage is maintained by the cloud server. Security is a major problem in the cloud, as it is open to many users. In order to maintain the integrity of data in the cloud, we provide public auditability for the network. This can be done by a third-party auditor (TPA), on behalf of the cloud client, verifying the integrity of the dynamic data stored in the cloud. This eliminates the client's involvement in checking the intactness of its data, which can be important in achieving economies of scale for cloud computing. Data operations such as block modification, insertion, and deletion are also significant cloud functions addressed here. We identify the security problems of direct extensions with dynamic data updates in prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof-of-storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result to a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows that the proposed schemes are highly efficient and provably secure.
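The Merkle Hash Tree at the heart of the verification scheme can be sketched as follows: the client keeps only the root digest, and any modified block changes the root. This is a minimal integrity sketch with assumed block contents, not the paper's full protocol (which also authenticates block tags and supports dynamic updates with logarithmic proofs):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    # The server stores the file blocks; the client (or TPA) keeps only
    # the root hash and recomputes it to audit integrity.
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)

tampered = blocks[:]
tampered[2] = b"forged"             # a single modified block changes the root
assert merkle_root(tampered) != root
assert merkle_root(blocks) == root
```

In the full scheme the auditor does not rehash the whole file: the server returns one block plus its sibling hashes, and the auditor recomputes only the path to the root.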
Full Paper

IJCST/33/2/A-909
   58 Privacy Preserving Location Monitoring System for Wireless Sensor Network Algorithm

M. A. Ahmedsha, M. M. Balakrishna

Abstract

In a location monitoring system using identity sensors, the sensor nodes report the exact location information of the monitored persons to the server; thus, using identity sensors immediately poses a major privacy breach. In this paper, we propose a privacy-preserving location monitoring system for wireless sensor networks. We design two in-network location anonymization algorithms, namely resource-aware and quality-aware algorithms, that preserve personal location privacy while enabling the system to provide location monitoring services. Both algorithms rely on the well-established k-anonymity privacy concept, which requires that a person be indistinguishable among k persons. In our system, sensor nodes execute our location anonymization algorithms to provide k-anonymous aggregate locations, in which each aggregate location is a cloaked area A with the number of monitored objects N located in A, where N ≥ k. The resource-aware algorithm aims to minimize communication and computational cost, while the quality-aware algorithm aims to minimize the size of the cloaked areas in order to generate more accurate aggregate locations.
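The k-anonymous aggregate-location output can be sketched centrally in a few lines: report only a cloaked rectangle plus a count N ≥ k, never exact positions. The grouping heuristic and coordinates below are my own toy illustration, not the paper's in-network resource- or quality-aware algorithms:

```python
def aggregate_locations(positions, k):
    # Group monitored objects into cloaked areas, each reported only as a
    # bounding rectangle plus a count N >= k, never as exact positions.
    positions = sorted(positions)          # cluster along x for simplicity
    groups, i = [], 0
    while i < len(positions):
        # take k objects, or fold a leftover of fewer than k into the last group
        j = len(positions) if len(positions) - i < 2 * k else i + k
        pts = positions[i:j]
        xs, ys = [p[0] for p in pts], [p[1] for p in pts]
        groups.append(((min(xs), min(ys), max(xs), max(ys)), len(pts)))
        i = j
    return groups

locs = [(1, 1), (2, 3), (3, 2), (8, 8), (9, 7), (10, 9), (11, 8)]
for area, n in aggregate_locations(locs, k=3):
    assert n >= 3       # every reported aggregate covers at least k objects
```

The quality-aware variant in the paper differs precisely in how it chooses the groups: it shrinks each rectangle as much as possible while keeping N ≥ k.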
Full Paper

IJCST/33/2/A-910
   59 Maximizing the System Lifetime of Query Based Wireless Sensor Network: QOS Control Algorithm

A. Kumara Swami, N. Surya Prakash Raju

Abstract

Wireless Sensor Networks (WSNs) present several unique characteristics such as resource-constrained sensors, random deployment, and data-centric communication protocols. These characteristics pose unprecedented challenges in the area of query processing in WSNs. This paper presents the design and validation of adaptive fault-tolerant QoS control algorithms with the objective of achieving the desired Quality of Service (QoS) requirements and maximizing the system lifetime in query-based WSNs. Data sensing and retrieval in WSNs have great applicability in military, environmental, medical, home, and commercial applications. In query-based WSNs, a user issues a query with QoS requirements in terms of reliability and timeliness, and expects a correct response to be returned within the deadline. Satisfying these QoS requirements requires fault tolerance mechanisms through redundancy, which may cause the energy of the system to deplete quickly. We analyze the effect of redundancy on the mean time to failure (MTTF) of query-based cluster-structured WSNs, defined as the mean number of queries that a WSN is able to answer correctly until it fails due to channel faults, sensor faults, or sensor energy depletion. We show that a tradeoff exists between redundancy and MTTF; furthermore, an optimal redundancy level exists such that the MTTF of the system is maximized. We develop adaptive fault-tolerant QoS control algorithms based on hop-by-hop data delivery utilizing "source" and "path" redundancy, with the goal of satisfying application QoS requirements while prolonging the lifetime of the sensor system. We develop a mathematical model for the lifetime of the sensor system as a function of system parameters, including the "source" and "path" redundancy levels utilized. We discover that there exist optimal "source" and "path" redundancy levels under which the lifetime of the system is maximized while satisfying application QoS requirements. Numerical data are presented and validated through extensive simulation, with physical interpretations given, to demonstrate the feasibility of our algorithm design.
Full Paper

IJCST/33/2/A-911
   60 FEC & Turbo Code Decoding Using BP Algorithm in LTE and WiMAX Systems

M. V. V. Rama Rao, Gumpula Raju

Abstract

Some wireless communication systems, such as IS-54, Enhanced Data rates for GSM Evolution (EDGE), Worldwide Interoperability for Microwave Access (WiMAX), and Long Term Evolution (LTE), have adopted low-density parity-check (LDPC), tail-biting convolutional, and turbo codes as the forward error correction (FEC) schemes for data and overhead channels. Many efficient algorithms have therefore been proposed for decoding these codes. However, the different decoding approaches for these families of codes usually lead to different hardware architectures. Since these codes work side by side in the new wireless systems, it is attractive to introduce a universal decoder that handles both families. The present work exploits the parity-check matrix (H) representation of tail-biting convolutional and turbo codes, thus enabling decoding via a unified belief propagation (BP) algorithm. Indeed, the BP algorithm provides a highly effective general methodology for devising low-complexity iterative decoding algorithms for all convolutional code classes as well as turbo codes. While a small performance loss is observed when decoding turbo codes with BP instead of MAP, this is offset by the lower complexity of the BP algorithm and the inherent advantage of a unified decoding architecture.
Full Paper

IJCST/33/2/A-912
   61 Trivial Support For Continuous Queries in Formless Peer-To-Peer Networks

M. Chandrasekhar Varma, M. M. Balakrishna

Abstract

This paper presents CoQUOS, a scalable and lightweight middleware to support continuous queries in unstructured P2P networks. A key strength of the CoQUOS system is that it can be implemented on any unstructured overlay network; moreover, CoQUOS preserves the simplicity and flexibility of the overlay network. The CoQUOS middleware is a completely decentralized scheme to register a query in different regions of the P2P network. This mechanism includes two novel components, namely a cluster-resilient random walk and a dynamic probability-based query registration technique. The loosely coupled and highly dynamic nature of the underlying P2P network poses several additional challenges. This paper focuses on the issues that are of particular importance to the performance of the CoQUOS system, namely churn of the P2P overlay and load distribution among peers.
Full Paper

IJCST/33/2/A-913
   62 Privacy Protection in Location Based Mobile Users: KAWCR

M. Gopinath Reddy, Gumpula Raju

Abstract

The emerging location-detection devices, together with ubiquitous connectivity, have enabled a large variety of Location Based Services (LBS). Unfortunately, LBS may threaten users' privacy. K-anonymity, cloaking the user location to a K-anonymizing spatial region (K-ASR), has been extensively studied to protect privacy in LBS. The traditional K-anonymity method needs complex query processing algorithms at the server side. SpaceTwist [8] rectifies this shortcoming of traditional K-anonymity, since it only requires incremental nearest neighbor (INN) query processing techniques at the server side. However, SpaceTwist may fail, since it cannot guarantee K-anonymity. In this paper, our proposed framework, called KAWCR (K-Anonymity Without Cloaked Region), rectifies the shortcomings and retains the advantages of the above two techniques. KAWCR only needs the server to process INN queries and can guarantee that the user issuing a query is indistinguishable from at least K-1 other users. Extensive experimental results show that the communication cost of KAWCR for kNN queries is lower than that of both traditional K-anonymity and SpaceTwist.
Full Paper

IJCST/33/2/A-914
   63 Force-Capable Protocol For Joint Networks

G. Ganesh Sriram, CH. Raja Jacob

Abstract

Energy efficiency and reliability are important issues in wireless networks. We propose a protocol for joint communication that achieves low energy dissipation and high reliability in the network. We model a joint path and then establish a joint protocol that organizes nodes into clusters for the joint transmission of data. We analyze the end-to-end robustness of the protocol to data-packet loss, along with the tradeoff between energy consumption and error rate. The analysis results are used to compare the energy savings and end-to-end robustness of our protocol with two non-joint schemes, as well as with another joint protocol. The results show that nodes placed on a grid save up to 80% of energy, while randomly placed nodes save up to 40%, which increases the lifetime and reduces the error rate of the wireless network. The comparison also shows that, when nodes are positioned on a grid, the probability of packet delivery failure is reduced by two orders of magnitude for the considered joint sensor network settings.
Full Paper

IJCST/33/2/A-915
   64 Uninterrupted Detection of Segment Nodes in Wireless Sensor Networks

K. Hari Prasad, D. Srinivas

Abstract

Neighbor discovery is an important task in wireless networks, and especially in sensor networks. Neighbor information can be used to improve routing, clustering, and scheduling algorithms. A sensor network may contain a huge number of simple sensor nodes that are deployed at some inspected site. In large areas, such a network usually has a mesh structure, in which case some of the sensor nodes act as routers, forwarding messages from one of their neighbors to another. The nodes are configured to turn their communication hardware on and off to minimize energy consumption; therefore, for two neighboring sensors to communicate, both must be in active mode. In the sensor network model considered in this paper, the nodes are placed randomly over the area of interest, and their first step is to detect their immediate neighbors, the nodes with which they have direct wireless communication, and to establish routes to the gateway. In networks with continuously heavy traffic, the sensors need not invoke any special neighbor discovery protocol during normal operation, because any new node, or a node that has lost connectivity to its neighbors, can hear its neighbors simply by listening to the channel for a short time. However, for sensor networks with low and irregular traffic, a special neighbor discovery scheme should be used.
Full Paper

IJCST/33/2/A-916
   65 Improving Efficiency and Privacy of Moving Objects

D. Anil Kumar, M. RajaBabu

Abstract

Object databases have gained much interest due to advances in mobile communications and positioning technologies. Providing security for moving objects requires monitoring, and monitoring moving objects raises two fundamental issues: efficiency and privacy. This paper proposes an efficient algorithm for finding a good anonymization group for a given moving object (MOB) with respect to its quasi-identifier (QID), together with dynamic query updating. To address the efficiency and privacy issues, we use a new technique that updates the query of the MOB in the base stations automatically. Furthermore, by using the location management method, we solve the issues of location updating.
Full Paper

IJCST/33/2/A-917
   66 Implementation of Skyline Sweeping Algorithm

B. R. Ambedkar Kota, G. Jayaraju

Abstract

Searching for keywords in databases is a more complex task than searching in files. Information Retrieval (IR) searches for keywords in text files, and it is very important to support keyword queries over relational databases. Generally, to retrieve data from a relational database, Structured Query Language (SQL) is used to find relevant records. There is a natural demand for relational databases to support effective and efficient IR-style keyword queries. This paper describes the problem of supporting effective and efficient top-k keyword search in relational databases and describes a framework that takes keywords and k as inputs and generates the top-k relevant records. The results of the implemented system with the Skyline Sweeping (SS) algorithm show that it is an effective and efficient style of keyword search.
Full Paper

IJCST/33/2/
A-918
   67 Data Preprocessing Using Non Negative Matrix Factorization for Multimedia Mining

K. V. Rajesh, N. V. Satya Narayana

Abstract

Matrix factorization techniques have been frequently applied in information retrieval, computer vision and pattern recognition. Among them, Non-negative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts-based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One hopes then to find a compact representation which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. This paper presents a novel algorithm, called Graph Regularized Non-negative Matrix Factorization for Multimedia Mining (GNMFMM). In GNMFMM, an affinity graph is constructed to encode the multimedia image data, and a matrix factorization is sought which respects the graph structure.
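A minimal NMF sketch using the classical Lee-Seung multiplicative updates, which graph-regularized variants such as GNMFMM extend with an extra graph term (omitted here):

```python
# Plain NMF via multiplicative updates: factor a non-negative matrix V
# (m x n) into W (m x r) @ H (r x n), both non-negative. The graph
# regularization term of GNMFMM is NOT included in this sketch.
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update of H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update of W
    return W, H

V = np.random.default_rng(1).random((6, 5))    # toy non-negative data
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form guarantees that W and H stay non-negative and that the reconstruction error is non-increasing at every step.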
Full Paper

IJCST/33/2/
A-919
   68 Randomized Unit Testing Using Feature Subset Selection

G. Rajesh, M. Nagaraju

Abstract

Software testing is one of the improvement phases of the software development life cycle, and it consumes 40-60% of development cost. Research on testing mainly focuses on how to minimize the cost of testing without sacrificing software quality. Randomized unit testing uses randomization for some aspects of test input data selection. Unit testing is variously defined as testing of a single method, several methods, or a class. Feature subset selection is used to assess the size and content of the representations within the algorithm. Genetic algorithms are used to set the parameters of testing in such a way that full coverage is achieved through randomization. This paper describes a system that uses feature subset selection to find the parameters for randomized unit testing that optimize test coverage. The results show that randomized unit testing using FSS can yield significant optimization.
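As a hedged illustration of the idea only (the fitness function and the two parameters below are toy stand-ins of our own, not the paper's system), a tiny genetic algorithm tuning test parameters might look like:

```python
# Toy genetic algorithm searching over two randomized-testing parameters
# (e.g., number of calls, integer value range). The "coverage" fitness
# is a synthetic stand-in that peaks when both parameters are near 50.
import random

random.seed(0)

def fitness(params):
    n_calls, max_int = params
    return -abs(n_calls - 50) - abs(max_int - 50)

def evolve(pop_size=20, generations=40):
    pop = [(random.randint(0, 100), random.randint(0, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = (a[0], b[1])                # one-point crossover
            if random.random() < 0.2:           # small mutation
                child = (child[0] + random.randint(-5, 5), child[1])
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Real systems in this vein evaluate fitness by actually running the randomized test generator and measuring achieved coverage, which is far more expensive than this toy function.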
Full Paper

IJCST/33/2/
A-920
   69 Optimizing Design Cost of System By using Dynamic Programming

P. S. S. K. Sarma, N. V. Satya Narayana

Abstract

Search-based software engineering is an emerging discipline that optimizes the cost of system design by using algorithmic search techniques. An example is the design of artificial intelligence systems that assist crime investigation, which must use a minimal amount of computing hardware to reduce weight and cost while supporting the training and recognition tasks running on board. Hardware and software (system) design is a process which identifies a software and hardware knapsack. Dynamic programming is a problem-solving technique which solves this design-cost optimization. This paper shows how the cost-constrained problem can be modeled as a set of two-dimensional knapsack problems, and provides a framework and algorithm to optimize the design cost of a system. Experimental results show that the approach reaches the maximum optimization solution value.
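The underlying building block is the classic 0/1 knapsack dynamic program; a one-dimensional sketch is shown below (the two-dimensional variant the paper models simply adds a second capacity dimension to the table):

```python
# Classic 0/1 knapsack by dynamic programming: choose a subset of
# components, each with a value and a cost, maximizing total value
# within a cost budget.

def knapsack(values, weights, capacity):
    """Return the maximum total value achievable within the capacity."""
    best = [0] * (capacity + 1)       # best[c] = max value with budget c
    for v, w in zip(values, weights):
        # iterate budgets backwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# e.g. three components with (value, cost), total budget 5
print(knapsack([60, 100, 120], [1, 2, 3], 5))   # → 220
```

The two-dimensional version replaces the 1-D `best` array with a 2-D table indexed by both budgets (e.g., monetary cost and weight), with the same backwards-iteration structure.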
Full Paper

IJCST/33/2/
A-921
   70 Removed Paper Due to Copyright Issue
   71 Actionable Knowledge Discovery using MSCAM
K. Surekha, P. Satya Narayana

Abstract

Actionable Association Rule mining (AAR) is a closed optimization problem-solving process, from problem definition and model design to actionable pattern discovery, and is designed to deliver apt business rules that can be integrated with business processes and technical aspects. To support such processes, we propose, formalize and illustrate a generic AAR model design: Multisource Combined-Mining-based AAR (MSCM-AAR). In this paper, we present a view of actionable association rules from the technical and decision-making perspectives. A real-life case study of MSCM-based AAR is demonstrated to extract debt-prevention patterns from social security data. Substantial experiments show that the proposed model design is sufficiently general, flexible and practical to tackle many complex problems and applications by extracting actionable deliverables for instant decision making.
Full Paper

IJCST/33/2/
A-923
   72 Quality High Performance Over Infrastructure Cloud Using Snapshots of Resources

B. Kalyan Chandra, D. Srinivas

Abstract

A key advantage of Infrastructure-as-a-Service (IaaS) clouds is providing users on-demand access to resources. However, to provide on-demand access, cloud providers must either significantly overprovision their infrastructure (and pay a high price for operating resources with low utilization) or reject a large proportion of user requests (in which case the access is no longer on-demand). At the same time, not all users require truly on-demand access to resources. Many applications and workflows are designed for recoverable systems where interruptions in service are expected. For instance, many scientists utilize High Throughput Computing (HTC)-enabled resources, such as Condor, where jobs are dispatched to available resources and terminated when the resource is no longer available. We propose a cloud infrastructure that combines on-demand allocation of resources with snapshots of clouds using virtual machines. This mechanism increases the throughput of on-demand accesses across the entire network, and the performance of the system automatically increases by utilizing the snapshots of clouds. We demonstrate that an infrastructure shared between IaaS cloud providers and an HTC job management system can be highly beneficial to both the IaaS cloud provider and HTC users, by increasing the utilization of the cloud infrastructure and contributing otherwise-idle cycles to processing HTC jobs.
Full Paper

IJCST/33/2/
A-924
   73 Multimodal Systems Used In Novel Approaches for Biometric Systems

R. V. Chandra Sekhar, D. Srinivas

Abstract

A user's uniqueness is prevalent not only in his/her biometric traits, but also in the way he/she interacts with a biometric device. A recent trend in tailoring a biometric system to each user (client) is to normalize the match score for each claimed identity. This technique is called user- (or client-) specific score normalization. The concept extends naturally to multimodal biometrics, where several biometric devices and/or traits are involved. This paper surveys user-specific score normalization, compares several representative techniques in this research direction, and shows how the technique can be used to design an effective user-specific fusion classifier. We deal with some core issues related to the design of these systems and propose a novel modular framework, namely Novel Approaches for Biometric Systems (NABS), that we have implemented to address them. The NABS proposal encompasses two possible architectures based on the comparative speeds of the involved biometrics. It also provides a novel solution to the data normalization problem, with the new quasi-linear sigmoid (QLS) normalization function. This function can overcome a number of common limitations, according to the presented experimental comparisons. A further contribution is the system response reliability (SRR) index to measure response confidence. The unified experimental setting evaluates these aspects both separately and together, using face, ear, and fingerprint as test biometrics. The results provide positive feedback for the overall theoretical framework developed herein. Since NABS is designed to allow both a flexible choice of the adopted architecture and variable composition and/or substitution of its optional modules, i.e., QLS and SRR, it can support different operational settings.
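For orientation, two common score-normalization functions from the biometric fusion literature are sketched below; the paper's QLS function itself is not reproduced here, and the matcher scores are invented:

```python
# Two standard score-normalization functions used before fusing matcher
# scores from different biometric modalities. (The paper's quasi-linear
# sigmoid, QLS, is a refinement not shown in this sketch.)
import math

def min_max(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def logistic(score, mu, sigma):
    """Sigmoid normalization centred on mu with slope 1/sigma."""
    return 1.0 / (1.0 + math.exp(-(score - mu) / sigma))

def fuse(norm_scores, weights):
    """Weighted-sum fusion of normalized per-modality scores."""
    return sum(w * s for w, s in zip(weights, norm_scores))

face = logistic(72, mu=50, sigma=10)      # hypothetical face score
finger = min_max(0.8, lo=0.0, hi=1.0)     # hypothetical fingerprint score
print(fuse([face, finger], [0.5, 0.5]))
```

User-specific normalization replaces the global parameters (`lo`, `hi`, `mu`, `sigma`) with per-claimed-identity estimates.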
Full Paper

IJCST/33/2/
A-925
   74 Efficient and Secure Multi-Keyword Search on Encrypted Cloud Data

Y. Prasanna, Ramesh

Abstract

As Cloud Computing becomes prevalent, sensitive information is being increasingly centralized into the cloud. For the protection of data privacy, sensitive data has to be encrypted before outsourcing, which makes effective data utilization a very challenging task. Although traditional searchable encryption schemes allow users to securely search over encrypted data through keywords, these techniques support only Boolean search, without capturing any relevance of the data files. This approach suffers from two main drawbacks when directly applied in the context of Cloud Computing. On the one hand, users, who do not necessarily have pre-knowledge of the encrypted cloud data, have to post-process every retrieved file in order to find the ones most matching their interest; on the other hand, invariably retrieving all files containing the queried keyword incurs unnecessary network traffic, which is absolutely undesirable in today’s pay-as-you-use cloud paradigm. In this paper, we define and solve the problem of effective yet secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by returning the matching files in a ranked order with regard to certain relevance criteria (e.g., keyword frequency), thus moving one step closer towards practical deployment of privacy-preserving data hosting services in Cloud Computing. For the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted cloud data (MRSE), and establish a set of strict privacy requirements for such a secure cloud data utilization system to become a reality. Among various multi-keyword semantics, we choose the efficient principle of “coordinate matching”, i.e., as many matches as possible, to capture the similarity between the search query and data documents, and further use “inner product similarity” to quantitatively formalize this principle for similarity measurement.
We first propose a basic MRSE scheme using secure inner product computation, and then significantly improve it to meet different privacy requirements in two levels of threat models. A thorough analysis of the privacy and efficiency guarantees of the proposed schemes is given, and experiments on a real-world dataset further show that the proposed schemes indeed introduce low overhead in computation and communication.
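The "coordinate matching via inner product" principle can be shown in a few lines (plaintext sketch only; the actual MRSE scheme encrypts these vectors with secret matrices before any computation takes place):

```python
# Coordinate matching via inner product: documents and queries are 0/1
# vectors over the keyword dictionary, and the inner product counts how
# many query keywords a document contains. The dictionary and documents
# here are illustrative.

def to_vector(keywords, dictionary):
    return [1 if w in keywords else 0 for w in dictionary]

def inner_product(u, v):
    return sum(a * b for a, b in zip(u, v))

dictionary = ["cloud", "secure", "search", "rank", "index"]
doc   = to_vector({"cloud", "search", "index"}, dictionary)
query = to_vector({"secure", "search", "cloud"}, dictionary)
print(inner_product(doc, query))   # → 2  (matches "cloud" and "search")
```

Documents are then ranked by this count; the cryptographic contribution of MRSE is computing the same inner products over encrypted vectors without revealing them.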
Full Paper

IJCST/33/2/
A-926
   75 Social Dimensions Based On Scalable Learning of Collective Behavior

Y. Rama Krishna, Ravi Kumar

Abstract

The study of collective behavior seeks to understand how individuals behave in a social network environment. Oceans of data generated by social media like Facebook, Twitter, Flickr and YouTube present opportunities and challenges for studying collective behavior at a large scale. In this work, we aim to learn to predict collective behavior in social media: given information about some individuals, how can we infer the behavior of unobserved individuals in the same network? A social-dimension-based approach is adopted to address the heterogeneity of connections present in social media. However, the networks in social media are normally of colossal size, involving hundreds of thousands or even millions of actors, and this scale entails scalable learning of models for collective behavior prediction. In the initial study, modularity maximization was exploited to extract social dimensions, but with a huge number of actors the dimensions cannot even be held in memory. To address the scalability issue, we propose an effective edge-centric clustering scheme to extract sparse social dimensions. With sparse social dimensions, the social-dimension-based approach can efficiently handle networks of millions of actors while demonstrating prediction performance comparable to other, non-scalable methods.
Full Paper

IJCST/33/2/
A-927
   76 Automatic Suggestion of Kernel Data Structure Invariants

P. Sai Vijay, M. Chinna Rao

Abstract

In monolithic operating systems, the kernel is the piece of code that executes with the highest privileges and has control over all the software running on a host. A successful attack against an operating system’s kernel means a total and complete compromise of the running system. These attacks usually end with the installation of a rootkit, a stealthy piece of software running with kernel privileges. When a rootkit is present, no guarantees can be made about the correctness, privacy or isolation of the operating system. Kernel-level rootkits affect system security by modifying key kernel data structures to achieve a variety of malicious goals. While early rootkits modified control data structures, such as the system call table and values of function pointers, recent work has demonstrated rootkits that maliciously modify non-control data. Prior techniques for rootkit detection fail to identify such rootkits because they focus solely on detecting control data modifications. This paper presents a novel rootkit detection technique that automatically detects rootkits that modify both control and non-control data. The key idea is to externally observe the execution of the kernel during a training period and hypothesize invariants on kernel data structures. These invariants are used as specifications of data structure integrity during an enforcement phase; violation of these invariants indicates the presence of a rootkit. We present the design and implementation of Gibraltar, a tool that uses this approach to infer and enforce invariants.
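The invariant-inference idea can be sketched in a Daikon-like toy form (the field names and the two invariant templates below are illustrative, not Gibraltar's actual templates):

```python
# Toy invariant inference over observed snapshots of a kernel data
# structure: propose "field is constant" and "field in observed set"
# invariants from a training trace, then flag violations at enforcement.

def infer_invariants(snapshots):
    """snapshots: list of dicts mapping field name -> observed value."""
    inv = {}
    for f in snapshots[0]:
        seen = {s[f] for s in snapshots}
        inv[f] = ("const", seen.pop()) if len(seen) == 1 else ("in", seen)
    return inv

def violates(inv, snapshot):
    """Return the fields of a snapshot that break an inferred invariant."""
    bad = []
    for f, (kind, val) in inv.items():
        if kind == "const" and snapshot[f] != val:
            bad.append(f)
        elif kind == "in" and snapshot[f] not in val:
            bad.append(f)
    return bad

train = [{"syscall_table": 0xc0100000, "uid": 1000},
         {"syscall_table": 0xc0100000, "uid": 0}]
inv = infer_invariants(train)
print(violates(inv, {"syscall_table": 0xdeadbeef, "uid": 0}))  # → ['syscall_table']
```

The real system observes raw kernel memory from outside the monitored host and supports a much richer invariant grammar, but the train-then-enforce structure is the same.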
Full Paper

IJCST/33/2/
A-928
   77 Analysis of Neighboring Nodes in Wireless Sensor Asynchronous Network

G. Navyatha, T. Rajesh

Abstract

Neighbor discovery is one of the first steps in configuring and managing a wireless network. Most existing studies on neighbor discovery assume a single-packet reception model where only a single packet can be received successfully at a receiver. We expose a new problem in wireless sensor networks, referred to as ongoing continuous neighbor discovery, and argue that continuous neighbor discovery is crucial even if the sensor nodes are static. If the nodes in a connected segment work together on this task, hidden nodes are guaranteed to be detected within a certain probability P and a certain time period T, with reduced energy expended on the detection. We show that our scheme works well if every node connected to a segment estimates the in-segment degree of its possible hidden neighbors. We then present a continuous neighbor discovery algorithm that determines the frequency with which every node enters the HELLO period. We simulated a sensor network to analyze our algorithms and showed that our scheme performs well when the hidden nodes are uniformly distributed in the area. Each sensor employs a simple protocol in a coordinated effort to reduce power consumption without increasing the time required to detect hidden sensors.
Full Paper

IJCST/33/2/
A-929
   78 An Efficient Method for Projection-Based Image Deblurring

E. Sambasiva Rao, S. M. Afroj

Abstract

Many problems in image processing involve image deblurring. From its origination to its broad application in an enormous number of areas today, deblurring approaches have evolved with time and forked into many different and fascinating branches. In this paper, we propose a method for reducing the out-of-focus blur caused by projector projection. We estimate the Point Spread Function (PSF) of the image projected onto the screen by using a camera that captures the projector screen. According to the estimated PSF, the original image is pre-corrected so that the projected image is deblurred. The proposed system can successfully reduce the effects of out-of-focus projection blur, even when the screen image includes spatially varying blur, without projecting a feature image.
Full Paper

IJCST/33/2/
A-930
   79 Alert Verification-Determining the Success of Online Intrusion

N. Sravani, M. Anuraghamayi

Abstract

A Network Intrusion Detection System (NIDS) [8] tries to detect malicious activity such as denial-of-service attacks, port scans, or attempts to crack into computers by monitoring network traffic. A mainstream task in network intrusion detection is alert aggregation: clustering the different alerts produced by low-level intrusion detection systems, firewalls, etc. that belong to a specific attack instance initiated by an attacker at a certain point in time. Meta-alerts can then be generated for the clusters, containing all the relevant information while reducing the amount of data (i.e., alerts) substantially. Meta-alerts may then be the basis for reporting to security experts or for communication within a distributed intrusion detection system. We propose a novel technique for online alert aggregation which is based on a dynamic, probabilistic model of the current attack situation. Basically, it can be regarded as a data-stream version of a maximum likelihood approach for the estimation of the model parameters. Meta-alerts are generated with a delay of typically only a few seconds after observing the first alert belonging to a new attack instance.
Full Paper

IJCST/33/2/
A-931
   80 A New Framework for Data Processing in Cloud

C Subash Chandra, M Jyothi

Abstract

Cloud Computing is gaining acceptance in many IT organizations as an elastic, flexible and variable-cost way to deploy their service platforms using outsourced resources. Major Cloud computing companies have started to integrate frameworks for parallel data processing in their product portfolios, making it easy for customers to access these services and to deploy their programs. However, the processing frameworks currently used have been designed for static, homogeneous cluster setups and disregard the particular nature of a cloud. In this paper, we discuss the opportunities and challenges for efficient parallel data processing in clouds and present our research project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today’s IaaS clouds for both task scheduling and execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during the job execution. Based on this new framework, we perform extended evaluations of MapReduce-inspired processing jobs on an IaaS cloud system and compare the results to the popular data processing framework Hadoop.
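The canonical MapReduce-style job used in such comparisons is word count; a framework-free sketch of its two phases is:

```python
# Minimal MapReduce-style word count without any framework: the map phase
# emits (word, 1) pairs, the reduce phase sums counts per word.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

print(reduce_phase(map_phase(["a b a", "b a"])))   # → {'a': 3, 'b': 2}
```

In Nephele or Hadoop the same two phases run distributed, with a shuffle stage grouping pairs by key between them; Nephele's distinguishing feature is choosing VM types per task and releasing them when the task completes.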
Full Paper

IJCST/33/2/
A-932
   81 Efficient Allocation of Traffic Across Multiple-Path Routing Using Portfolio Selection

M. Rambabu, D. Koteswara Rao

Abstract

This project deals with the allocation of traffic across multiple routing paths and with estimating end-to-end packet success rates. It introduces a statistical characterization into the maximum network flow problem to compensate for the reduction in network flow due to the loss of jammed packets. The problem of throughput optimization under probabilistic jamming is mapped to that of optimal investment portfolio selection. It is shown that in multisource networks this centralized optimization problem can be solved using a distributed algorithm based on decomposition in network utility maximization (NUM). The network’s ability to estimate the impact of jamming and to incorporate these estimates into the traffic allocation problem is formulated. Finally, the achievable throughput using our proposed traffic allocation method is simulated in several scenarios.
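A portfolio-flavored traffic split can be sketched as follows (a simple projected-gradient toy of our own construction, not the paper's NUM decomposition; the success-rate statistics are invented):

```python
# Portfolio-style traffic allocation: treat each routing path like an
# asset with an expected packet-success rate (return) and a variance
# (risk), then split traffic to maximize expected throughput minus a
# risk penalty, subject to the split summing to 1.

def allocate(success_mean, success_var, risk_aversion=1.0,
             steps=2000, lr=0.01):
    n = len(success_mean)
    x = [1.0 / n] * n                      # start with an even split
    for _ in range(steps):
        # gradient of sum_i (mu_i * x_i - k * var_i * x_i^2)
        g = [success_mean[i] - 2 * risk_aversion * success_var[i] * x[i]
             for i in range(n)]
        x = [max(0.0, x[i] + lr * g[i]) for i in range(n)]
        total = sum(x) or 1.0
        x = [xi / total for xi in x]       # renormalize onto the simplex
    return x

# path A: reliable; path B: same mean rate but heavily jammed (high variance)
print(allocate([0.9, 0.9], [0.01, 0.5]))
```

As in mean-variance portfolio theory, the jammed path receives less traffic even though its mean success rate is identical, because its variance makes it riskier.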
Full Paper

IJCST/33/2/
A-933
   82 An Efficient and Robust Digital Image Watermarking based on MDKP

G. Mallikharjuna Rao, S. M. Afroj

Abstract

The growth of digitization of data creates a need to protect digital content against counterfeiting, piracy and malicious manipulation. Digital watermarking is one method of protecting multimedia data. Robustness is a critical requirement for a watermarking scheme to be practical; in particular, to resist geometric distortions a common approach is to locally insert multiple redundant watermarks in the hope that partial watermarks can still be detected. In this paper, we propose a robust digital watermarking method which has strong robustness against various attacks and preserves image quality after embedding a watermark into an image. The image is divided into blocks, each block is processed using the Multi-Dimensional Knapsack Problem (MDKP), and the result is converted back to the spatial domain. The proposed scheme exhibits better performance in robust digital watermarking.
Full Paper

IJCST/33/2/
A-934
   83 Ranking Snippet Results Using Lexical Pattern Based Web Databases

Ch. Gangadhar, Y. KumarSekhar

Abstract

Extracting the aliases of an entity is important for various tasks such as identification of relations among entities, web search, and entity disambiguation. To extract relations among entities properly, one must first identify those entities. We propose a novel approach to find aliases of a given name using automatically extracted lexical patterns. We exploit a set of known names and their aliases as training data and extract lexical patterns that convey information related to aliases of names from text snippets returned by a web search engine. The patterns are then used to find candidate aliases of a given name. We use anchor texts to design a word co-occurrence model and use it to define various ranking scores that measure the association between a name and a candidate alias. The ranking scores are integrated with page-count-based association measures using support vector machines to produce a robust alias detection method. The proposed method outperforms numerous baselines and previous work on alias extraction on a dataset of personal names, achieving a statistically significant mean reciprocal rank of 0.6718. Experiments carried out using a dataset of location names and Japanese personal names suggest the possibility of extending the proposed method to extract aliases for different types of named entities and for other languages. Moreover, the aliases extracted using the proposed method improve recall by 20% in a relation detection task.
Full Paper

IJCST/33/2/
A-935
   84 Spatial Neighbourhood Features for Ranking Objects

K. Maneendra Varma, D. Srinivas

Abstract

In this paper, we formally define spatial preference queries and propose appropriate indexing techniques and search algorithms for them. Extensive evaluation of our methods on both real and synthetic data reveals that an optimized branch-and-bound solution is efficient and robust with respect to different parameters. A spatial preference query ranks objects based on the qualities of features in their spatial neighbourhood. For example, using a real estate agency database of flats for lease, a customer may want to rank the flats with respect to the appropriateness of their location, defined after aggregating the qualities of other features (e.g., restaurants, cafes, hospitals, markets, etc.) within their spatial neighbourhood. Such a neighbourhood concept can be specified by the user via different functions: it can be an explicit circular region within a given distance from the flat, or, equally intuitively, higher weights can be assigned to features based on their proximity to the flat.
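A toy range-score ranking (the data and the aggregate function are illustrative, not the paper's indexed algorithms) conveys the query semantics:

```python
# Sketch of a spatial preference query: rank each flat by aggregating the
# qualities of features (cafes, hospitals, ...) within distance r of it.
# Here the aggregate is "best quality per feature type within range,
# summed over types"; real systems index the features instead of scanning.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def range_score(flat, features, r):
    """Sum over feature types of the best quality found within radius r."""
    score = 0.0
    for ftype, items in features.items():
        in_range = [q for pos, q in items if dist(flat, pos) <= r]
        score += max(in_range, default=0.0)
    return score

features = {
    "cafe":     [((1, 1), 0.8), ((9, 9), 0.9)],
    "hospital": [((2, 0), 0.6)],
}
flats = {"A": (0, 0), "B": (8, 8)}
ranked = sorted(flats, key=lambda f: range_score(flats[f], features, r=3),
                reverse=True)
print(ranked)
```

Flat A sees both a cafe and a hospital within range and therefore ranks above flat B, which only sees a cafe; the paper's contribution is answering such queries without scoring every object.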
Full Paper

IJCST/33/2/
A-936
   85 Toward Systematic VLAN Planning of Activity Groups

Ramarao Thiruveedhi, T. Sai Durga

Abstract

Activity groups are large and complex, and their plans must be frequently altered to adapt to changing organizational needs. The process of re-planning and reconfiguring activity groups is ad hoc and error-prone, and configuration errors can cause serious issues such as group outages. A VLAN is a logical grouping of devices or users. These devices or users can be grouped by function, department, or application, regardless of their physical segment location. VLAN configuration is done at the switch via software. VLANs are not standardized and require the use of proprietary software from the switch vendor. A typical LAN is configured according to the physical infrastructure it is connecting: users are grouped based on their location relative to the hub they are plugged in to and how the cable is run to the wiring closet. The router interconnecting each shared hub typically provides segmentation and can act as a broadcast firewall; the segments created by switches do not. Our algorithms also enable automatic detection of group-wide dependencies which must be factored in when reconfiguring VLANs. We evaluate our algorithms on longitudinal snapshots of configuration files of a large-scale operational campus group obtained over a two-year period. Our results show that our algorithms can produce significantly better plans than current practice, while avoiding errors and minimizing human work. Our unique datasets also enable us to characterize VLAN-related change activities in real groups, an important contribution in its own right.
Full Paper

IJCST/33/2/
A-937
   86 Halftoning Visual Cryptography Using Secret Sharing

Kiran Kumar Kinthada, T. Rajendra Prasad

Abstract

In this paper a new visual cryptography scheme is proposed for hiding information in images by dividing a secret image into multiple shares. In order to provide more security than existing schemes, a technique called digital halftoning with error diffusion is used to improve the quality and size of the images. The secret information can be retrieved by stacking any k of the decrypted shares. The scheme reduces the color sets that render the halftone image and chooses the color whose brightness variation is minimal.
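The classical (2, 2) visual-secret-sharing step that such schemes build on can be sketched as follows (Naor-Shamir style; plain 0/1 lists stand in for the halftoned bitmaps the paper uses):

```python
# (2, 2) visual secret sharing on a binary image: each secret pixel
# expands into a random subpixel pair in share 1; share 2 uses the same
# pair for a white pixel and the complementary pair for a black pixel.
# Stacking the two transparencies acts as a pixelwise OR, so black
# pixels become fully black while white pixels stay half-white.
import random

def make_shares(secret):
    """secret: 2-D list of 0 (white) / 1 (black). Returns two shares."""
    s1, s2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pix in row:
            pattern = random.choice([(0, 1), (1, 0)])  # random subpixel pair
            r1.extend(pattern)
            r2.extend(pattern if pix == 0
                      else (1 - pattern[0], 1 - pattern[1]))
        s1.append(r1)
        s2.append(r2)
    return s1, s2

def stack(s1, s2):
    """Overlaying transparencies = pixelwise OR of the shares."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0], [0, 1]]
a, b = make_shares(secret)
overlay = stack(a, b)
```

Each share on its own is uniformly random and reveals nothing; error-diffusion halftoning, as in the paper, lets the same trick embed shares inside natural-looking grayscale images.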
Full Paper

IJCST/33/2/
A-938
   87 Efficiently Detecting and Solving the Wireless Network Attacks-Jammers

Nandamala Suneel Chowdary, Raju Gumpula

Abstract

Organizations today are increasing their dependence on wireless networks in order to operate and maintain a cost-effective, competitive advantage. Wireless networks offer organizations mobility, allowing their users to physically move about whilst maintaining a connection to the organization’s wireless network. There is also a cost saving when compared with the traditional installation of a wired network. However, organizations need to prevent their networks and systems from being exposed to wireless attacks. Wireless networks can be very vulnerable to DoS attacks, with results ranging from degradation of the wireless network to a complete loss of its availability within the organization. In this paper we record some of the most significant attacks that can be launched by a jammer, and reference the best-known work by security experts on detecting and preventing such scenarios. Every anti-jamming system should be accompanied by an efficient detection mechanism, and detection can be done at two levels: a first subsystem can operate at the PHY layer and use consistency checks, which have been shown to work very well [5-6]; a second subsystem can then detect possible malicious activity at the MAC layer (detection of intelligent jammers). Additionally, we can enhance the security of the network by inserting unpredictability into the size and timing of essential control packets, and by making some simple modifications to the MAC protocol. Finally, we can integrate jamming-avoidance techniques into the system, such as mobility, channel surfing, or spread-spectrum techniques.
Full Paper

IJCST/33/2/
A-939
   88 This Paper is Removed due to Technical Issue IJCST/33/2/
A-940
   89 Decoding Turbo and Convolutional FEC Codes Using the BP Algorithm in LTE and WiMAX Systems

D. Uma Devi, G. Pavani

Abstract

Wireless communication systems such as IS-54, Enhanced Data rates for GSM Evolution (EDGE), Worldwide Interoperability for Microwave Access (WiMAX) and Long Term Evolution (LTE) have adopted low-density parity-check (LDPC), tail-biting convolutional, and turbo codes as the forward error correcting (FEC) scheme for data and overhead channels. Many efficient algorithms have therefore been proposed for decoding these codes. However, the different decoding approaches for these families of codes usually lead to different hardware architectures. Since the codes work side by side in these new wireless systems, it is attractive to introduce a universal decoder to handle both families. The present work exploits the parity-check matrix (H) representation of tail-biting convolutional and turbo codes, thus enabling decoding via a unified belief propagation (BP) algorithm. Indeed, the BP algorithm provides a highly effective general methodology for devising low-complexity iterative decoding algorithms for all classes of convolutional codes as well as turbo codes. While a small performance loss is observed when decoding turbo codes with BP instead of MAP, this is offset by the lower complexity of the BP algorithm and the inherent advantage of a unified decoding architecture.
Full Paper

IJCST/33/2/
A-941