
INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST) - VOL IV ISSUE III, VER. 3, JULY TO SEPT 2013


  S.No. Research Topic Paper ID
    72 Analytical Description of Operational Transconductance Amplifier

Arvind Singh Rawat, Sudhir Jugran, Krishna Chandra Mishra

Abstract

In this paper an analytical description of operational transconductance amplifiers (OTAs) is given, and the basic properties of the OTA are discussed. Voltage-controlled amplifiers, filters, and impedances are presented. Designs with controllable voltage gain, and first-order and second-order active filters with controllable significant frequencies, are described. Systematic design requirements for the appropriate sections of a versatile family of voltage-controlled filters are set out. The total number of components used in each circuit is small, and the design equations and voltage-control capability are attractive features. The limitations and practical considerations of OTA-based filters built with commercially available bipolar OTAs are discussed. Applications of OTAs in continuous-time monolithic filters are also considered.
Full Paper

IJCST/43/3/C-1664
    73 Robot Control System Based Artificial Neural Network

S.Senthil Kumar, Dr. K.Ananda Kumar

Abstract

This technical paper provides a basic study of the field of neural networks. We discuss the various architectures used in neural networks and provide an in-depth discussion of the back propagation algorithm. Lastly, we apply the back propagation algorithm to an autonomous control system to serve as an automatic guidance system. Neural networks can thus be applied to control the motion of a robot and serve as its automatic guidance system.
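A single back-propagation update for a one-hidden-layer network can be sketched as follows (a generic illustration in Python, not the paper's controller; network shape and learning rate are assumptions):

```python
import math
import random

# Illustrative sketch only: one hidden layer, sigmoid activations,
# squared error, and a single gradient-descent step.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(w_h, w_o, x, target, lr=0.5):
    # forward pass: hidden activations, then the scalar output
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_h]
    y = sigmoid(sum(w * hi for w, hi in zip(w_o, h)))
    err = 0.5 * (y - target) ** 2
    # backward pass: output delta, then hidden-layer deltas
    d_o = (y - target) * y * (1 - y)
    d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    # gradient-descent weight updates
    w_o = [w_o[j] - lr * d_o * h[j] for j in range(len(h))]
    w_h = [[w_h[j][i] - lr * d_h[j] * x[i] for i in range(len(x))]
           for j in range(len(w_h))]
    return w_h, w_o, err
```

Repeating the step on a training sample drives the squared error down, which is the behaviour a guidance controller would be trained with.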
Full Paper

IJCST/43/3/C-1665
    74 Reliable Security in Cloud Computing Environment

A.Madhuri, T.V.Nagaraju

Abstract

Cloud computing is the newest term for the long-dreamed vision of computing as a utility. The cloud provides convenient, on-demand network access to a centralized pool of configurable computing resources that can be rapidly deployed with great efficiency and minimal management overhead. Industry leaders and customers have wide-ranging expectations for cloud computing, in which security concerns remain a major aspect. Dealing with "single cloud" providers is becoming less popular with customers due to potential problems such as service availability failure and the possibility of malicious insiders in the single cloud. In recent years, there has been a move towards "multi-clouds", "inter-cloud" or "cloud-of-clouds". The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. Our scheme achieves storage correctness insurance as well as data error localization: whenever data corruption is detected during the storage correctness verification, our scheme can almost guarantee the simultaneous localization of data errors, i.e., the identification of the misbehaving server(s).
Full Paper

IJCST/43/3/C-1666
    75 Review: Advanced Persistent Attacks (APT) the Rising Concern in the ERA of Big Data

Pankaj Kumar, Shilpa Batra, Animesh Kumar Rai, Abhishek Kumar

Abstract

The term advanced persistent threat, or APT, was first used by the U.S. Air Force back in 2006 to describe complex (advanced) cyber attacks against specific targets over long periods of time (persistent). APTs first really hit the headlines in 2010 when a worm called Stuxnet was found to be infecting supervisory control and data acquisition management systems produced by Siemens. Subsequent investigation revealed a cyber weapon designed to shut down Iran's nuclear program by tampering with programmable logic controllers used in its nuclear fuel processing plant. The sheer audacity and sophistication of this attack created hysteria among security professionals and network administrators, and has led to a great deal of confusion about what APTs are and what they can do. Research into Stuxnet and the appearance of Duqu and then Flame in 2012 have kept APTs in the spotlight, because of the complexity of the attacks and the penetration achieved by the attackers. Although our knowledge about APTs is widening, the attacks themselves are growing at a fast pace. Criminals using APTs want data, so that they can steal highly valuable information from an organisation; the vulnerability of data therefore increases, and the probability of being attacked rises with it. Government agencies and organizations in industries such as finance, energy, IT, aerospace, and chemicals and pharmaceuticals are the most likely to be victims of APT infections, as are those involved in international trade. Users and organizations with access through business relationships to valuable data, such as smaller defence contractors, are also beginning to be targeted. And the use of watering hole attacks may be heralding a change in tactics towards mass infections, which are then sifted for any potentially interesting targets. Criminals are less likely to target organizations running critical infrastructure, but attempted APT-type attacks by hacktivists and nation-states are on the increase.
Any organization running industrial control systems linked to the Internet is at risk. Administrators of some systems may be unaware that their systems are connected to the Internet, while systems installed some years ago, when cyber security was less of an issue, may not be adequately protected from attack. To protect your organization against APTs, it is important to know what an APT is and what it isn't. In this survey paper, we examine the history of the attacks in the context of what is happening today, analyze the ways in which the attacks are perpetrated, and provide recommendations for knowing when such an attack is an imminent threat for your organization.
Full Paper

IJCST/43/3/C-1667
    76 An Efficient Scheme for Message Encryption Based on Public Key Crypto System

Srihari Varma Mantena, Sasi Kumar Bunga

Abstract

Security is a sensitive problem during message transmission, attributable to the open nature of networks and the lack of morality in humans. In order to provide security, information-hiding techniques have been proposed. Obfuscating message content to confuse an attacker or offender on a network is one technique for information hiding. It involves concealing the key text inside a cheating text. Even if the cheating text is intercepted, with security algorithms the key text should remain undiscovered. In this paper, a new message encryption scheme using cheating text is proposed. The sender embeds the evident message in another plain text called the Cheating Text. The positions of the characters of the plain text in the Cheating Text are stored in an Index File (IF). This file is encrypted using the N-th degree truncated polynomial ring (NTRU) scheme and sent along with the Cheating Text. On receipt, the receiver decrypts the IF table and recovers the original message from the received cheating text. During decryption, authentication is achieved by hashing the plaintext at the sender's side using a Modified Message Digest algorithm and verifying it at the receiver's end.
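The position-recording step at the heart of the scheme can be sketched in a few lines (hypothetical helper names; the NTRU encryption of the IF and the message-digest authentication are omitted, and every message character is assumed to occur in the cheating text):

```python
# Sketch of the index-file idea: record where each message character
# appears in the cheating text; the receiver reads them back by position.
def build_index_file(message, cheating_text):
    positions, start = [], 0
    for ch in message:
        pos = cheating_text.find(ch, start)   # next occurrence of ch
        if pos == -1:
            pos = cheating_text.find(ch)      # wrap around if needed
        positions.append(pos)
        start = pos + 1
    return positions

def recover_message(positions, cheating_text):
    return "".join(cheating_text[p] for p in positions)
```

In the actual scheme only the encrypted position list would need protection in transit; the cheating text itself reveals nothing without it.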
Full Paper

IJCST/43/3/C-1668
    77 Information Retrieval (IR) and Resolving Ambiguity in XML Using Fuzzy Search

Sandeep Gour, Brajesh Patel

Abstract

In this paper, for the first time, we formalize and solve the problem of effective fuzzy keyword search over XML data. We study fuzzy keyword search in XML data, a new information access paradigm in which the system searches XML data on the fly as the user types in query keywords. Fuzzy keyword search greatly enhances system usability by returning the matching files when users' search inputs exactly match the predefined keywords, or the closest possible matching files based on keyword similarity semantics when exact match fails. This matters because of the size of XML, given its capability for representing data, and the complexity of the query languages used for querying XML. Thus the size of the XML database has to be reduced, and a user-friendly approach should be provided so that users can easily query the database without having to know the XML structure or the syntax of the query language. Hence, in this paper an XML database and a keyword-search approach for this structure are proposed to help users query the database easily.
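The keyword-similarity matching described above can be roughly illustrated with an edit-distance based fuzzy match (a generic sketch, not the authors' algorithm; the distance bound is an assumption):

```python
# Classic dynamic-programming Levenshtein distance between two strings.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def fuzzy_search(query, keywords, max_dist=1):
    # exact match first; otherwise fall back to close keywords
    if query in keywords:
        return [query]
    return [k for k in keywords if edit_distance(query, k) <= max_dist]
```

Exact matches cost nothing extra, and a mistyped query still retrieves its nearest predefined keyword.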
Full Paper

IJCST/43/3/C-1669
    78 Environment for Distributed Software Development

Aaditya Vishwakarma, Mahendra Rai

Abstract

Distributed Software Development (DSD) has recently become an active research area and is popular in the software industry. Distributed software development suffers from key problems of communication, coordination and collaboration. In order to cope with these problems, the following features will be implemented and integrated in a single environment. In this paper we propose a complete environment for distributed software development. Multiple developers can utilize this environment to work on the same project from geographically distributed locations. Lack of proper communication and coordination among team members makes various activities in distributed software development difficult, for example requirement management, testing, project planning, tracking and oversight.
Full Paper

IJCST/43/3/C-1670
    79 Comprehensive Analysis of DOS Attack Under Active Queue Management Over Manets

Rajesh Kumar, Shekhar Saini

Abstract

Each network device has to store and forward network packets if space is available in the device buffer; otherwise the packet is discarded. The default queuing technique used in MANETs is the Drop Tail queue. Efficient use of buffer space with active queue management increases MANET performance. This paper focuses on the effect of a denial of service attack on a MANET; active queue management techniques are then used to increase the performance of the MANET under attack. The scenario used is two clusters of ten nodes each with a gateway used for communication between the nodes. The DoS attack used is a TCP SYN attack, performed by one of the nodes in the scenario. Under the default Drop Tail queue, the performance of the network under attack is measured using the DSDV and OLSR protocols. By applying the active queue management technique Adaptive RED (ARED), the performance of the network under attack is increased by 20%. DSDV gives better results under attack both with and without the AQM technique ARED.
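The RED family of AQM schemes, of which ARED is a variant, drops packets with a probability that ramps up between two queue thresholds; a minimal sketch of that drop decision (illustrative parameters, not the simulated configuration) is:

```python
# Sketch of the RED (Random Early Detection) drop decision used by AQM.
def red_drop_prob(avg_q, min_th, max_th, max_p):
    if avg_q < min_th:        # queue short: never drop
        return 0.0
    if avg_q >= max_th:       # queue long: always drop
        return 1.0
    # linear ramp between the two thresholds
    return max_p * (avg_q - min_th) / (max_th - min_th)

def ewma(avg_q, sample, weight=0.002):
    # RED tracks a smoothed (exponentially weighted) average queue length
    # rather than the instantaneous one
    return (1 - weight) * avg_q + weight * sample
```

Unlike Drop Tail, which discards only when the buffer is completely full, this early probabilistic dropping keeps average queue length (and hence delay) bounded under load.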
Full Paper

IJCST/43/3/C-1671
    80 Image Deblurring Using DCT and PCA Based Fusion Technique With Bilateral Filter

Veni Maheshwari, Seema Baghla

Abstract

Image fusion is becoming one of the hottest techniques in vision applications. Many image fusion methods have been developed so far and provide good results, but it is found that the developed methods are either accurate but time consuming, or fast but poor in quality. The main objective of image fusion is to combine information from multiple images of the same scene in order to deliver only the useful information. Discrete Cosine Transform (DCT) based methods of image fusion are more suitable and time-saving in real-time systems using DCT-based still image standards. DCT-based image fusion produces results, but with lesser clarity, a lower PSNR value and a higher mean square error. Therefore the overall objective is to improve the results by integrating DCT-based fusion with PCA-based fusion; a bilateral filter is also used to remove the remaining anomalies.
Full Paper

IJCST/43/3/C-1672
    81 Pattern Association in Hopfield Neural Network With MC-Adaptation Rule and Genetic Algorithm

Manisha Uprety, Somesh Kumar

Abstract

This paper describes the performance of Hopfield neural networks using an evolutionary algorithm and the MC-adaptation learning rule. The aim is to obtain optimal weight matrices with the MC-adaptation rule and a genetic algorithm for efficient recall of approximate input patterns. The experiments consider Hopfield neural network architectures that store all objects using the Monte Carlo adaptation rule, and simulate the recall of these stored patterns on presentation of prototype input patterns using an evolutionary algorithm (genetic algorithm). Experiments show that recalling patterns with the genetic algorithm gives better results than conventional recall with the Hebbian rule.
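For reference, the conventional Hebbian baseline against which the paper compares can be sketched as follows (bipolar units, synchronous updates; the MC-adaptation rule and genetic-algorithm recall themselves are not shown):

```python
# Hebbian storage and synchronous recall in a Hopfield network with
# bipolar (+1/-1) units -- the conventional baseline, for illustration.
def train_hebbian(patterns):
    n = len(patterns[0])
    # w[i][j] = sum over stored patterns of p[i]*p[j], zero diagonal
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def recall(w, state, steps=5):
    n = len(state)
    for _ in range(steps):
        # each unit takes the sign of its weighted input
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state
```

A stored pattern is a fixed point of the update, and a slightly corrupted input relaxes back to it, which is the associative recall being optimized in the paper.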
Full Paper

IJCST/43/3/C-1673
    82 An Improved RFI Against Soft Errors By Using SIT

Laxminarayana.G, Zulekha Tabassum

Abstract

Continuous shrinking of feature sizes, increasing power density, and related trends increase the vulnerability of microprocessors to soft errors, even in terrestrial applications. The register file is one of the essential architectural components where soft errors can be particularly harmful, because errors may rapidly spread from there throughout the whole system. Thus, register files are recognized as one of the major concerns when it comes to reliability. This paper deals with the difficulty of exploiting this observation to enhance register file integrity against soft errors. For embedded systems under stringent cost constraints, where area, performance, power and reliability cannot simply be compromised, we propose a soft error mitigation technique for register files. We show that our technique can reduce the vulnerability of the register file considerably while exhibiting smaller overhead in terms of area and power consumption compared to the state of the art in register file protection.
Full Paper

IJCST/43/3/C-1674
    83 Detecting Data Outflows in Public Network Using Signatures

Ch.Satya Keerthi.N.V.L, Syed Gulam Gouse

Abstract

Data leakage is defined as the accidental or unintentional distribution of private or sensitive data to an unauthorized entity. It is a silent type of threat; for example, an employee can intentionally or accidentally leak certain sensitive information. Sensitive data of companies and organizations includes intellectual property (IP), financial information, patient information, personal credit-card data, and other information depending on the business and the industry. Hence, sensitive data that has already leaked from the enterprise and is publicly available, for example on the Internet, should be detected. This strategy is post-facto leakage detection. Traditionally, leakage detection is handled by watermarking, in which a unique code is embedded in each distributed copy. By introducing a technique beyond watermarking, we facilitate this post-facto detection: a unique embedded signature is identified from within the contents of the original document containing the sensitive data. In this paper, we present an automated, tamper-proof, low-complexity algorithm to solve data leakages. We extract embedded signatures from sensitive documents and use them in conjunction with search engines to determine whether near-duplicate versions of the document (or portions of it) are available on the Web [3]. The embedded signature is tamper-proof; even if an adversary partially modifies a document, our mechanism can detect duplicate copies. Also, if a duplicate copy is present on the Web, our system can detect such a copy with a small number of queries.
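One generic way such tamper-resistant near-duplicate detection can work (illustrative only; the paper's actual signature construction may differ) is to compare sets of overlapping character shingles, which mostly survive partial modification:

```python
# Shingle-based near-duplicate check: a partially modified copy still
# shares most of its k-character substrings with the original.
def shingle_set(text, k=5):
    # all overlapping k-character substrings; in practice these would be
    # hashed to form a compact signature
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def is_near_duplicate(doc_a, doc_b, threshold=0.5):
    return jaccard(shingle_set(doc_a), shingle_set(doc_b)) >= threshold
```

Changing one word only removes the few shingles that span it, so the similarity stays high for a tampered copy and near zero for unrelated text.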
Full Paper

IJCST/43/3/C-1675
    84 A Novel Approach for Image Search from Large Scale Image Collection

B.Raju, Sayyed Nagul Meera

Abstract

Nowadays the use of the Internet has increased to a great extent; because of this, large amounts of data are uploaded and downloaded by various Internet users, and quite a large scale of data is processed. Users can create, share and comment on media using social sharing websites like Flickr and YouTube, so once we have such huge data to handle on the Internet, we cannot always find the right information we are searching for. The user-generated data, called metadata, supports sharing and organizing multimedia content and provides useful information related to multimedia management. For this reason we propose a personalized image search engine using ontological search and hybrid re-ranking algorithms. Using these algorithms we can personalize search within a specific limit. This system is advantageous over Google in that Google returns non-personalized information, which may not match the user's desired requirements. The basic premise of our system is to combine user preferences and query-related search into user-specific topics. The proposed system consists of two parts: the first part contains a hybrid re-ranking algorithm based on ontology over images as well as text, and the second part contains ontological user profiles for personalized image search.
Full Paper

IJCST/43/3/C-1676
    85 Protected and Scalable Third Party Auditing in Cloud Computing

Siva Koteswararao D, Nagul Meera.Sayyed

Abstract

Cloud computing is the newest term for the long-dreamed vision of computing as a utility. The cloud provides convenient, on-demand network access to a centralized pool of configurable computing resources that can be rapidly deployed with great efficiency and minimal management overhead. Industry leaders and customers have wide-ranging expectations for cloud computing, in which security concerns remain a major aspect. Cloud computing moves application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges which have not been well understood. This work studies the problem of ensuring the integrity of data storage in cloud computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of a TPA relieves the client of auditing whether the data stored in the cloud is indeed intact, which can be important in achieving economies of scale for cloud computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion and deletion, is also a significant step toward practicality, since services in cloud computing are not limited to archive or backup data only. While prior work on ensuring remote data integrity often lacks support for either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works, and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design.
In particular, to achieve efficient data dynamics, we improve the existing proof-of-storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously.
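The Merkle Hash Tree underlying block tag authentication can be illustrated with a minimal root computation (a generic sketch, not the paper's exact construction, which carries additional metadata per node):

```python
import hashlib

# Minimal Merkle Hash Tree: hash each file block, then hash pairs of
# nodes level by level until a single root remains. Any modified block
# changes the root, so the root authenticates the whole block set.
def _h(data):
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [_h(b) for b in blocks]           # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

An auditor holding only the root can verify any single block with a logarithmic-size path of sibling hashes, which is what makes the construction attractive for dynamic cloud data.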
Full Paper

IJCST/43/3/C-1677
    86 Efficiently Protect the Node Capture Attacks in Wireless Sensor Network

Ravi Kumar. Kommuluri, Gnana Vardhan. Madasu

Abstract

Security has become a challenge in wireless sensor networks. The low capabilities of the devices, in terms of computational power and energy consumption, make it difficult to use traditional security protocols. Two main problems related to security protocols arise. Firstly, the overhead that security protocols introduce in messages should be reduced to a minimum; every bit the sensor sends consumes energy and, consequently, reduces the life of the device. Secondly, low computational power implies that special cryptographic algorithms suited to less powerful processors need to be used. The combination of both problems leads to a situation where new approaches or solutions to security protocols need to be considered. These new approaches basically take into account two main goals: reduce the overhead that the protocol imposes on messages, and provide reasonable protection while limiting the use of resources. The proposed scheme builds on Secure Hierarchical In-Network Aggregation [2] in order to achieve not only secure but also efficient WSN data collection over a series of aggregations. We describe a basic version of our scheme, sufficient to satisfy the stated specification.
Full Paper

IJCST/43/3/C-1678
    87 Efficiently Protect the Node Capture Attacks in Wireless Sensor Network

Koti Veera Nagayya Ande, G.Padmarao

Abstract

Cloud computing has gained a lot of hype in the current world of IT and is said to be the next big thing in the computer world after the Internet. Cloud computing is the use of the Internet for tasks performed on the computer, and it is visualized as the next-generation architecture of the IT enterprise; the 'Cloud' represents the Internet. Cloud computing is related to several technologies, and the convergence of various technologies has emerged to be called cloud computing. In comparison to conventional approaches, cloud computing moves application software and databases to large data centers, where the data and services may not be fully trustworthy. In this article, I focus on secure data storage in the cloud, an important aspect of quality of service. To ensure the correctness of users' data in the cloud, I propose an effectual and adaptable scheme with salient qualities. This scheme achieves data storage correctness, allows the authenticated user to access the data, and performs data error localization, i.e., the identification of misbehaving servers.
Full Paper

IJCST/43/3/C-1679
    88 Architecture of a Parallel Focused Crawler for Online Social Networks

Priyank Sirohi, Abhinav Goel, Niraj Singhal

Abstract

Social network sites attract more web users every day to share their views. The content on online social networking sites differs from traditional web content in many ways; one of the most important differences is the highly temporal nature of the content. Retrieving information from these sites is a great challenge. This paper presents the architecture of a focused crawler for social networking sites. The crawler works in parallel on every profile, independently of the others. Currently it is implemented for the social network site Google+. The system also avoids redundant crawling.
Full Paper

IJCST/43/3/C-1680
    89 Machine Aided Identification of Risk Factors of Cervical Cancer

D Sowjanya Latha, PV Lakshmi, Sameen Fathima

Abstract

Cervical cancer is one of the most prominent diseases among women worldwide. Early detection or prediction of the clinical and demographic factors that influence the onset of the disease is important from the perspective of public health management. Earlier studies identified Human Papilloma Virus (HPV) as the only causative organism for the onset of the disease, but later studies have identified several other factors that may influence it. Several tests, such as the Pap smear, colposcopy and biopsy, are used in diagnosing the disease. These tests do not always generate accurate results; false positives always exist, and the cost involved in these tests is also high. In this perspective we develop machine-aided algorithms for identification of risk factors among individuals who are likely to get the disease. In our study we analyzed the demographic data of prospective cervical cancer candidates using feature selection methods and the C4.5 classification algorithm for generating decision rules. We obtained an accuracy of 100 percent for the classification algorithms considered for comparison, and identified having multiple partners (MP) and a husband's extramarital affairs (Hefa) as the other factors, apart from HPV, that influence the onset of the disease. Our study shows that the procedure can be effectively utilized to guide subsequent laboratory procedures for further diagnosing the severity of the disease.
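C4.5 chooses splitting attributes by entropy-based information gain; a minimal version of that computation (generic sketch; the attribute and label names below are hypothetical, not the study's data) is:

```python
import math
from collections import Counter

# Entropy of a label list, and the information gain of splitting on one
# categorical attribute -- the attribute-selection core of ID3/C4.5.
def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # gain = H(labels) - sum_v P(attr=v) * H(labels | attr=v)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(subset) / len(labels) * entropy(subset)
                    for subset in by_value.values())
    return entropy(labels) - remainder
```

An attribute that perfectly separates the classes attains a gain equal to the full label entropy, which is how risk factors like those named above would rank highest.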
Full Paper

IJCST/43/3/C-1681
    90 Web Graphs For Analyzing Users Interest Based on Social Networking

Madhavi Aruri, Bandi Krishna, V.B.Narasimha, Ranjeeth Kumar M, Parveez

Abstract

In recent years, people increasingly perceive the web as a social medium that fosters interaction among people, sharing of experiences and knowledge, group activities, and community formation and evolution. Many online networks such as Twitter, Facebook, LinkedIn and many more have become increasingly popular. Such social networks typically contain a tremendous amount of content and linkage data which can be leveraged for analysis. Nowadays marketers and business groups make heavy use of social networking sites for business purposes. In this paper we propose recommendations for users based on generated graphs, which are formed by analyzing the things users like and share among friends, the products they purchase, the ratings and reviews they leave online, the items they search for online, and more. With the analysis of social networks, marketers can reach larger audiences with particular interests. The proposed algorithm is capable of gathering and analyzing users' interests. The generated graphs will help both customers and marketers to fulfill their needs.
Full Paper

IJCST/43/3/C-1682
    91 PIS: A Novel Image Search Framework for Photo Sharing Websites

V.S.Sumantha Bomidi, Phani Ratna Sri Redipalli

Abstract

The rapid development of various social sites enables users to share multimedia content. The metadata generated by users not only helps the large-scale community share and organize this content but also provides useful information to improve retrieval management. This paper explains the social need for multimedia and proposes a method to personalize image search considering both the user and query relevance. The method uses a ranking-based multi-correlation tensor factorization method to perform annotation prediction, and introduces user-specific topic modeling to map query relevance and user preference.
Full Paper

IJCST/43/3/C-1683
    92 SPRT: Efficient Methodology for Detecting Spamming in Networks

S. Chandrasekhar, Sri. K V Satyanarayana Murthy

Abstract

Machines controlled by crackers are among the main security threats on the web; they launch various attacks to spread malware and mount DDoS attacks. Systems taken over by crackers are called compromised systems, and those involved in spamming activities are commonly known as spam zombies. In this paper, we discuss a system called SPOT which is used for spam zombie detection based on the outgoing messages of a network. SPOT is designed using the Sequential Probability Ratio Test (SPRT), a statistical tool with bounded false positive and false negative error rates. FireCol acts as an intrusion prevention system (IPS) located at Internet service providers; the IPSs create virtual protection rings to protect selected traffic information. We also evaluate the performance of the SPOT system using an e-mail tracing method. Our experiments show that SPOT is an effective and efficient system for detecting spam zombies in a network. We also compare the performance of SPOT with another method and conclude that SPOT is more efficient than the existing one.
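The statistical core of such detection, Wald's Sequential Probability Ratio Test, can be sketched as follows (illustrative probabilities and error bounds; not the SPOT implementation itself):

```python
import math

# Wald's SPRT over Bernoulli observations (1 = outgoing message flagged
# as spam). p0/p1 are the assumed spam probabilities for a normal machine
# and a spam zombie; A and B bound the false negative/positive rates.
def sprt(observations, p0, p1, alpha=0.01, beta=0.01):
    A = math.log(beta / (1 - alpha))      # accept H0 (normal) below A
    B = math.log((1 - beta) / alpha)      # accept H1 (zombie) above B
    llr = 0.0                             # cumulative log-likelihood ratio
    for x in observations:
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= B:
            return "spam zombie"
        if llr <= A:
            return "normal"
    return "undecided"
```

The appeal of the SPRT is that it typically reaches a bounded-error decision after only a handful of observed messages, rather than a fixed-size sample.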
Full Paper

IJCST/43/3/C-1684
    93 Multiple Efficient Techniques for Preventing Jammer Attacks in Wireless Networks

Sathish Kammasathi, Sri. K V Ramana

Abstract

This paper examines the problem of selective jamming attacks in wireless networks. In this type of attack, the adversary is active only for a short period of time, selectively targeting messages of high priority that carry significant information. We describe and give several examples of the advantages of selective jamming in terms of network performance degradation. We present two case studies: a selective attack on TCP and one on routing. We demonstrate that selective jamming attacks can be implemented by performing real-time packet classification at the physical layer. To counter these attacks, we introduce three schemes that prevent real-time packet classification. We analyze the security of these methods and compare them with existing methods.
Full Paper

IJCST/43/3/C-1685
    94 A Novel Packet Buffer Architecture for Router

V.Jaya Sree, B.Durga Sri

Abstract

High-speed routers use packet buffers that must provide multiple queues, large capacity and fast access. SRAM/DRAM hierarchical buffer architectures, as suggested by some researchers, could meet these demands, but such architectures suffer either from a large SRAM requirement or from high time complexity in the memory management. A scalable, efficient, and novel distributed packet buffer architecture is proposed in this paper. The basic issues that need to be resolved to make the architecture workable are: 1) minimization of the overhead of an individual packet buffer; and 2) design of scalable packet buffers using independent buffer subsystems. We solve these problems by developing a compact buffer that reduces the SRAM size requirement by (k-1)/k. Further, we design a reasonable method of coordinating multiple subsystems with a load-balancing algorithm that maximizes overall system performance. Both theoretical analysis and experimental results demonstrate that our load-balancing algorithm and the distributed packet buffer architecture can easily scale to meet the buffering needs of high-bandwidth links and satisfy the requirements of scale and support for multiple queues.
Full Paper

IJCST/43/3/C-1686
    95 Attack Path Reconstruction for IP Trace Back

U.Glory, CH.Pavani

Abstract

These days the Internet has become popular and is applied in various fields, which gives importance to security issues and has caught people's attention. However, hackers hide their own IP addresses by pretending to be legitimate sources when they launch attacks. For this reason, researchers are concentrating on schemes to trace the source of such attacks. This paper proposes a new hybrid IP traceback system with efficient packet logging, aiming at a fixed storage requirement for each router, without refreshing the logged tracking information, and at reconstruction of the attack path. It uses a 16-bit marking field, which avoids the packet fragmentation problem. We simulate and analyze our scheme through many experiments and compare the results with existing schemes developed by other researchers in aspects such as storage requirement and accuracy.
Full Paper

IJCST/43/3/C-1687
    96 Preventing Selective Jamming Attacks by Real-Time Packet Classification

P Rao Vundamatla, R.Phani Rantna Sri

Abstract

This paper examines the problem of selective jamming attacks in wireless networks. In this type of attack, the adversary is active only for a short period of time, selectively targeting messages of high priority that carry significant information. We describe and give several examples of the advantages of selective jamming in terms of network performance degradation, and also consider mitigating selective forwarding (gray hole) attacks. We present two case studies: a selective attack on TCP and one on routing. We demonstrate that selective jamming attacks can be implemented by performing real-time packet classification at the physical layer. To counter these attacks, we introduce three schemes that prevent real-time packet classification. We analyze the security of these methods and compare them with existing methods.
Full Paper

IJCST/43/3/C-1688
    97 Efficient Search Technique for Sensitive Metric Data in Cloud

Kopparthi.Harika, M.Narasimha Raju

Abstract

This paper discusses a new cloud computing method in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users send a query to the server for the most similar data; because of the high sensitivity of the data, privacy is an important issue. Ranked search enhances the system by ranking search results by relevance and ensuring secure retrieval. It provides a secure searchable index and develops a one-to-many order-preserving mapping technique to protect this sensitive information. This paper presents techniques that transform the data before it is supplied to the service provider. The technique provides a good trade-off between query cost and accuracy, and offers more security than existing approaches.
Full Paper

IJCST/43/3/C-1689
    98 Online Password Guessing Attacks Detection and Resistance Protocol

P. Mani kantha, V. Sangeeetha Viswanadham

Abstract

Attacks on passwords in remote login services are increasing day by day. Providing convenient login for legitimate users while preventing such attacks has become a challenging problem. Automated Turing Tests (ATTs) continue to be a successful method for identifying malicious login attempts. In this paper, we discuss the inadequacy of existing approaches and propose a new Password Guessing Resistant Protocol (PGRP) to restrict such attacks. PGRP limits the total number of login attempts from unknown users. We analyze the performance of PGRP on two real-world data sets and find it more promising than existing proposals.
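A PGRP-style decision rule can be sketched in a few lines. The thresholds, counter keys, and function names below are illustrative assumptions, not the paper's exact protocol: the idea is only that known machines get a few ATT-free failed attempts while unknown sources are challenged almost immediately.

```python
# Minimal sketch of a PGRP-style login gate (parameters are assumed,
# not the paper's): known machines get K_KNOWN free failed attempts,
# unknown sources must solve an ATT (e.g. a CAPTCHA) after K_UNKNOWN.
from collections import defaultdict

K_KNOWN = 3      # failed attempts tolerated from a known machine (assumed)
K_UNKNOWN = 0    # failed attempts tolerated from an unknown source (assumed)

failed = defaultdict(int)   # failure counter per (source, username)

def requires_att(source: str, username: str, known_machine: bool) -> bool:
    """Return True if the login attempt must first pass an ATT."""
    limit = K_KNOWN if known_machine else K_UNKNOWN
    return failed[(source, username)] > limit

def record_failure(source: str, username: str) -> None:
    failed[(source, username)] += 1

# an unknown source gets one ATT-free attempt, then is challenged
assert not requires_att("10.0.0.9", "alice", known_machine=False)
record_failure("10.0.0.9", "alice")
```

Legitimate users rarely fail more than a couple of times, so they almost never see an ATT; a guessing bot exhausts its budget after the first failure.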
Full Paper

IJCST/43/3/C-1690
    99 Full-Text Retrieval in DHT-Based P2P Networks Using Potency Bloom Filters

Shaik Juber, Endela Rameshreddy

Abstract

Peer-to-peer full-text retrieval requires distributed intersection/union operations across wide-area networks, incurring a large traffic cost. Existing schemes commonly utilize Bloom Filter (BF) encoding to reduce the traffic cost of these intersection/union operations. In this paper, we address the problem of the Potency settings of a BF. We show, through mathematical proof, that the Potency setting of a BF in terms of traffic cost is determined by the statistical information of the involved inverted lists, not by the minimized false-positive rate as claimed by previous studies. Through numerical analysis, we demonstrate how to obtain Potency settings. To evaluate the performance of this design, we conduct comprehensive simulations on the TREC WT10G test collection and the query logs of a major commercial web search engine. Results show that our design significantly reduces search traffic and latency compared with existing approaches.
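The abstract's central claim, that the traffic-optimal BF setting differs from the false-positive-optimal one, can be checked numerically with a toy cost model. The list sizes, per-identifier cost, and the cost formula below are illustrative assumptions, not the paper's model.

```python
# Toy numerical check (assumed cost model, not the paper's): the BF size
# that minimizes total traffic is interior, while the false-positive rate
# alone is minimized by making the filter as large as possible.
import math

def false_positive_rate(m_bits: int, n_items: int, k: int) -> float:
    # standard BF false-positive approximation: (1 - e^{-kn/m})^k
    return (1.0 - math.exp(-k * n_items / m_bits)) ** k

def traffic_cost(m_bits: int, n_a: int, n_b: int, k: int = 4) -> float:
    # cost = shipping the BF of list A (m bits) plus the false-positive
    # fraction of list B sent back needlessly (32 bits per identifier)
    return m_bits + false_positive_rate(m_bits, n_a, k) * n_b * 32

costs = {m: traffic_cost(m, 1000, 5000) for m in range(1000, 40001, 1000)}
m_star = min(costs, key=costs.get)
# m_star is strictly interior: a larger filter keeps lowering the false-
# positive rate, but eventually costs more to transmit than it saves
```

This mirrors the abstract's point: the list statistics (here n_a and n_b) decide the optimum, not the false-positive rate in isolation.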
Full Paper

IJCST/43/3/C-1691
    100 Risk-Aware Response Mechanism Using D-S Evidence Theory

N.Ashok, K.Vasanth Kumar, P.Suresh Babu

Abstract

Mobile Ad hoc Networks (MANETs) have a dynamic network infrastructure and are vulnerable to many types of attacks. Among these, routing attacks receive particular attention because they can change the network topology itself and cause severe damage to a MANET. Such attacks also damage the network infrastructure and introduce uncertainty into the detection of routing attacks. In this paper, we propose an adaptive risk-aware response mechanism for MANETs to identify routing attacks and malicious nodes. Our risk-aware approach is based on an extended Dempster-Shafer mathematical theory of evidence that introduces a notion of importance factors. Our experiments demonstrate the effectiveness of the approach across several performance metrics.
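The Dempster-Shafer machinery the abstract extends can be shown in miniature. The sketch below implements plain Dempster's rule over a two-hypothesis frame; the evidence sources and mass values are invented for illustration, and the paper's importance-factor extension is not included.

```python
# Plain Dempster's rule of combination over the frame {Attack, Normal}.
# Evidence masses are illustrative; this omits the paper's importance
# factors and shows only the base D-S combination step.
from itertools import product

THETA = frozenset({"A", "N"})   # frame of discernment: Attack / Normal

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: m(C) is proportional to sum over A∩B=C of m1(A)*m2(B)."""
    out, conflict = {}, 0.0
    for (a, v1), (b, v2) in product(m1.items(), m2.items()):
        c = a & b
        if not c:
            conflict += v1 * v2        # mass falling on the empty set
        else:
            out[c] = out.get(c, 0.0) + v1 * v2
    norm = 1.0 - conflict              # renormalize away the conflict
    return {s: v / norm for s, v in out.items()}

# two hypothetical pieces of evidence, each leaving some mass on THETA
m_ids   = {frozenset({"A"}): 0.7, THETA: 0.3}
m_route = {frozenset({"A"}): 0.6, frozenset({"N"}): 0.2, THETA: 0.2}
belief = combine(m_ids, m_route)       # combined mass strongly favors Attack
```

The combined mass on {Attack} exceeds either input alone, which is the behavior a risk-aware response mechanism exploits when fusing alerts from multiple detectors.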
Full Paper

IJCST/43/3/C-1692
    101 Trajectory-Based Sybil Attack Detection in Urban Vehicular Networks

Yallala Masthan Reddy, D.Madhu Babu

Abstract

In urban vehicular networks, where privacy, especially the location privacy of anonymous vehicles, is a major concern, anonymous verification of vehicles is indispensable. Consequently, an intruder who succeeds in forging multiple hostile identities can easily launch a Sybil attack, gaining a disproportionately large influence. In this paper, we propose a novel Sybil attack detection mechanism, Footprint, which uses the trajectories of vehicles for identification while still preserving their location privacy. More specifically, when a vehicle approaches a Road-Side Unit (RSU), it actively demands an authorized message from the RSU as proof of its appearance time at that RSU. We design a location-hidden authorized message generation scheme with two objectives: first, RSU signatures on messages are signer-ambiguous, so that the RSU location information is concealed from the resulting authorized message; second, two authorized messages signed by the same RSU within the same given period of time (temporarily linkable) are recognizable, so that they can be used for identification. With this temporal limitation on the linkability of two authorized messages, using authorized messages for long-term identification is prohibited. Under this scheme, vehicles can generate a location-hidden trajectory for location-privacy-preserving identification by collecting a consecutive series of authorized messages. Exploiting the social relationship among trajectories according to a similarity definition on two trajectories, Footprint can recognize and therefore dismiss "communities" of Sybil trajectories. Rigorous security analysis and extensive trace-driven simulations demonstrate the efficacy of Footprint.
Full Paper

IJCST/43/3/C-1693
    102 Minimizing Communication and Computation Cost and Improving Security in the Cloud

A.Krishna Mohan, Azatullah Dawoodzai

Abstract

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service-provider interaction, and it provides large-scale data services to users in an easy and convenient way over the Internet. A major feature of cloud services is that users' data are processed remotely on unknown machines that users neither control nor own. The benefit of cloud storage is that data can be stored without the burden of local hardware and software management. Despite these benefits, it inevitably poses new security risks to the correctness of data in the cloud, such as missing or corrupted data. To resolve this problem, and to ensure data integrity and availability, a flexible distributed storage integrity auditing mechanism utilizing homomorphic tokens, privacy-preserving third-party auditing, and distributed erasure-coded data is proposed. The system analyzes communication and computation cost, and allows users to audit cloud storage with very lightweight communication and computation cost. Enabling auditability for cloud storage is thus of critical importance, so that users can resort to a Third-Party Auditor (TPA) to check the integrity of outsourced data and be worry-free. The auditing result not only ensures a strong cloud-storage correctness guarantee but also achieves fast data-error localization. The proposed design further supports secure and efficient dynamic operations on outsourced data, and our experiments show an efficient security analysis that protects the cloud server against various malicious attacks.
Full Paper

IJCST/43/3/C-1694
    103 Infrastructure Maintenance With Supportive Communications Using MANETs

Sameer Shaik, P.Shakeel Ahamed

Abstract

Supportive communication has received tremendous interest for wireless networks. However, most existing works on supportive communications focus on link-level physical-layer issues. Consequently, the impacts of supportive communications on network-level upper-layer issues, such as infrastructure control, routing, and network capacity, are largely ignored. We propose a capacity-effective supportive (CES) infrastructure control scheme to improve the network capacity in MANETs by jointly considering both upper-layer network capacity and physical-layer supportive communications. Through simulations, we show that physical-layer supportive communications have considerable impact on network capacity, and that the proposed infrastructure control scheme can substantially improve the network capacity of MANETs with supportive communications.
Full Paper

IJCST/43/3/C-1695
    104 Similarity Measure Space using Spectral Clustering Method

M.Deepti, N.Tulasi Radha

Abstract

Clustering in a similarity-measure space is an active research topic. In this paper a new spectral clustering method (CPI) is introduced. Documents are projected into a low-dimensional semantic space in which the correlations between documents within local patches are maximized while those between documents outside the patches are minimized. Correlation as a similarity measure is more suitable than Euclidean distance for detecting the intrinsic geometric structure embedded in the document space, so CPI can discover the intrinsic structure more easily. We demonstrate through rigorous experiments that the CPI method outperforms existing ones.
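The case for correlation over Euclidean distance can be made concrete with toy term-count vectors. The documents below are invented for illustration: two vectors on the same topic but of different lengths look identical under correlation yet far apart under Euclidean distance.

```python
# Correlation (cosine of mean-centered vectors) versus Euclidean distance
# on toy "documents"; the term-count vectors are invented for illustration.
import math

def correlation(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    xc = [v - mx for v in x]
    yc = [v - my for v in y]
    num = sum(a * b for a, b in zip(xc, yc))
    den = math.sqrt(sum(a * a for a in xc)) * math.sqrt(sum(b * b for b in yc))
    return num / den

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

doc_a = [2, 0, 4, 1]            # term counts for one topic
doc_b = [6, 0, 12, 3]           # same topic, three times as long
doc_c = [2, 3, 0, 2]            # different topic, similar length

# correlation sees a and b as pointing in exactly the same direction,
# while Euclidean distance wrongly places a closer to c than to b
```

This is the intrinsic geometric structure the abstract refers to: direction (topic) matters, not magnitude (document length).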
Full Paper

IJCST/43/3/C-1696
    105 A Novel Approach for Detecting Intruders in Wireless Network

K.Sreelakshmi, M.Rajasekhar

Abstract

Wireless Sensor Networks (WSNs) offer an excellent opportunity to monitor environments, and have a lot of interesting applications in warfare. The problem is that security mechanisms used for wired networks do not transfer directly to sensor networks. Some of this is due to the fact that there is not a person controlling each of the nodes, and even more importantly, energy is a scarce resource. Batteries have a short lifetime and cannot be replaced on deployed sensor nodes. In this paper we will show the major threats that WSNs have to deal with. Additionally we will mention existing countermeasures, but we will focus on intrusion detection. We combine existing IDS approaches and show the steps to build an IDS for WSNs.
Full Paper

IJCST/43/3/C-1697
    106 A Novel Scheme for Reliable Multipath Routing Through Node-Independent Directed Acyclic Graphs

S.Sheela Tarangini, S.K.A.Manoj, P.Suresh Babu

Abstract

To achieve resilient multipath routing, we introduce the concept of Independent Directed Acyclic Graphs (IDAGs). We develop polynomial-time algorithms that compute link-independent and node-independent DAGs. The approach provides multipath routing that utilizes all possible edges, recovers from link failures, and achieves good data-transmission speed, overcoming the overhead incurred when routing is based only on destination address and incoming edge. The IDAG approach performs well, as shown by comparing key performance metrics through extensive simulations.
Full Paper

IJCST/43/3/C-1698
    107 A Secure, Efficient and a Fully Non Interactive Admission Technique in MANET

M.Anusha, M.RajaSekhar

Abstract

A Mobile Ad-Hoc Network (MANET) is characterized by the lack of any infrastructure, the absence of any kind of centralized administration, frequent mobility of nodes, network partitioning, and wireless connections. These properties make traditional wireless security solutions not straightforwardly applicable in MANETs. Admission control is an essential and fundamental security service in MANETs. Most previously proposed admission control protocols are prohibitively expensive and require a lot of interaction among MANET nodes in order to securely reach limited consensus regarding admission and cope with potentially powerful adversaries. While the expense may be justified for long-lived group settings, short-lived MANETs can benefit from much less expensive techniques without sacrificing any security. In this paper, we consider short-lived MANETs and present a secure, efficient, and fully non-interactive admission control protocol for such networks. More specifically, our work focuses on novel applications of non-interactive secret sharing techniques based on bi-variate polynomials; unlike other results, the associated costs are very low.
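The bi-variate polynomial primitive behind such non-interactive schemes can be sketched briefly. This is a Blundo-style illustration with toy field size and coefficients, not the paper's protocol: a dealer hands each node a univariate share of a symmetric bivariate polynomial, after which any two nodes derive the same pairwise key without exchanging a single message.

```python
# Blundo-style non-interactive pairwise keying (illustrative toy values).
# A symmetric coefficient matrix C (C[i][j] == C[j][i]) defines
# f(x, y) = sum_{i,j} C[i][j] * x^i * y^j  (mod P), so f(a, b) == f(b, a).
P = 2**31 - 1          # prime field modulus (toy choice)

C = [[5, 7, 2],
     [7, 3, 9],
     [2, 9, 4]]

def f(x: int, y: int) -> int:
    return sum(C[i][j] * pow(x, i, P) * pow(y, j, P)
               for i in range(3) for j in range(3)) % P

def share(node_id: int):
    """Dealer gives a node its univariate share g(y) = f(node_id, y)."""
    return lambda y: f(node_id, y)

g_alice, g_bob = share(17), share(42)   # hypothetical node ids
# each side evaluates its own share at the peer's id: no interaction needed
assert g_alice(42) == g_bob(17)         # identical pairwise key
```

The symmetry of C is what makes the two independently computed keys agree, which is exactly the non-interactive property the abstract highlights.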
Full Paper

IJCST/43/3/C-1699
    108 Effective Resolution of Firewall Policies

B.G.Ratna Kumari, N.Tulasi Radha, P. Suresh Babu

Abstract

Firewalls are a well-known and mature security mechanism that ensures the security of private networks in businesses, organizations, and institutions. The reliability of the security provided by a firewall depends largely on the quality of its policy configuration. Due to the complexity of firewall configurations, as well as the lack of systematic analysis mechanisms and tools, managing anomalies in firewall policies is a very difficult task, and detecting and resolving such anomalies has become a challenging job for researchers. In this paper we introduce a novel approach to detect and resolve anomalies in firewall policies. We adopt a rule-based segmentation technique to identify policy anomalies and derive effective and secure resolutions, and we introduce a grid-based representation technique that gives an intuitive cognitive sense of policy anomalies. We also discuss the implementation of a visualization-based firewall policy analysis tool called Firewall Anomaly Management Environment (FAME). In addition, we present experimental results showing how efficiently our approach can discover and resolve anomalies in firewall policies.
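One of the simplest anomalies such analysis detects, a fully shadowed rule, can be sketched directly. The rule format, fields, and detection routine below are illustrative assumptions, not FAME's actual representation, which works on segmented match spaces rather than whole rules.

```python
# Toy shadowing check in the spirit of rule-based policy analysis:
# flag any rule whose entire match space is covered by an earlier rule
# with a conflicting action. Rule fields are simplified to int ranges.
from dataclasses import dataclass

@dataclass
class Rule:
    src: range          # source address range (simplified to ints)
    dst: range          # destination address range
    action: str         # "allow" or "deny"

def subset(a: range, b: range) -> bool:
    return a.start >= b.start and a.stop <= b.stop

def shadowed(rules: list) -> list:
    """Indices of rules completely covered by an earlier, conflicting rule."""
    out = []
    for i, r in enumerate(rules):
        for earlier in rules[:i]:
            if (subset(r.src, earlier.src) and subset(r.dst, earlier.dst)
                    and earlier.action != r.action):
                out.append(i)
                break
    return out

policy = [
    Rule(range(0, 100), range(0, 100), "deny"),
    Rule(range(10, 20), range(30, 40), "allow"),   # never matches: shadowed
    Rule(range(200, 300), range(0, 100), "allow"),
]
```

A shadowed rule is dead configuration: it either reflects an administrator's unfulfilled intent or clutter that hides the policy's real behavior, which is why detecting it matters.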
Full Paper

IJCST/43/3/C-1700
    109 A Study of Collective Behavior in Heterogeneous Social Networks

R.Ramya, Panuganti.Ravi

Abstract

The collective behavior of individuals in a social network offers researchers a rich avenue of study and an effective way to gather information about individual behavior. Social networks generate huge amounts of data, providing an opportunity to study collective behavior. In this work, our aim is to predict collective behavior in social media: despite the heterogeneity of networks and geographical areas, we can accurately infer the behavior of an unobserved individual. To address the scalability issue, we propose an edge-centric clustering scheme to extract sparse social dimensions. This approach can efficiently handle networks of different sizes and gives comparatively good results.
Full Paper

IJCST/43/3/C-1701
    110 Scalable and Reliable Range Aggregates in Uncertain Position Based Queries

M.V.Ramana, G.Sunnydeol

Abstract

Uncertain data are inherent in many applications, such as environmental surveillance and quantitative economics research. Recently, considerable research effort has been put into analyzing uncertain data. In this paper, we formally define the problem of uncertain-location-based range aggregates in a multidimensional space, which covers a wide spectrum of applications. To process such queries efficiently, we propose a general filtering-and-verification framework and two filtering techniques, named STF and PCR, such that the expensive computation cost of verification can be significantly reduced. As demonstrated in the experiments, the STF filtering technique achieves a decent filtering capacity based on a few pre-computed statistics about the uncertain-location-based query; in addition, it is very fast and space-efficient due to its simplicity. The PCR technique is quite efficient when more space is available.
Full Paper

IJCST/43/3/C-1702
    111 Effectively Preventing Internal Threat Attacks in Wireless Sensor Networks

Ch.Padmavathi, VKesavkumar

Abstract

Understanding and defending against jamming attacks has long been a problem of interest in wireless communication and radar systems. In wireless ad hoc and sensor networks using multi-hop communication, the effects of jamming at the physical layer resonate into the higher-layer protocols, for example by increasing collisions and contention at the MAC layer, interfering with route discovery at the network layer, increasing latency and impacting rate control at the transport layer, and halting or freezing at the application layer. Adversaries that are aware of higher-layer functionality can leverage any available information to improve the impact or reduce the resource requirement for attack success. For example, jammers can synchronise their attacks with MAC protocol steps, focus attacks on specific geographic locations, or target packets from specific applications. In this work, we address the problem of selective jamming attacks in wireless networks. In these attacks, the adversary is active only for a short period of time, selectively targeting messages of high importance. We illustrate the advantages of selective jamming in terms of network performance degradation and adversary effort by presenting two case studies: a selective attack on TCP and one on routing. We show that selective jamming attacks can be launched by performing real-time packet classification at the physical layer. To mitigate these attacks, we develop three schemes that prevent real-time packet classification by combining cryptographic primitives with physical-layer attributes. We analyse the security of our methods by adding public-key encryption algorithms (e.g., RSA) and evaluate their computational and communication overhead.
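The hiding idea behind such countermeasures can be sketched in miniature: transmit the packet in a form that is unclassifiable on the air and release the decryption key only after transmission completes, so a reactive jammer cannot decide in real time whether the packet is worth jamming. Every primitive below is an illustrative stand-in (a hash-based keystream, not a secure cipher), not the schemes described in the paper.

```python
# Toy packet-hiding sketch (illustrative stand-ins, not the paper's
# schemes): the packet goes on the air encrypted under a fresh key, and
# the key is released only after transmission, defeating real-time
# classification by a selective jammer.
import hashlib, secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    # hash-based keystream for illustration only; NOT a secure cipher
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

def send(packet: bytes):
    key = secrets.token_bytes(16)
    hidden = xor_stream(packet, key)   # what the jammer sees first
    return hidden, key                 # key is broadcast afterwards

def receive(hidden: bytes, key: bytes) -> bytes:
    return xor_stream(hidden, key)

pkt = b"ROUTE-UPDATE:node7"
hidden, key = send(pkt)
assert hidden != pkt                   # on-air bytes reveal no packet type
assert receive(hidden, key) == pkt     # recovered after the key release
```

The jammer must commit its limited transmit budget before it can classify the packet, which removes the "selective" advantage the case studies demonstrate.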
Full Paper

IJCST/43/3/C-1703