INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST)-VOL IV ISSUE I, VER. 4, JAN. TO MARCH, 2013


International Journal of Computer Science and Technology Vol. 4 Issue 1, Ver. 4
S.No. Research Topic Paper ID
111 Automatic Live Attack Detection Using Network Intrusion Detection System (NIDS) Based on Signature Search of Data Mining

Neha Upadhyay, Gopal Solanki

Abstract

With the rise of internet technology, network security has become one of the most important issues demanding attention today. There is increasing public demand for systems that can guard against the different attacks attempted by hackers. One security system which falls into this category is the Intrusion Detection System (IDS). Our motivation in this research work is to focus on misuse detection; in such techniques, attack signatures are generally collected and stored in a database. We therefore put forward an algorithm that utilizes the known signatures to detect related attacks quickly. Data mining techniques, including association rule mining, are improving day by day. To apply these techniques, datasets have to be collected from different sources and shared between different parties without violating the security norms of the datasets.
Full Paper

IJCST/41/4/A-1389
112 IGA Based Image Retrieval

Dr. P.K. Deshmukh, S. B. Javheri, R.A.Davalgaonkar

Abstract

In recent years, large numbers of images need to be stored and retrieved in emerging fields such as multimedia databases and digital image libraries. Many approaches have been proposed to perform retrieval according to the user’s evaluation. This paper describes a CBIR (Content Based Image Retrieval) approach using an IGA (Interactive Genetic Algorithm) to infer which images in the database would be of most interest to the user. It uses visual or low-level features, such as the color coherence vector, color histogram, color moments, and Gabor-filter texture features of an image, in addition to high-level semantics such as object ontology, to extract more refined results from larger databases. It combines the user’s information need with salient image characteristics to obtain superior retrieval results over the evolutionary generations of the Interactive Genetic Algorithm.
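To make the low-level feature step concrete, the sketch below computes a normalised RGB colour histogram and ranks database images by histogram distance. This is a minimal illustration only, assuming images are given as H x W x 3 uint8 arrays; the interactive GA fitness loop, Gabor texture features, and object ontology of the paper are not reproduced.

import numpy as np

def colour_histogram(img, bins=8):
    # img: H x W x 3 uint8 array; returns a normalised joint RGB histogram
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def rank_by_similarity(query_img, database_imgs):
    # smaller Euclidean distance between histograms = more similar image
    q = colour_histogram(query_img)
    dists = [np.linalg.norm(q - colour_histogram(db)) for db in database_imgs]
    return np.argsort(dists)   # database indices, most similar first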
Full Paper

IJCST/41/4/A-1390
113 Mining Punjabi Text Using Clustering and Classification Techniques

Salloni Singla, Shruti Aggarwal

Abstract

Text mining is a field that extracts useful, previously undiscovered information from text documents according to the user’s needs. Text classification is one of the text mining tasks used to manage information efficiently by classifying documents into classes using classification and clustering algorithms. Each text document is characterized by a set of features used in the text classification method, and these features should be relevant to the task. This paper introduces preprocessing techniques and feature selection methods for classifying Punjabi text documents using clustering and classification algorithms.
Full Paper

IJCST/41/4/A-1391
114 Result Analysis of AODV and DSR with Different Node Mobility Using Wormhole Attack in Wireless Sensor Network

Varsha Sahni

Abstract

A Wireless Sensor Network (WSN) is a network consisting of devices that use sensors to cooperatively monitor physical or environmental conditions, such as temperature, vibration, pressure, motion, or pollutants, at different locations. WSNs are highly vulnerable to attacks owing to their low battery power, limited memory, and low energy. Sensor nodes communicate among themselves via wireless links. In wireless sensor networks, security is one of the hottest research issues. In this paper we evaluate the effects of the wormhole attack in a wireless sensor network on the performance of the AODV and DSR protocols with varying node mobility. WSN protocols have various security flaws, and these flaws make many kinds of attacks possible on a wireless sensor network. The wormhole attack is one of them: the attacker places malicious nodes in the network. A malicious node captures data packets from one location in the network and tunnels them to another malicious node at a distant location, which replays them locally. These tunnels act like shorter links in the network and thus appear attractive to unsuspecting nodes, which by default seek shorter routes. This paper illustrates how the wormhole attack affects the performance of routing protocols in a wireless sensor network using the random waypoint mobility model with varying node mobility.
Full Paper

IJCST/41/4/A-1392
115 Entrenched Visual Cryptography using Secret Sharing Scheme

Kode Phani Kumar, B.Veera Mallu

Abstract

A Visual Cryptography Scheme (VCS) is a cryptographic technique which allows visual information (e.g., printed text, handwritten notes, and pictures) to be encrypted in such a way that decryption can be performed by the human visual system, without the assistance of computers. VCS is a kind of secret sharing scheme that focuses on sharing secret images. The idea is to convert the written material into ciphertext, place that ciphertext into an image, and encode this image into n shadow images. Decoding requires only selecting some subset of these n images, making transparencies of them, and stacking them on top of each other.
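As a concrete illustration, the sketch below generates the two shares of a basic (2, 2) visual cryptography scheme, in which every secret pixel is expanded into two subpixels per share and stacking the transparencies acts as a pixel-wise OR. This is a minimal textbook construction under those assumptions, not the specific entrenched scheme proposed in the paper.

import numpy as np

def make_shares(secret):
    # secret: binary image, 0 = white, 1 = black
    h, w = secret.shape
    share1 = np.zeros((h, 2 * w), dtype=np.uint8)
    share2 = np.zeros((h, 2 * w), dtype=np.uint8)
    patterns = [np.array([1, 0]), np.array([0, 1])]   # the two subpixel patterns
    for i in range(h):
        for j in range(w):
            p = patterns[np.random.randint(2)]
            share1[i, 2 * j:2 * j + 2] = p
            # white pixel: identical patterns; black pixel: complementary patterns
            share2[i, 2 * j:2 * j + 2] = p if secret[i, j] == 0 else 1 - p
    return share1, share2

secret = (np.random.rand(8, 8) > 0.5).astype(np.uint8)
s1, s2 = make_shares(secret)
stacked = np.maximum(s1, s2)   # stacking transparencies = pixel-wise OR
# black secret pixels reconstruct as two black subpixels, white as one black and one white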
Full Paper

IJCST/41/4/A-1393
116 Enhancing Data Reliability Using Secure Digital Signing

Y. Sushma Sree, G.Srinivarao, Y. Laxmi Prasanna

Abstract
Digital signatures are an important mechanism for ensuring data trustworthiness via source authenticity, integrity, and source non-repudiation. However, their trustworthiness guarantee can be subverted in the real world by sophisticated attacks, which can obtain cryptographically legitimate digital signatures without actually compromising the private signing key. In this paper, we propose a novel solution, dubbed Assured Digital Signing (ADS), to enhance the data trustworthiness vouched for by digital signatures. In order to minimize modifications to the Trusted Computing Base (TCB), ADS simultaneously takes advantage of trusted computing and virtualization technologies. Specifically, ADS allows a signature verifier to examine not only a signature’s cryptographic validity but also its system security validity, namely that the private signing key and the signing function are secure even under the powerful attack in which the signing application program and the general-purpose Operating System (OS) kernel are malicious. This problem cannot be adequately addressed by a purely cryptographic approach, by the revocation mechanism of Public Key Infrastructure (PKI), because it may take a long time to detect the compromise, or by tamper-resistant hardware, because the attacker does not need to compromise the hardware. The problem will become increasingly important and evident because of stealthy malware (or Advanced Persistent Threats). The modular design of ADS makes it application-transparent (i.e., there is no need to modify the application source code in order to deploy it) and almost hypervisor-independent (i.e., it can be implemented with any Type I hypervisor). To demonstrate the feasibility of ADS, we report the implementation and analysis of a Xen-based ADS system.
Full Paper

IJCST/41/4/A-1394
117 Guarding Distributed Accountability for Data Fragmenting in the Cloud

Nazeema Begum Mohammed, Shaik Nagul

Abstract
Cloud computing has great potential to dramatically change the landscape of the current IT industry. In a cloud environment, client data are generally processed remotely on unfamiliar machines that users do not own or control. Data management can be handled entirely by the cloud service providers. Cloud computing provides on-demand services. Many users want to do business with their data using the cloud, but they fear losing that data. When data owners store their data in the cloud, they must receive assurance that the data is safe there. Remote storage of users’ data in the cloud opens up new challenges, such as lack of control over data and security. In this paper, we provide an effective mechanism to track the usage of data using accountability. Accountability is used to keep track of the genuine usage of the user’s information in the cloud. The proposed system combines access control, usage control, and authentication policies. The data are sent to cloud service providers along with access control policies and logging policies enclosed in JAR files. The proposed system provides automatic logging mechanisms using JAR programming, which improves the security and privacy of data in the cloud. By means of accountability, data owners can track not only whether or not the service-level agreements are being honored, but can also enforce access and usage control rules as needed. Security is controlled by the security policy in force at runtime; the cloud provider configures the policy to grant security privileges to JAR clients.
Full Paper

IJCST/41/4/A-1395
118 Hierarchical Viewpoint Based Clustering

S. Susmitha, A. Isabella

Abstract
Clustering is an important technique in data mining. The main goal of clustering is to find the similarity between data points and to group the data into groups or subgroups. This paper investigates the k-means algorithm for implementing document clustering with a multi-viewpoint-based similarity measure. Similarity between a pair of data points can be defined either explicitly or implicitly to find the optimal solution for the clustering process. To resolve this problem, the proposed system develops a novel hierarchical algorithm for document clustering that yields superior efficiency and performance and mainly focuses on making use of the cluster overlapping phenomenon to design cluster merging criteria. Hierarchical agglomerative clustering starts with each point as an individual cluster and, at every step, merges the most similar or closest pair of clusters. This requires a definition of cluster similarity or distance. The hierarchical approach is like building a tree-based hierarchical taxonomy from a set of documents.
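The sketch below shows the agglomerative step on TF-IDF document vectors with a cosine distance and average-link merging, which is the generic form of the procedure described above; the paper’s multi-viewpoint similarity measure and overlap-based merging criteria are not reproduced, and the sample documents are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

docs = ["stock market rises", "market trades higher", "rain expected today"]
X = TfidfVectorizer().fit_transform(docs).toarray()

dist = pdist(X, metric="cosine")          # pairwise cosine distances
tree = linkage(dist, method="average")    # agglomerative merging (average link)
labels = fcluster(tree, t=2, criterion="maxclust")   # cut the tree into 2 clusters
print(labels)                             # e.g. the two market documents share a cluster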
Full Paper

IJCST/41/4/A-1396
119 MIPS Processor With Reduced Dynamic Power

Harpreet Kaur, Jaspal Singh

Abstract
A five-stage pipelined MIPS processor architecture with reduced dynamic power and improved clock cycles per instruction and million instructions per second is proposed in this paper. To eliminate the hazards introduced in pipelined processors, NOP instructions are added. NOP instructions do not contribute any useful work, so the power consumed during a NOP instruction is wasted. In the proposed architecture, a dual-write-port register file is used to support a dual write-back operation, which reduces the number of NOP instructions in the pipeline and thereby further reduces the dynamic power. The processor architecture is described in Verilog and synthesized using Xilinx ISE 14.1.
Full Paper

IJCST/41/4/A-1397
120 A Multifactor Security Protocol for Wireless Payment Secure Web Authentication using Biometric Characteristics

Pawandeep Singh, Harneet Arora

Abstract
This authentication technique provides a strong approach for secure web transactions. It uses a biometric property of the user for authentication and SMS (Short Message Service) to enforce an extra security level on top of the traditional login/password system. When a user wants to perform a transaction, the user provides fingerprint information. The technique uses an encryption/decryption method, a fairly involved algorithm, that keeps the biometric properties as a secret code. The user captures the biometric data on their mobile device with the help of a fingerprint scanner; the pre-installed application then creates an image of the fingerprint and encrypts it with the help of public key cryptography. This is not a one-time password technique; it can be used as many times as the user wants. The resulting code is used to initiate secure web transactions using cell phones. Finally, we extend the system to two-way authentication, which authenticates both parties (user and e-service provider).
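As an illustration of the public-key step only, the sketch below encrypts a fixed-size digest of the captured fingerprint data with RSA-OAEP using the Python cryptography package. The choice of library, of hashing the fingerprint into a small secret code before encryption, and of the key parameters are all assumptions made for the sketch; the paper’s actual capture, SMS, and key-management steps are not shown.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

fingerprint_bytes = b"...captured fingerprint image bytes..."   # hypothetical input
h = hashes.Hash(hashes.SHA256())
h.update(fingerprint_bytes)
digest = h.finalize()                     # small, fixed-size secret code

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(digest, oaep)    # sent during the transaction
assert private_key.decrypt(ciphertext, oaep) == digest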

Full Paper

IJCST/41/4/A-1398
121 Optimization of Image Compression for Scanned Document’s Storage and Transfer

Madhu Ronda S

Abstract
Today, highly efficient file storage rests on quality-controlled scanned-image technology. Equally necessary, faster file transfers depend on lossier but higher-ratio compression technology. A turnaround is needed in reconsidering the parameters of data storage and transfer with regard to both the quality and the quantity of the image data used. This situation is driven by improvised and patented advanced image compression algorithms that have the potential to regulate the commercial market. At the same time, the free and rapid distribution of widely available image data on the internet has pushed much of the operating software onto open-source platforms. In summary, applying compression to image layers justifies optimizing the storage and transfer of DjVu-formatted documents.

Full Paper

IJCST/41/4/A-1399
122 A Study on Constructing Synonymous Gene Database from Biomedical Text Documents

B. Vinay Kumar, S. Jayaprada, Dr. S. Vasavi, Dr. P. Bala Krishna

Abstract
Authors frequently use dissimilar names to refer to the same gene or protein across articles. Identifying the alternate names for the same gene/protein would help biologists find and use relevant literature. Biomedical databases are usually constructed and maintained by domain experts but require considerable manual involvement. Many biological databases, such as SWISS-PROT, GenBank, GOLD, UniGene, and Karyn’s Genome, include synonyms, but these databases may not always be up-to-date. It is therefore necessary to automate this process because of the increasing number of discovered genes and proteins. The rapid increase of machine-readable biomedical text documents has led to the growth of semi-automated or automated information extraction techniques used to extract meaningful information, such as synonymous gene or protein names. This paper studies existing methods for identifying these name variations and proposes a new method that treats the gene synonym identification problem as an information extraction problem.
Full Paper

IJCST/41/4/A-1400
123 Business Artificial Intelligence Techniques for Enterprise Systems Forecasts: A Survey

Shrimant B. Bandgar

Abstract

Business Artificial Intelligence (BAI) is the process of transforming raw data into useful information for more effective strategic and operational insights and decision-making with an artificial intelligence agent, so that it yields real business benefits. This emerging technique can not only improve applications in enterprise systems and industrial informatics, respectively, but also play a very important role in bridging the connection between enterprise systems and industrial informatics. This paper is intended as a short introduction to BAI, with emphasis on fundamental techniques, fundamental algorithms, and recent progress. In addition, we point out the challenges and opportunities in smoothly connecting industrial informatics to enterprise systems for BAI research.

Full Paper

IJCST/41/4/A-1401
124 Providing an Efficient Multi Tier Security in Cloud Computing

Dr. T.Swarnalatha, A.S.Gousia Banu

Abstract

The use of cloud computing has increased rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring the security of cloud computing is a major factor in the cloud computing environment, as users often store sensitive information with cloud storage providers, but these providers may be untrusted. Dealing with “single cloud” providers is predicted to become less popular with customers due to the risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards “multi-clouds”, in other words “inter-clouds” or “cloud-of-clouds”, has emerged recently.
Full Paper

IJCST/41/4/A-1402
125 Robust Controller Design by Multi-Objective Optimization with H2/H∞/μ Combination
Javad Mashayekhi Fard, Mohammad Ali Nekoui, Ali Khaki Sedigh, Roya Amjadifard

Abstract

In a physical system, several objectives are normally considered, and nominal and robust performance each have their own strengths and weaknesses. In the nominal performance case, system operation without uncertainty is decisive, whereas robust performance considers operation under uncertainty. Each objective may impose severe limitations on the rank of the controller matrix and even on its response. The purpose of this paper is to present a balanced approach between the nominal and robust performance of state feedback. The new approach combines two controllers, μ and H2/H∞. First, an H2/H∞ controller is designed for the nominal performance target, robust stability, and noise reduction; a μ controller then guarantees the robust performance. Combining these two controllers with an appropriate weighting is the final step of the design. Simulation and comparison studies are used to show the effectiveness and benefits of this method.
Full Paper

IJCST/41/4/A-1403
126 Automated Classification of Schizophrenia with Neural Networks

Gore Ranjana Waman

Abstract

Schizophrenia is a complex mental disorder, so identification of schizophrenia is very important in quantitative biological research. In this paper, I propose a method for classifying schizophrenia patients and healthy controls using a neural network and ICA. A reliable technique for discriminating schizophrenia based on functional Magnetic Resonance Imaging (fMRI) would be a significant advance. fMRI technology enables medical doctors to observe brain activity patterns that represent the execution of subject tasks, both physical and mental. The scans were acquired on a 1.5T Siemens scanner. The data were preprocessed using SPM, and ICA was then applied to the fMRI data, which proved fruitful in grouping the data into meaningful, spatially independent components. The work discussed in this paper specifically focuses on fMRI data collected from both healthy controls and patients diagnosed with schizophrenia. The neural networks are trained using the back-propagation algorithm, in which error signals are propagated backward through the network, and the weights of a three-layer neural network are updated. The output of the neural network is ‘yes’ or ‘no’, i.e., whether the patient is schizophrenic or not. In this way I classify schizophrenic patients and healthy controls, and obtained better results.
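The sketch below is a minimal three-layer feed-forward network trained by back-propagation on toy feature vectors (standing in for ICA-derived features); the data, layer sizes, and learning rate are illustrative assumptions, and none of the SPM/ICA preprocessing from the paper is reproduced.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))                               # 40 subjects x 10 toy features
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)    # toy labels: 1 = patient

W1 = rng.normal(scale=0.5, size=(10, 6)); b1 = np.zeros(6)  # input -> hidden
W2 = rng.normal(scale=0.5, size=(6, 1));  b2 = np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)               # output layer ("yes"/"no" probability)
    d_out = (out - y) * out * (1 - out)      # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated backward
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5   # True = classified as patient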
Full Paper

IJCST/41/4/A-1404
127 A Proficient Method for Template Removal

G.Naveen Sundar, D.Narmadha

Abstract

Today’s world is fully dependent on data and information, and the major source of information is the World Wide Web. Users expect maximally accurate results from search engines. The majority of web pages contain more unnecessary information than actual content. As a result, users are diverted from their real intention of getting the most relevant information and are forced to wade through unnecessary content. The unnecessary information present in web pages is termed a template. Templates are used for building web pages and give the pages a good look, but their presence leads to poor search engine performance due to the retrieval of non-content for users. Therefore, search engine performance can be improved by making web pages free of templates. This is achieved through the removal of templates from web pages, and a number of different approaches are available for it. The paper focuses on detecting and extracting templates from heterogeneous web pages by means of an algorithm. A type of hierarchical clustering is used to cluster similar web documents together based on a measure of similarity. An enhancement of the method is also proposed so as to improve performance.
Full Paper

IJCST/41/4/A-1405
128 Paper has been removed due to Copyright Issue
129 Efficiency of Double Guard for Detecting Intrusions in Multi-tier Web Applications

Dr. T.Swarnalatha, A.S.Gousia Banu

Abstract

In today’s world, Internet services and applications have become an inextricable part of daily life, enabling communication, the management of personal information, and the gathering of other useful information from anywhere. To cope with this increase in application and data complexity, web services have moved to a multi-tiered design where the front-end logic runs on the web server and data is stored on a database server or file server. We present a model called “Double Guard”, an IDS that models the network behaviour of user sessions across both the front-end web server and the back-end database. By monitoring both web and subsequent database requests, we are able to ferret out attacks that an independent IDS would not be able to identify. Furthermore, we quantify the limitations of any multi-tier IDS in terms of training sessions and functionality coverage.
Full Paper

IJCST/41/4/A-1407
130 Using Direct Seek First Algorithm With Ability and Optimal Rule Allocation for Fading Relay Network

P.J.Arun Kumar

Abstract

Wireless telecommunications is the transfer of information between two or more points that are not physically connected. Distances can be short, such as a few meters for a television remote control, or as far as thousands or even millions of kilometres for deep-space radio communications. In the existing system, we developed and implemented a compromised-router Detection Protocol (DP) that dynamically infers the number of congestive packet losses (CPL) that will occur. Each packet is encrypted to protect the data from eavesdropping, so the data is well secured. We derive the ability and optimal rule allocation scheme for a multi-user fading relay channel in which minimum rates must be maintained for each user in all fading states, assuming perfect channel state information at the transmitter and at all receivers. We first allocate the minimum rule required to achieve the minimum rates in all fading states, and we then optimally allocate the excess rule to maximize rates averaged over all fading states in excess of the minimum rate requirements. The optimal allocation of the excess rule is a multi-level water-filling relative to an effective noise that incorporates the minimum rate constraints.
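For orientation, the classical single-user water-filling allocation that the multi-level scheme generalises can be written as below. This is a textbook form sketched under the abstract’s assumptions (perfect channel state information, per-state effective noise), not an equation taken from the paper; the allocated quantity P(h) stands for the per-fading-state allocation that the abstract calls the excess rule.

P(h) = \left( \frac{1}{\lambda} - \frac{N_{\mathrm{eff}}(h)}{|h|^{2}} \right)^{+}, \qquad (x)^{+} = \max(x, 0),

where h is the fading state, N_{\mathrm{eff}}(h) is the effective noise in that state, and \lambda is chosen so that the average allocation \mathbb{E}_{h}[P(h)] equals the available excess budget. In the multi-level version described above, the minimum-rate constraints are folded into N_{\mathrm{eff}}(h) before the excess is water-filled.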
Full Paper

IJCST/41/4/A-1408
131 An Effective Advance Towards the Intrusion Detection of Generative Data Stream

SK. Akbar, M.A.Baseer, Pathangi Srinivas, V. Kishore

Abstract

Intrusion detection is an emerging technology for identifying unauthorized users and for clustering the different alerts produced by low-level intrusion detection systems and firewalls. Meta-alerts can be generated for each cluster, which corresponds to a specific attack instance initiated by an attacker at a certain point in time, and the amount of data (i.e., alerts) can be reduced considerably, as the clusters contain all the relevant information. Within a distributed intrusion detection system, the meta-alerts may then form the basis for reporting to security experts or for communication. Based on a dynamic, probabilistic model of the current attack situation, a novel technique for online alert aggregation is proposed in this paper. For the estimation of the model parameters, it can essentially be regarded as a data stream version of a maximum likelihood approach. In addition, meta-alerts are generated with a delay of typically only a few seconds after the first alert belonging to a new attack instance is observed.
Full Paper

IJCST/41/4/A-1409
132 Weighted Support Based Mining Association Rules-a Computational Approach Without Pre-Assigned Weights

A Prasanthi, P Srikanth, Gajjala Nirmala Joycee, Ch Bharadwaja

Abstract

Nowadays, association rule mining is one of the key issues in data mining for extracting information from databases. In this paper, we introduce a new measure, w-support, which does not require pre-assigned weights. It takes the quality of transactions into consideration using link-based models. W-support is a new measure of itemsets in databases with only binary attributes. The weights are derived entirely from the internal structure of the database, based on the assumption that good transactions consist of good items. Finally, a fast mining algorithm is given, and extensive experimental results are presented.
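One way such link-based weighting can be realised is with a HITS-style iteration in which transactions act as hubs and items as authorities, and an itemset is then scored by the total hub weight of the transactions that contain it. The sketch below follows that reading; the exact normalisation and algorithmic details of the paper may differ, and the sample transactions are hypothetical.

import numpy as np

transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"d"}]
items = sorted(set().union(*transactions))

hub = np.ones(len(transactions))            # one weight per transaction
for _ in range(50):
    # items inherit weight from the transactions that contain them ...
    auth = {i: sum(hub[t] for t, T in enumerate(transactions) if i in T) for i in items}
    # ... and transactions inherit weight back from the items they contain
    hub = np.array([sum(auth[i] for i in T) for T in transactions])
    hub /= np.linalg.norm(hub)              # keep the iteration bounded

def w_support(itemset):
    # total weight of the transactions containing `itemset`, normalised
    covered = sum(hub[t] for t, T in enumerate(transactions) if itemset <= T)
    return covered / hub.sum()

print(w_support({"a", "b"}))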
Full Paper

IJCST/41/4/A-1410
133 Techniques for the Recognition of Devnagari Script

Mandeep Kaur

Abstract

This paper describes a set of preprocessing, segmentation, feature extraction, classification, and matching techniques, which play a very important role in the recognition of characters. Feature extraction provides methods with whose help characters can be identified uniquely and with a high degree of accuracy. Many approaches have been proposed for pre-processing, feature extraction, learning/classification, and post-processing. The objective of this paper is to review these techniques so that this set of techniques can be appreciated.
Full Paper

IJCST/41/4/A-1411
134 DRPA: Dynamic Resource Provisioning Administration Model for Cloud Environments Using Gossip Protocol

T.Shampavi, R.Priyadharshini, S.Anjanaa, M.Sujitha

Abstract

Cloud computing is emerging as a new computing paradigm. Organizing resources at large scale is a key challenge for cloud environments, and we address the problem of dynamic resource administration for a large-scale cloud environment. Our contribution centres on a middleware architecture and on defining one of its key elements, a gossip protocol. The protocol distributes resources consistently among services, adapts to changes in load, and scales with the number of machines and sites. To deal with variability in resource capacity and application performance in the cloud, we forecast the job completion time distribution, which supports refined decisions in resource scheduling and allocation. The protocol first provides a solution without considering CPU and memory resources; it is then extended to offer an efficient outcome. We also extend the proposed work to fault tolerance in resource scheduling problems.
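One building block of gossip-based resource management can be sketched as repeated pairwise exchanges: in each round every machine contacts a random peer and the two average their loads, so the allocation drifts toward a uniform one. This is an illustrative sketch only; the services, CPU/memory constraints, and fault-tolerance extensions described in the paper are not reproduced, and the load values are hypothetical.

import random

loads = [90.0, 10.0, 40.0, 60.0, 25.0, 75.0]     # hypothetical per-machine load

def gossip_round(loads):
    order = list(range(len(loads)))
    random.shuffle(order)                        # each machine initiates once per round
    for i in order:
        j = random.randrange(len(loads))         # pick a random peer
        if i != j:
            avg = (loads[i] + loads[j]) / 2.0    # pairwise averaging step
            loads[i] = loads[j] = avg

for _ in range(20):
    gossip_round(loads)
print(loads)   # values converge toward the global mean (50.0 here)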
Full Paper

IJCST/41/4/A-1412
135 The Comparisons of Various Multi Proxy Multi Signature Schemes with Shared Verification

Ashwani Malhotra, Raman Kumar

Abstract

Tzeng et al. proposed a threshold multi-proxy multi-signature scheme with threshold verification. Recently, Hsu et al. pointed out that Tzeng et al.’s scheme was vulnerable to insider attacks and proposed an improvement to eliminate the identified security leak. It has since been shown that Hsu et al.’s improvement cannot resist the frame attack; that is, after intercepting a valid proxy signature, an adversary can change the original signers to himself and forge a proxy signature. We further compare and analyze all these schemes and propose our own scheme, which resists the security leaks of the various schemes.
Full Paper

IJCST/41/4/A-1413
136 A Novel Approach for Detection of Worm

Gannavarapu Ananth Kumar, K. Kavitha, G. Nageswara Rao

Abstract

Active worms spread in an automated fashion and can flood the Internet in a very short time. Modeling the spread of active worms can help us understand how they spread and how we can monitor and defend against their propagation effectively. Many real-world worms have caused notable damage on the Internet. The C-Worm has the capability to intelligently manipulate its scan traffic volume over time, thereby camouflaging its propagation from existing worm detection systems. We analyze the characteristics of the C-Worm and conduct a comprehensive comparison between its traffic and non-worm traffic. We observe that these two types of traffic are barely distinguishable in the time domain; however, their distinction is clear in the frequency domain, due to the recurring manipulative nature of the C-Worm. Motivated by these observations, we design a novel spectrum-based scheme to detect the C-Worm. Our scheme uses the Power Spectral Density (PSD) distribution of the scan traffic volume and its corresponding Spectral Flatness Measure (SFM) to distinguish C-Worm traffic from non-worm traffic.
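The frequency-domain idea can be sketched as follows: estimate the PSD of the scan-traffic time series and summarise it with the SFM, the ratio of the geometric to the arithmetic mean of the PSD; manipulated, recurring traffic concentrates power at a few frequencies and so yields a low SFM. The synthetic traces and any decision threshold here are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.signal import periodogram

t = np.arange(1024)
worm_like = 50 + 20 * np.sin(2 * np.pi * t / 64) + np.random.randn(1024)   # recurring scan pattern
normal = 50 + 5 * np.random.randn(1024)                                     # background traffic

def spectral_flatness(x):
    _, psd = periodogram(x)
    psd = psd[1:]                          # drop the DC bin
    return np.exp(np.mean(np.log(psd + 1e-12))) / np.mean(psd)

print(spectral_flatness(worm_like))   # low: power concentrated at a few frequencies
print(spectral_flatness(normal))      # closer to 1: power spread across the spectrum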
Full Paper

IJCST/41/4/A-1414
137 A Novel Approach for Protecting Network from Distributed Denial of Service Attack

S.Shanawaz Basha, B. Vijayalakshmi

Abstract

The nature of the threats posed by Distributed Denial of Service (DDoS) attacks on large networks, such as the Internet, demands effective detection and response methods. These methods must be deployed not only at the edge but also at the core of the network. DDoS attacks are creating major security problems. In this paper, a robust method is proposed to protect a web server from DDoS attacks utilizing information easily available at the server. Although the system as a whole cannot promise to detect a DDoS attack fully, an unprotected server can be shut down within a short time, during which normal network service is not possible. A new, efficient detection algorithm is used to find a flexible solution to DDoS attacks. The detection algorithm is organized into different modules. It finds out a DDoS attack very quickly with simple statistical analysis of the network traffic and with very little computational and memory overhead on the server. The algorithm is developed so that legitimate users are allowed while attackers are identified and removed. This behavioural aspect of DDoS attacks is not taken into account in many commercial solutions.
Full Paper

IJCST/41/4/A-1415
138 Professionally Maintaining the Protection & Integrity in Cloud Computing

M.Rajasekhar, P. Rajyalakshmi

Abstract

Cloud computing requires companies and individuals to transfer some or all control of computing resources to Cloud Service Providers (CSPs). Such transfers naturally pose concerns for company decision makers. A recent 2010 survey of potential cloud customers by Fujitsu Research Institute [1] found that 88% of potential cloud consumers are worried about who has access to their data and demand more awareness of what goes on in the back-end physical server. To overcome the above problems, we propose a novel approach, namely the Cloud Information Accountability (CIA) framework, based on the notion of information accountability; in addition, the ARIES algorithm is used to recover logging information.
Full Paper

IJCST/41/4/A-1416
139 A Novel Approach for Text Classification

S.Shanawaz Basha, L.Sunitha Rani

Abstract

Text Classification (TC) is the process of associating text documents with the classes considered most appropriate, thereby distinguishing topics such as particle physics from optical physics. A lot of research work has been done in this field, but there is a need to categorize a collection of text documents into mutually exclusive categories by extracting the concepts or features using a supervised learning paradigm and different classification algorithms. In this paper, a new Fuzzy Similarity Based Concept Mining Model (FSCMM) is proposed to classify a set of text documents into pre-defined Category Groups (CG) by training and preparing them at the sentence, document, and integrated corpora levels, along with feature reduction and ambiguity removal at each level to achieve high system performance. A Fuzzy Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted feature of the Integrated Corpora Feature Vector (ICFV) against the corresponding categories or classes. The model uses a Support Vector Machine Classifier (SVMC) to classify the training data patterns correctly into two groups, i.e., +1 and −1, thereby producing accurate and correct results. The proposed model works efficiently and effectively, with good performance and high-accuracy results.
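The final classification stage alone can be sketched as TF-IDF features fed to a linear SVM that separates documents into the two groups (+1 and −1). The toy documents and labels are hypothetical, and the fuzzy similarity analysis and concept-level feature reduction of FSCMM are not reproduced here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_docs = ["electron scattering cross section", "quark gluon plasma",
              "laser beam interference", "optical fibre refraction"]
train_labels = [+1, +1, -1, -1]          # +1 = particle physics, -1 = optics

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_docs, train_labels)
print(clf.predict(["photon interference in a fibre"]))   # expected: [-1]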
Full Paper

IJCST/41/4/A-1417
140 Optimized Search Results Based on Bin Rank and Hub Rank

Injamuri Mallikharjuna Rao, Medara Rambabu, K. Phani Kumar, S.Y.Pavan Kumar

Abstract

With the remarkable growth of information available to end users through the web, search engines have come to play an ever more significant role. Search engines sometimes give disappointing results for lack of any classification of the search, and the credibility of information should be a key metric for search results. Existing search algorithms, such as ObjectRank and personalized PageRank, provide high-quality, high-recall search in web databases, but they have huge computational overhead over the full graph and are not feasible at query time. Later, the BinRank system was developed, which approximates ObjectRank results. BinRank generates Materialized SubGraphs (MSGs) by partitioning all the terms in the corpus based on their co-occurrence, executing ObjectRank for each partition using the terms to generate a set of random-walk starting points, and keeping only those objects that receive non-negligible scores. But the query-time limitations of ObjectRank still persist. In the proposed system, we use HubRank, a query optimization and index management technique designed especially for graphs, as an alternative to ObjectRank, along with BinRank. A subgraph that contains all objects and links relevant to a set of related terms should have all the information needed to rank objects with respect to any one of these terms. This approach achieves better results than existing search techniques; the proposed system reduces query time and gains a significant performance boost by working over smaller graphs such as MSGs.
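The core ranking step can be sketched as a personalized PageRank (the random-walk computation underlying ObjectRank-style scores) restricted to a small materialised subgraph, with the restart mass placed on the objects matching the query term. The tiny graph below is hypothetical, and the BinRank partitioning and HubRank index structures themselves are not reproduced.

import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("paper1", "paper2"), ("paper2", "paper3"),
                  ("author1", "paper1"), ("author1", "paper3")])

# restart the random walk only at objects that contain the query term
base_set = {n: (1.0 if n == "paper1" else 0.0) for n in G}
scores = nx.pagerank(G, alpha=0.85, personalization=base_set)

for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))    # objects ranked with respect to the query term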
Full Paper

IJCST/41/4/A-1418
141 ANTI Phishing Framework with Visual Cryptographic and Dynamic Captcha Schemes

G. Pavan, N. Sameera

Abstract

Phishing is a routine method of online identity theft and virus spreading that uses forged web pages. For phishing detection and prevention, a methodology based on an anti-phishing image CAPTCHA validation scheme using visual cryptography was previously proposed; it prevents leakage of passwords and other confidential information to phishing websites. Prior systems used visual cryptographic schemes to counter phishing pages, where one secret CAPTCHA image share resides with the user and the other secret share resides on the server. During authentication, a genuine server forwards its share and the user forwards his share, resulting in secure access to the system via a reconstructed CAPTCHA. The reconstructed CAPTCHA, however, is always the same and is prone to character-recognition-based attacks. To overcome this problem, we propose to use visual cryptographic schemes to counter phishing and an interactive CAPTCHA to counter character-recognition-based attacks. By recording CAPTCHA solving time on a per-character basis, we propose a Single Slow Response Detection algorithm, a Two Consecutive Slow Responses Detection algorithm, and a Dynamic Detection Threshold algorithm, which enable a server to detect and reject third-party human attacks in ways not possible with existing CAPTCHAs. Combining visual cryptographic schemes, slow-response and dynamic-activity detection algorithms, and round-trip mechanisms using AJAX procedures, we offer a dynamic CAPTCHA that can thwart all possible authentication threats. A practical implementation of the proposed system validates our claim.
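The per-character timing checks named above can be sketched as simple threshold tests on the recorded solving times; the thresholds, window, and multiplier below are illustrative assumptions, and only the detection logic (not the CAPTCHA rendering, share handling, or AJAX round trips) is shown.

def single_slow_response(times, threshold=3.0):
    # one character answered suspiciously slowly
    return any(t > threshold for t in times)

def two_consecutive_slow(times, threshold=2.0):
    # two slow answers in a row
    return any(a > threshold and b > threshold for a, b in zip(times, times[1:]))

def dynamic_threshold(times, history, k=2.0):
    # flag if the mean per-character time exceeds k x the recent legitimate mean
    baseline = sum(history) / len(history)
    return sum(times) / len(times) > k * baseline

times = [0.8, 0.9, 3.5, 0.7]        # seconds per CAPTCHA character in this session
history = [0.7, 0.9, 0.8, 1.0]      # recent per-character times from legitimate users
print(single_slow_response(times), two_consecutive_slow(times),
      dynamic_threshold(times, history))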
Full Paper

IJCST/41/4/A-1419