INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST)-VOL IV ISSUE IV, VER. 3, OCT. TO DEC, 2013


International Journal of Computer Science and Technology Vol. 4 Issue 4, Ver. 3
  S.No. Research Topic Paper ID
    71 A Secured DSH Algorithm for MPI in Parallel Computing
M.Nagalakshmi, A.Veerabhadra Rao, K.Sathi Reddy

Abstract

Parallel computing on clusters of workstations and personal computers has very high potential. MPI has been proposed as an industrial standard for writing portable message-passing parallel programs, and we focus on MPI because it is one of the most popular communication protocols for parallel computing on clusters. Since this communication runs on public networks, security has become a primary concern. To tackle this security issue, we develop DSHA, a Digital Signature Hashing Algorithm for Message Passing Interface (MPI) implementations, to preserve the confidentiality of messages communicated among the nodes of a cluster. We integrate the DSHA algorithm into the MPICH2 library behind the standard MPI interface. MPI application programmers can fully configure the confidentiality services in MPICH2, because a secured configuration file in DSHA offers programmers the flexibility to choose cryptographic schemes and keys seamlessly. Our experiments show that the overhead incurred by the confidentiality services in DSHA is marginal for messages, and that this security overhead can be significantly reduced in DSHA on high-performance clusters.
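The paper integrates confidentiality into MPICH2 itself; as a rough illustration of the idea at the application level only, the sketch below (an assumption for illustration, not the authors' DSHA implementation) wraps point-to-point sends and receives of mpi4py with AES-GCM encryption from the cryptography package, using a pre-shared key.

```python
# Illustrative only: encrypt MPI messages at the application layer.
# Assumes mpi4py and the `cryptography` package; not the DSHA/MPICH2 code.
import os
from mpi4py import MPI
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

comm = MPI.COMM_WORLD
KEY = b"\x01" * 32          # shared 256-bit key, pre-distributed (assumption)
aead = AESGCM(KEY)

def secure_send(data: bytes, dest: int, tag: int = 0) -> None:
    nonce = os.urandom(12)                      # fresh nonce per message
    comm.send(nonce + aead.encrypt(nonce, data, None), dest=dest, tag=tag)

def secure_recv(source: int, tag: int = 0) -> bytes:
    blob = comm.recv(source=source, tag=tag)
    nonce, ct = blob[:12], blob[12:]
    return aead.decrypt(nonce, ct, None)        # raises if the message was tampered with

if __name__ == "__main__":
    if comm.Get_rank() == 0 and comm.Get_size() > 1:
        secure_send(b"hello from rank 0", dest=1)
    elif comm.Get_rank() == 1:
        print(secure_recv(source=0))
```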
Full Paper

IJCST/44/3/D-1802
    72 A Survey on Semantic Web Search and Technologies
K.Palaniammal, Dr.S. Vijayalakshmi

Abstract

This paper presents an overview of the existing literature on semantic web search and technologies. Semantic search is a field of Web search that is distinct from traditional information retrieval methods. The purpose of semantic search is to help users express their search intentions and to help search engines understand the meaning of users’ queries in terms of semantic web technologies. We understand these concepts in a broad sense, in that semantic web search refers to using semantic technologies for information retrieval. In this paper, semantic web search methods and engines are surveyed, analyzed and compared from some common perspectives.
Full Paper

IJCST/44/3/D-1803
    73 Machine Learning Based System to Predict Online Punjabi Signature Verification
Ankita Wadhawan, Dinesh Kumar

Abstract

Signature is a behavioral trait used in the area of biometric authentication. On-line signature verification requires a digitizing tablet and a stylus connected to the Universal Serial Bus (USB) port of a computer. Signatures are collected with the help of a pressure-sensitive pen and a tablet with a working area of 4 x 3 inches. Although a number of systems exist for on-line signature verification of foreign languages, no work has been done on on-line signature verification of Indian languages. This paper deals with on-line verification of Punjabi signatures. The signatures of each individual, written in the Punjabi language, are stored in binary format in the database as a text file. The feature vector of each signature consists of minimum pressure, average pressure, maximum pressure, signature length, total signing time and total pen-down count. The input Punjabi signatures are recognized using support vector machines (SVM), and the performance was explored using a radial basis function (RBF) kernel. Experimental results show an incremental growth in accuracy as more and more features are combined. The proposed system achieves an accuracy of 85% when all the features are combined.
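A minimal sketch of the classification step described above, assuming scikit-learn and a hypothetical feature matrix holding the six listed features (min/avg/max pressure, signature length, signing time, pen-down count); the real feature extraction from tablet data is not shown.

```python
# Illustrative sketch: RBF-kernel SVM over six signature features.
# X, y and the train/test split are placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [min_pressure, avg_pressure, max_pressure,
#            signature_length, signing_time, pen_down_count]
X = np.random.rand(200, 6)             # placeholder feature vectors
y = np.random.randint(0, 2, size=200)  # 1 = genuine, 0 = forgery (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # RBF kernel as in the abstract
clf.fit(scaler.transform(X_tr), y_tr)

print("verification accuracy:", clf.score(scaler.transform(X_te), y_te))
```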

Full Paper

IJCST/44/3/D-1804
    74 Network Designing based on OSPF & EIGRP Routing Protocol & Their Comparison Using Network Simulator
Shweta Rathour, Kanika Singhal, Vibhor Rathour

Abstract

Computer simulation is the tool of choice for developing and testing new technologies and techniques. It is also commonly used to help teach and train students and other professionals. Our approach is based on a network simulation program using the GNS3 (IOS-based) simulator, whose purpose is to develop, test and compare current networking protocols used to build a network design. The first routing protocol implemented is Open Shortest Path First (OSPF) [1], the most commonly used interior gateway protocol (IGP), but the network simulator could be updated and its functionality expanded by adding other network layer protocols, as well as protocols of the other layers of the Open Systems Interconnection (OSI) [2] Reference Model, as they are implemented. The other routing protocol used for the network design is the Enhanced Interior Gateway Routing Protocol (EIGRP) [4]. It has some characteristics similar to those of a link-state routing protocol. It is an efficient, although proprietary, solution for networking large environments, as it scales well. Its ability to scale is, like OSPF's, dependent on the design of the network.
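OSPF is a link-state protocol whose route computation is Dijkstra's shortest-path-first algorithm over the link-state database. The sketch below is a generic illustration of that computation on a hypothetical weighted topology, not part of the GNS3 work described in the paper.

```python
# Generic shortest-path-first (Dijkstra) computation, as OSPF performs
# over its link-state database. Topology and link costs are made up.
import heapq

def spf(graph: dict, source: str) -> dict:
    """Return the lowest cost from `source` to every reachable router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neigh, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neigh, float("inf")):
                dist[neigh] = new_cost
                heapq.heappush(heap, (new_cost, neigh))
    return dist

topology = {                              # hypothetical OSPF area
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
print(spf(topology, "R1"))   # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}
```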

Full Paper

IJCST/44/3/D-1805
    75 CSA: A New Metaheuristic Approach For Data Clustering
Soumya Sahoo, Bijayalaxmi Panda, Sovan kumar Pattnaik

Abstract

Data mining, or knowledge discovery, is the process of analyzing data from different dimensions and summarizing it into useful information. It allows users to analyze data from many different angles, categorize it, and summarize the relationships between items. In computational intelligence, various algorithms and hybridized applications have been applied in the data mining world to obtain optimized solutions. Grouping and classification of data are important parts of data mining. The K-means algorithm is the most commonly used partitional clustering algorithm because it can be easily implemented. Several researchers have demonstrated and proven the superior performance of Cuckoo Search (CS) in many spheres of engineering and computer science. We therefore apply the cuckoo search algorithm to clustering and show how it avoids being trapped in a locally optimal solution.
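A highly simplified sketch of cuckoo-search-style clustering, given purely as an illustration of the idea: candidate solutions (nests) are sets of k centroids scored by the K-means objective, new "eggs" are generated by random-walk perturbations, and the worst nests are abandoned each generation. The step size, abandonment rate and data below are assumptions, not the paper's settings.

```python
# Toy cuckoo-search clustering: nests = candidate centroid sets,
# fitness = within-cluster sum of squared distances. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2)) + rng.choice([-3, 0, 3], size=(300, 1))  # toy data
K, N_NESTS, GENERATIONS, P_ABANDON = 3, 15, 100, 0.25

def sse(centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.sum(d.min(axis=1) ** 2))      # K-means objective

nests = [X[rng.choice(len(X), K, replace=False)] for _ in range(N_NESTS)]
scores = [sse(c) for c in nests]

for _ in range(GENERATIONS):
    for i in range(N_NESTS):
        # New egg via a random-walk perturbation (stand-in for a Levy flight).
        candidate = nests[i] + 0.3 * rng.standard_normal((K, 2))
        j = rng.integers(N_NESTS)                  # random nest to compare with
        if sse(candidate) < scores[j]:
            nests[j], scores[j] = candidate, sse(candidate)
    # Abandon a fraction of the worst nests and re-seed them randomly,
    # which is what helps the search escape local optima.
    for i in np.argsort(scores)[-int(P_ABANDON * N_NESTS):]:
        nests[i] = X[rng.choice(len(X), K, replace=False)]
        scores[i] = sse(nests[i])

best = nests[int(np.argmin(scores))]
print("best SSE:", min(scores), "\ncentroids:\n", best)
```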

Full Paper

IJCST/44/3/D-1806
    76 Reactive and Proactive Routing Algorithm Selection Through Simulation of Various Parameters in Applications Based on MANETS
Neetu Sharma, Rajkumar Mahirania, Chandra Prakash Verma

Abstract

A framework is presented for the use of proactive and reactive protocols in mobile ad hoc network scenarios. There has been rapid development in the field of mobile networks and mobile computing because of the widespread use of mobile devices. MANETs have very promising uses in emergency scenarios such as military operations and disaster-relief operations, where a communication network is needed immediately after some major event, and in temporary settings such as a conference or seminar at a new location where no prior network infrastructure exists and an alternative solution is needed. It is therefore beneficial to select the routing protocol on the basis of the application. In this paper, this selection process is analyzed for the reactive protocol AODV and the proactive protocol DSDV on the basis of the following parameters: traffic, number of nodes, packet delivery ratio (PDR), packet loss fraction (PLF), pause time and maximum speed.
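The two delivery metrics used in the comparison are simple ratios over the simulation trace; a minimal sketch of how they might be computed from packet counters is shown below (the counter values are placeholders, not results from the paper).

```python
# Packet delivery ratio (PDR) and packet loss fraction (PLF) from counters.
# The sent/received numbers are placeholders, not results from the paper.
def pdr(received: int, sent: int) -> float:
    return received / sent if sent else 0.0

def plf(received: int, sent: int) -> float:
    return (sent - received) / sent if sent else 0.0

sent, received = 1000, 937            # e.g. counts parsed from a simulator trace
print(f"PDR = {pdr(received, sent):.3f}, PLF = {plf(received, sent):.3f}")
```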

Full Paper

IJCST/44/3/D-1807
    77 Trust Energy Aware Routing Protocol Implementation in Wireless Sensor Network
A. Sivaprasad, T.Nagamani

Abstract

Wireless Sensor Networks (WSNs) are an emerging technology with great potential to be employed in critical situations like battlefields and in commercial applications such as building and traffic surveillance, habitat monitoring, smart homes and many more scenarios. One of the major challenges wireless sensor networks face today is security. In this paper, we propose a trust- and energy-aware, location-based routing protocol called Trust and Energy aware Routing (TER). TER uses trust values, energy levels and location information in order to determine the best paths towards a destination. The protocol achieves balancing of traffic load and energy, and generates trustworthy paths when all the proposed metrics are taken into consideration. Other metrics can easily be integrated into the protocol.
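As a rough illustration (not the paper's exact metric), a next hop can be scored by a weighted combination of neighbour trust, residual energy and geographic progress towards the destination; the weights and the neighbour table below are assumptions made for the sketch.

```python
# Illustrative trust/energy/location next-hop selection; weights are made up.
import math

def progress(node_pos, dest_pos, self_pos):
    """How much closer the neighbour is to the destination than we are."""
    d = math.dist
    return max(0.0, d(self_pos, dest_pos) - d(node_pos, dest_pos))

def pick_next_hop(neighbours, self_pos, dest_pos, w=(0.4, 0.3, 0.3)):
    wt, we, wp = w
    max_prog = max(progress(n["pos"], dest_pos, self_pos) for n in neighbours) or 1.0
    def score(n):
        return (wt * n["trust"]                      # trust value in [0, 1]
                + we * n["energy"]                   # normalised residual energy
                + wp * progress(n["pos"], dest_pos, self_pos) / max_prog)
    return max(neighbours, key=score)

neighbours = [                                       # hypothetical neighbour table
    {"id": "A", "trust": 0.9, "energy": 0.4, "pos": (10, 12)},
    {"id": "B", "trust": 0.6, "energy": 0.9, "pos": (14, 15)},
    {"id": "C", "trust": 0.3, "energy": 0.8, "pos": (18, 18)},
]
print(pick_next_hop(neighbours, self_pos=(5, 5), dest_pos=(20, 20))["id"])
```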

Full Paper

IJCST/44/3/D-1808
    78 Enhancing the Throughput in Mobile Backbone Networks
G.Satish, P. Prem Kumar

Abstract

The main aim of this paper is a hierarchical communication framework that combines mobile backbone nodes, which have superior mobility and communication capability, with regular nodes, which are constrained in mobility and communication ability. This is achieved, as the results in this paper show, by using new algorithms, proposed by Emily M. Craparo, Jonathan P. How and Eytan Modiano, for optimizing throughput in mobile backbone networks. An important quantity in mobile backbone networks is the number of regular nodes that can be successfully assigned to mobile backbone nodes at a given throughput level. This paper develops a novel technique for maximizing this quantity in networks of fixed regular nodes using mixed-integer linear programming (MILP). The MILP-based algorithm provides a significant reduction in computation time compared to existing methods and is computationally tractable for problems of moderate size. An approximation algorithm appropriate for large-scale problems is also developed. The paper presents a theoretical performance guarantee for the approximation algorithm and also demonstrates its empirical performance. Finally, the mobile backbone network problem is extended to include mobile regular nodes, and exact and approximate solution algorithms are presented for this extension.

Full Paper

IJCST/44/3/D-1809
    79 A New access to Confidential Storage of Data Spotlight: Slicing
SK.Salamuddeen, M.Rambabu, P.Suresh Babu

Abstract

Several anonymization techniques, such as generalization and bucketization, have been designed for privacy-preserving microdata publishing. Recent work has shown that generalization loses a considerable amount of information, especially for high-dimensional data. Bucketization, on the other hand, does not prevent membership disclosure and does not apply to data that lack a clear separation between quasi-identifying attributes and sensitive attributes. In this paper, we present a novel technique called slicing, which partitions the data both horizontally and vertically. We show that slicing preserves better data utility than generalization and can be used for membership disclosure protection. Another important advantage of slicing is that it can handle high-dimensional data. We show how slicing can be used for attribute disclosure protection and develop an efficient algorithm for computing sliced data that obey the ℓ-diversity requirement. Our workload experiments confirm that slicing preserves better utility than generalization and is more effective than bucketization in workloads involving the sensitive attribute. Our experiments also demonstrate that slicing can be used to prevent membership disclosure.
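A minimal sketch of the slicing idea on a toy table, assuming pandas: attributes are partitioned vertically into column groups and tuples horizontally into buckets, and each column group's values are randomly permuted within each bucket to break the linkage between quasi-identifiers and the sensitive attribute. The grouping and bucket size are assumptions for illustration, not the paper's algorithm.

```python
# Toy illustration of slicing: vertical column groups + horizontal buckets,
# with per-bucket permutation of column groups. Not the paper's algorithm.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "Age":     [23, 27, 35, 52, 59, 61, 44, 30],
    "Zipcode": [47906, 47906, 47905, 47905, 47302, 47302, 47304, 47304],
    "Disease": ["flu", "cold", "cancer", "flu", "cancer", "cold", "flu", "cold"],
})

column_groups = [["Age", "Zipcode"], ["Disease"]]   # vertical partition
bucket_size = 4                                     # horizontal partition

sliced = df.copy()
for start in range(0, len(df), bucket_size):
    idx = sliced.index[start:start + bucket_size]
    for cols in column_groups:
        # Permute this column group's rows independently within the bucket.
        perm = rng.permutation(len(idx))
        sliced.loc[idx, cols] = sliced.loc[idx, cols].to_numpy()[perm]

print(sliced)
```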

Full Paper

IJCST/44/3/D-1810
    80 Class Timetable Scheduling with Genetic Algorithm
Rajaram H. Ambole, Dinesh B. Hanchate

Abstract

A Genetic Algorithm is a population-based heuristic method, extensively used in scheduling, that is applied to constraint optimization problems. In a Genetic Algorithm we first generate random solutions, called the population, and then try to generate feasible solutions with the help of operations such as selection, crossover and mutation. Class timetable scheduling is a common scheduling problem in which a set of events must be arranged in the available timeslots under limited resources and constraints. Selection is an important function in genetic algorithms, as it decides the quality of the offspring generated: it is the way good solutions are brought into and bad solutions are removed from the population. In this work, we consider a small instance of the timetable problem which requires scheduling 100 events in 45 timeslots. We used tournament selection II and tournament selection V to check the quality of the offspring generated, and found considerable improvement in the solutions generated with tournament selection V.
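Tournament selection with tournament sizes 2 and 5 (the "II" and "V" variants compared above) can be sketched as below; the fitness function and population are placeholders, not the timetable encoding used in the paper.

```python
# Tournament selection: pick the fittest of k randomly drawn individuals.
# Population and fitness are placeholders for the timetable chromosomes.
import random

def tournament_select(population, fitness, k):
    """Return one parent: the best of k individuals sampled at random."""
    contestants = random.sample(population, k)
    return min(contestants, key=fitness)      # lower = fewer constraint violations

population = [[random.randrange(45) for _ in range(100)] for _ in range(50)]
fitness = lambda chrom: len(chrom) - len(set(chrom))   # toy penalty only

parent_t2 = tournament_select(population, fitness, k=2)   # tournament selection II
parent_t5 = tournament_select(population, fitness, k=5)   # tournament selection V
print(fitness(parent_t2), fitness(parent_t5))
```

A larger tournament size applies stronger selection pressure, which is one way to read the improvement reported with tournament selection V.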

Full Paper

IJCST/44/3/D-1811
    81 Boolean Revival for Deft Protracted
M.Srivani, K.Vasanth Kumar, P.Suresh Babu

Abstract

This paper presents a modern research model for visually extracting patterns and relations from multidimensional binary data using monotone Boolean functions. With the growth of massive digital data archives, which are not necessarily organized in any order, the twin and complementary processes of information retrieval and data mining have emerged together as a particularly important discipline within the information sciences. The object of information retrieval is to automatically search a data archive in order to respond to a user’s query. The object of data mining, on the other hand, is to automatically process a data archive in order to find patterns that represent knowledge or, equivalently, information interesting to the user (not necessarily in response to a targeted query). Information retrieval and data mining invoke multidisciplinary techniques, including those from artificial intelligence, statistics, machine learning, pattern analysis, and others.

Full Paper

IJCST/44/3/D-1812
    82 Diminishment BFS in Finding Multi-keyword
P.M.S.K. Prasad, V. Sangeeta, P. Suresh Babu

Abstract

Peer-to-peer multi-keyword searching requires distributed intersection/union operations across wide-area networks, incurring a large traffic cost. Existing schemes commonly utilize Bloom Filter (BF) encoding to effectively reduce the traffic cost during the intersection/union operations. In this paper, we address the problem of optimizing the settings of a BF. We show, through mathematical proof, that the optimal setting of a BF in terms of traffic cost is determined by the statistical information of the involved inverted lists, not by the minimized false positive rate as claimed by previous studies. Through numerical analysis, we demonstrate how to obtain optimal settings. To better evaluate the performance of this design, we conduct comprehensive simulations on the TREC WT10G test collection and query logs of a major commercial web search engine. Results show that our design significantly reduces the search traffic and latency of existing approaches.
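For reference, a minimal Bloom filter sketch is shown below, together with the textbook parameter choice k = (m/n)·ln 2 that minimizes the false positive rate for m bits and n elements; the paper's point is that, for traffic cost, the best setting also depends on the statistics of the inverted lists, which this sketch does not capture.

```python
# Minimal Bloom filter with the standard false-positive-minimizing k.
# Illustrative only; the paper optimizes BF settings for traffic cost instead.
import hashlib
import math

class BloomFilter:
    def __init__(self, m_bits: int, n_expected: int):
        self.m = m_bits
        self.k = max(1, round((m_bits / n_expected) * math.log(2)))
        self.bits = bytearray(m_bits)

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter(m_bits=8000, n_expected=1000)
for doc_id in ("doc17", "doc42", "doc99"):
    bf.add(doc_id)
print("doc42" in bf, "doc7" in bf)   # True, and almost certainly False
```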

Full Paper

IJCST/44/3/D-1813
    83 Artifacts Reduction in JPEG Images Using APDCT Filter
Puneet Malhotra

Abstract

When an image is compressed at a very low bit rate, the image reconstructed from JPEG compression shows visible degradation near the block boundaries. An improved post-processing approach applied in both the spatial and frequency domains is proposed in this paper. In the spatial domain we apply the APDCT low-pass, anti-aliasing digital filter [17] to restrict the high-frequency components, and then in the frequency domain we implement the approach proposed in [14]. Experimental results show that the proposed approach provides satisfactory performance compared to the results obtained by implementing the approach proposed in [14].

Full Paper

IJCST/44/3/D-1814
    84 Ranking Model Adaptation for Domain-Specific Search
T Tulasi Gangadhar, Kamalakar Meduri, Sadhana Kodali

Abstract

With the explosive emergence of vertical search domains, applying a broad-based ranking model directly to different domains is no longer desirable due to domain differences, while building a unique ranking model for each domain is both laborious in labeling data and time-consuming in training models. In this paper, we address these difficulties by proposing a regularization-based algorithm called ranking adaptation SVM (RA-SVM), through which we can adapt an existing ranking model to a new domain, so that the amount of labeled data and the training cost are reduced while the performance is still guaranteed. Our algorithm only requires predictions from the existing ranking models, rather than their internal representations or the data from auxiliary domains. In addition, we assume that documents similar in the domain-specific feature space should have consistent rankings, and add some constraints to control the margin and slack variables of RA-SVM adaptively. Finally, a ranking adaptability measurement is proposed to quantitatively estimate whether an existing ranking model can be adapted to a new domain. Experiments performed over Letor and two large-scale datasets crawled from a commercial search engine demonstrate the applicability of the proposed ranking adaptation algorithms and the ranking adaptability measurement.

Full Paper

IJCST/44/3/D-1815
    85 Successful Pattern Invention for Text Mining
S. Thillai Nayagi, N.Tulasi Radha, P.Suresh Babu

Abstract

This paper describes applications of pattern discovery to text mining. Mining useful patterns in text documents is a main objective of data mining techniques, but the effective use and updating of discovered patterns is still a critical research problem. Most existing text mining methods adopt term-based approaches, which suffer from the problems of polysemy and synonymy. For many years people have held the hypothesis that pattern-based approaches should outperform term-based approaches, but many experiments do not support this hypothesis. This paper presents an innovative and effective pattern discovery technique which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on the Reuters Corpus Volume 1 data collection and Text Retrieval Conference (TREC) filtering topics are used to evaluate the proposed solution.

Full Paper

IJCST/44/3/D-1816
    86 Search Hop Count Routing in Peer-to-Peer Networks With Performance Guarantees
L.C.Usha Maheswari, M.Jyothi, P.V.S. Srinivas

Abstract

Unstructured Peer-to-Peer (P2P) file-sharing networks are popular in the mass market. As the peers participating in unstructured networks interconnect randomly, they rely on flooding query messages to discover objects of interest and thus introduce remarkable network traffic. Empirical measurement studies indicate that the peers in P2P networks have similar preferences, and recent work has proposed unstructured P2P networks that organize participating peers by exploiting their similarity. The resultant networks, however, may not perform searches efficiently and effectively, because existing overlay topology construction algorithms often create unstructured P2P networks without performance guarantees.

Full Paper

IJCST/44/3/D-1817
    87 A Secure Intrusion Detection System Against DDOS Attack in Wireless Mobile Ad-Hoc Network
Sadhu Swathi

Abstract

In recent years, the Wireless Mobile Ad-Hoc Network (MANET) has come to play a vital role. Security is a major challenge for wireless mobile ad-hoc networks, because no central controller exists. Each node in a MANET is free to move independently in any direction, and will therefore frequently change its links to other devices. Each node must forward traffic unrelated to its own use, and so must act as a router. The primary challenge in building a MANET is equipping each device to continuously maintain the information required to properly route traffic. These networks may operate by themselves or may be connected to the larger Internet. Wireless sensor networks are a form of ad hoc network, so the problems faced by sensor networks are obviously also faced by MANETs. Deploying sensor nodes in an unattended environment increases the chances of various attacks. There are many security attacks in MANETs, and DDoS is one of them. In this paper, we mainly aim to observe the effect of DDoS on routing load, packet drop rate and end-to-end delay. Using these and other parameters, we build a secure IDS to detect this kind of attack and block it.

Full Paper

IJCST/44/3/D-1818
    88 Variegated Divulging Fusion for Image Procurement
A Gopala Rao, U V Chandra Sekhar

Abstract

By adapting to light in any viewing condition, the human visual system can capture a wide dynamic range of irradiance (about 14 orders of magnitude in log units), while the dynamic range of the CCD or CMOS sensors in most of today’s cameras does not cover the perceptual range of real scenes. It is important in many applications to capture a wide range of the irradiance of a natural scene and store it in each pixel. In CG applications, a High Dynamic Range Image (HDRI) is widely used for high-quality rendering with image-based lighting. Nowadays HDR imaging technologies have been developed and some high dynamic range sensors are commercially available. They are used for in-vehicle cameras, surveillance in night vision, camera-guided aircraft docking, high-contrast photo development, robot vision, etc. In the last decade, many techniques have been proposed to capture the HDRI based on the multiple-exposure principle, in which the HDRI is constructed by merging several photographs shot with multiple exposures. Many of these techniques assume that the scene is static while the photographs are taken.
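A minimal sketch of the multiple-exposure principle mentioned above, assuming numpy, linear (already radiometrically calibrated) images and known exposure times; real pipelines also recover the camera response curve and handle scene motion and misalignment, which this sketch ignores.

```python
# Toy multiple-exposure HDR merge: weighted average of radiance estimates.
# Assumes linear images and known exposure times; not the paper's method.
import numpy as np

def merge_hdr(images, exposure_times):
    """images: list of float arrays in [0, 1]; exposure_times: seconds."""
    radiance_sum = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        # Trust mid-tone pixels most; down-weight under/over-exposed ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        radiance_sum += w * (img / t)
        weight_sum += w
    return radiance_sum / np.maximum(weight_sum, 1e-6)

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 50.0, size=(4, 4))            # "true" radiance (toy)
times = [1 / 30, 1 / 125, 1 / 500]
shots = [np.clip(scene * t, 0.0, 1.0) for t in times]  # simulated exposures
print(merge_hdr(shots, times))
```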

Full Paper

IJCST/44/3/D-1819
    89 Software Project Planning Using Ant Colony Optimization (SPP-ACO)
Kishor N Vitekar, Dinesh B Hanchate

Abstract

The Software Project Scheduling Problem (SPSP) is the problem of scheduling tasks and employees while satisfying soft and hard constraints. A number of techniques have been designed to solve the SPSP, including the Genetic Algorithm (GA) and Tabu Search (TS), both of which are metaheuristic techniques. Ant Colony Optimization (ACO) is another metaheuristic technique that can be used to solve the SPSP. To solve the SPSP, heuristic information as well as pheromone values are used. The heuristic information is calculated using the previous solution, effort, importance of tasks and allocation of employees; six different strategies are used to calculate it. To solve the SPSP, a construction graph is created which includes all possible paths from one task to another. The pheromone value is based on the probability of the most frequently selected path in the construction graph. The experimental results show that the ACO approach gives the best results. The proposed algorithm is very effective and efficient, as well as very promising, for solving the SPSP, and it achieves more accurate results.
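The core ACO step is the probabilistic edge choice combining pheromone and heuristic information, p_ij ∝ τ_ij^α · η_ij^β, followed by pheromone evaporation and deposit. The sketch below is a generic illustration of that step; the values of α, β, the evaporation rate and the task graph are assumptions, not the paper's settings.

```python
# Generic ACO edge selection and pheromone update; parameters are made up.
import random

ALPHA, BETA, RHO = 1.0, 2.0, 0.1   # pheromone weight, heuristic weight, evaporation

def choose_next(current, candidates, pheromone, heuristic):
    """Pick the next task with probability proportional to tau^alpha * eta^beta."""
    weights = [(pheromone[(current, c)] ** ALPHA) * (heuristic[(current, c)] ** BETA)
               for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def update_pheromone(pheromone, path, quality):
    """Evaporate everywhere, then deposit on the edges of a good solution."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - RHO)
    for edge in zip(path, path[1:]):
        pheromone[edge] += quality

tasks = ["t1", "t2", "t3", "t4"]
pheromone = {(a, b): 1.0 for a in tasks for b in tasks if a != b}
heuristic = {(a, b): random.uniform(0.1, 1.0) for a in tasks for b in tasks if a != b}

path = ["t1"]
while len(path) < len(tasks):
    path.append(choose_next(path[-1], [t for t in tasks if t not in path],
                            pheromone, heuristic))
update_pheromone(pheromone, path, quality=1.0)
print(path)
```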

Full Paper

IJCST/44/3/D-1820
    90 Cascade Participation Index With CSTP Miner
G.Vani, V.Durgaprasadarao, P.Suresh Babu

Abstract

The trajectory of any dynamic object is associated with space and time; the idea of motion would turn meaningless if time were separated from space or vice versa. A novel cascade of space and time, named spatiotemporal (ST), is considered in the domain of data mining. Partially ordered sets from a given Boolean ST set are considered to mine patterns. A Cascade Participation Index (CPI) is computed through bottleneck analysis over the data set to measure user interest. Based on this value and the directed graph representation, the filtering of irrelevant Cascade Spatiotemporal Patterns (CSTPs) can be done with the miner nested in the process. The results are to be extended to real data sets to validate the content.

Full Paper

IJCST/44/3/D-1821
    91 Optimized Caching Schemes for Consistency Maintenance in Hybrid P2P Networks
P.Prem Chand, K.Venkatesh

Abstract

Nowadays, distributed data sharing is a main aspect of network communication, and Peer-to-Peer networks were introduced for distributed data sharing. Peer-to-peer networks are broadly divided into two categories: structured Peer-to-Peer networks, which maintain a regular network topology, and unstructured Peer-to-Peer networks, which allow arbitrary topology construction. Because of the advantages and disadvantages of both structured and unstructured peer-to-peer networks, hybrid Peer-to-Peer systems have been developed. In hybrid peer-to-peer networks, an Adaptive File Consistency Algorithm (AFCA) is used along with an Optimized File Caching (OFC) algorithm to provide robust services even during hot file requests. These techniques have high computation overhead and implementation cost, and file consistency between a file and its replica is indispensable to file replication. We therefore propose an update/polling method in AFCA that adapts to time-varying file requests by considering specialized replacement policies, instead of OFC's approach of polling file owners at random intervals, because of the non-static dedicated caches, where peers may leave the network without constraints. Our experimental results show lower query delay and high cache hit ratios, and the approach effectively relieves the over-caching problem.
Full Paper

IJCST/44/3/D-1822
    92 Reversible Data Hiding Scheme Using Binary Feature Sequence
K.Naveen, Y. Sowjanya Kumari, Dr.P.Harini

Abstract

Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This is a very important property whenever serious concerns over image quality and artifact visibility arise. We use general principles as guidelines for designing efficient and high-capacity lossless embedding methods for the three most common image format paradigms: lossy or transform formats (JPEG), palette formats (GIF, PNG), and raw, uncompressed formats (BMP). We present a novel lossless (reversible) data-embedding technique that enables the exact recovery of the original host signal upon extraction of the embedded information. We generalize the method in our previous paper, using a decompression algorithm as the coding scheme for embedding data, and prove that the generalized codes can reach the rate–distortion bound as long as the compression algorithm reaches entropy. In our proposed system we improve three reversible data hiding (RDH) schemes that use binary feature sequences as covers: an RS scheme for spatial images, one scheme for JPEG images, and a pattern substitution scheme for binary images. We also apply this coding method to one scheme that uses histogram shifting (HS), by modifying the HS manner.

Full Paper

IJCST/44/3/D-1823
    93 Realistic Software Applications for Testing Longer Sequences
M.Pandu Rangeswara Rao, Y.Chitti Babu, Dr.P.Harini

Abstract

Software testing benchmarks with slightly longer sequences improve systematic results. Internal states are present not only in object-oriented software but also in procedural software (e.g., static variables in C programs). In the literature there are many techniques to test this type of software; however, to the best of our knowledge, the properties related to the choice of the length of these sequences have received only little attention. This paper makes an important contribution by rigorously supporting the importance of the test sequence length and by providing longer software sequences for testing procedures in realistic settings. In this paper we propose realistic software applications for testing longer sequences, and experimental results show efficient data sequences in the tested frequencies.

Full Paper

IJCST/44/3/D-1824
    94 User Specific Image Searching
Chelli Sunil Kumar, Y Sowjanya Kumari, Dr. P Harini

Abstract

Increasingly developed social sharing websites, like Flickr and YouTube, allow users to create, share, annotate and comment on media. The large-scale user-generated metadata not only facilitate users in sharing and organizing multimedia content, but also provide useful information to improve media retrieval and management. Personalized search is one such example, where the web search experience is improved by generating the returned list according to the user's search intents. In this paper, we exploit social annotations and propose a novel framework that simultaneously considers user and query relevance to learn personalized image search. The basic premise is to embed the user preference and query-related search intent into user-specific topic spaces. Since the users’ original annotations are too sparse for topic modeling, we need to enrich the users’ annotation pools before constructing user-specific topic spaces. The proposed framework contains two components: (1) a Ranking-based Multi-correlation Tensor Factorization model is proposed to perform annotation prediction, which is considered as the users’ potential annotations for the images; (2) we introduce User-specific Topic Modeling to map the query relevance and user preference into the same user-specific topic space. For performance evaluation, two resources involving users’ social activities are employed. Experiments on a large-scale Flickr dataset demonstrate the effectiveness of the proposed method.

Full Paper

IJCST/44/3/D-1825
    95 Object Centred Logging Mechanism Approach for Data and Policies in Cloud Computing
Neha Sharma, Amandeep Kaur

Abstract

Cloud Computing is internet-based computing whereby information, IT resources and software applications are provided to computers and mobile devices on demand. It enables highly scalable services to be easily consumed over the Internet on an as-needed basis. As cloud computing expands rapidly, data security issues in the cloud are a major concern. To prevent confidential data leakage and loss of privacy in the cloud, this paper proposes an object-centred mechanism in which authentication is performed first, and only authorized users can access data through an intermediary CSP. This provides centralized access to data with more security and distributed database authentication. The mechanism increases security, as the CSP obtains data from the cloud only on user and owner request.

Full Paper

IJCST/44/3/D-1826
    96 Consistency Protocol for DNS
K.Sri harsha, V.V. Syam

Abstract

Effective caching in DNS is critical to its performance and scalability. Existing DNS only supports weak cache consistency by using the Time-To-Live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: (1) to quickly respond to and handle exceptional incidents, such as sudden and dramatic Internet failures caused by natural and human disasters, (2) to adapt to increasingly frequent changes of IP addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and (3) to provide fine-grained controls for content delivery services to balance server load distributions in a timely manner. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we propose a proactive DNS cache update protocol, called DNScup, running as middleware in DNS nameservers, to provide strong cache consistency for DNS. The core of DNScup is a dynamic lease technique to keep track of the local DNS nameservers whose clients need cache coherence to avoid losing service availability.

Full Paper

IJCST/44/3/D-1827
    97 MK-Auth – Matrix Code Challenge Response Authentication for Web Services
Mubarak Mullani, D. V. Kodavade

Abstract

MK-Auth is an optical challenge-response authentication solution for web services, where a camera phone is used as a secure hardware token together with a web camera to provide fast, simple, highly available and secure authentication. MK-Auth is suitable for authentication in central identity management systems, such as OpenID. Previous algorithms used for authentication employ DES for encryption of data; in recent years, that cipher has been superseded by the Advanced Encryption Standard (AES). We demonstrate the performance of the MK-Auth algorithm using case studies.
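As a rough illustration of an AES-based challenge-response exchange of the kind described (not the MK-Auth protocol itself), the sketch below assumes a pre-shared key and uses AES-GCM from the cryptography package: the server issues a random challenge, the phone returns it encrypted, and the server verifies it by decrypting.

```python
# Illustrative AES-GCM challenge-response; not the actual MK-Auth protocol.
# Assumes a key already shared between the server and the camera phone.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

shared_key = AESGCM.generate_key(bit_length=256)   # provisioned out of band
aead = AESGCM(shared_key)

# 1. Server issues a fresh random challenge (e.g. shown as a matrix code).
challenge = os.urandom(16)

# 2. Phone "responds" by encrypting the challenge with the shared key.
nonce = os.urandom(12)
response = nonce + aead.encrypt(nonce, challenge, b"mk-auth-demo")

# 3. Server decrypts the response and checks that it matches the challenge.
recovered = AESGCM(shared_key).decrypt(response[:12], response[12:], b"mk-auth-demo")
print("authenticated:", recovered == challenge)
```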

Full Paper

IJCST/44/3/D-1828
    98 Improving Network Performance by Packet Perdition at the Network Edge
Swamy. Jayaprada, Isunuri. Bala Venkateswarlu

Abstract

Network congestion, like traffic jams in big cities, is becoming a real threat to the growth of network interconnection and communication applications. A large number of congestion control schemes exist, but only some of them, such as Digital's Network Architecture (DNA), are implemented in real systems. Based on the control decisions, there are (1) open-loop controllers, in which the control decision does not depend on a feedback signal (e.g., the CSFQ algorithm), and (2) closed-loop controllers, in which the control decision depends on a feedback signal (e.g., the TBCC algorithm). Neither type of algorithm alone is sufficient to control the output and input effectively, and even a small packet loss may cause major problems for the remaining applications or packets awaiting processing. In this paper we propose a new algorithm called Stable Token-Limited Congestion Control (STLCC), which combines the TLCC and XCP algorithms with an effective RED algorithm to improve performance. TLCC controls the input, but the output tends to oscillate; to make the output stable we use XCP, which shapes the output. This can also improve the speed of the network.
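RED, one of the building blocks mentioned above, marks or drops packets with a probability that grows with the exponentially weighted average queue length; a minimal sketch of that rule is given below (the thresholds and weights are common textbook-style defaults assumed for illustration, not the paper's values).

```python
# Minimal RED (Random Early Detection) drop decision; parameters are
# textbook-style defaults assumed for illustration, not the paper's values.
import random

MIN_TH, MAX_TH, MAX_P, W_Q = 5.0, 15.0, 0.1, 0.002   # thresholds, max prob, EWMA weight

avg_queue = 0.0

def on_packet_arrival(current_queue_len: int) -> bool:
    """Return True if the arriving packet should be dropped/marked."""
    global avg_queue
    avg_queue = (1 - W_Q) * avg_queue + W_Q * current_queue_len   # EWMA update
    if avg_queue < MIN_TH:
        return False                                   # accept: queue is short
    if avg_queue >= MAX_TH:
        return True                                    # drop: queue too long
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p                         # probabilistic early drop

drops = sum(on_packet_arrival(q) for q in range(0, 40))
print("packets dropped/marked out of 40:", drops)
```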

Full Paper

IJCST/44/3/D-1829
    99 An Approach of Modified ECLARANS for Efficient Outlier Detection
Monika Kanojiya, Prateek Gupta

Abstract

Several techniques and algorithms are used for extracting hidden patterns from large data sets and finding the relationships between them. Clustering is one of the important techniques in data mining. Clustering algorithms group data items based on their similarity: the goal of clustering is to group sets of objects into classes such that similar objects are placed in the same cluster while dissimilar objects are in separate clusters. Outlier detection is a very important research problem in data mining, and clustering algorithms are used for detecting outliers efficiently. The algorithms used in this research work are PAM (Partitioning Around Medoids), CLARA (Clustering Large Applications) and CLARANS (Clustering Large Applications based on Randomized Search), along with a new clustering algorithm, Enhanced CLARANS (ECLARANS), for detecting outliers. Several performance measures are used in order to find the best clustering algorithm for outlier detection. The experimental results show that the outlier detection accuracy of the ECLARANS clustering algorithm is very good compared to the existing algorithms, but it still takes considerable time to achieve that accuracy. The aim of this research is therefore to reduce the time complexity of ECLARANS.

Full Paper

IJCST/44/3/D-1830
    100 Nymble: Blocking Misbehaving Users in Anonymizing Networks
G. Menaka, Kiran Kalla

Abstract

Anonymizing networks allow users to access internet services privately by using a series of routers to hide the client’s IP address from the server. Some users employ this anonymity for abusive purposes, such as defacing popular web sites. Website administrators routinely rely on IP addresses to block or disable access for misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem we present Nymble, a system in which servers can “blacklist” misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different server definitions of misbehavior: servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.

Full Paper

IJCST/44/3/D-1831
    101 Implementation of Accountability on Data in Cloud
Md. Noushad, K.Venkata Kiran, P.Suresh Babu

Abstract

Cloud computing provides scalable resources through the internet as and when the resources are required. Users are not bothered about the infrastructure or the location where their data is stored; they can feel that the resources are readily available on their local machine. One fear of users, however, is the loss of control over their own data, which has become a barrier to the adoption of cloud services. This problem can be addressed by tracking the usage of the users' data in the cloud. The proposed object-oriented approach attaches logging mechanisms to user data dynamically. Security, access control and logging can be maintained by authentication and automatic logs, and distributed audits across the cloud strengthen the user's control.

Full Paper

IJCST/44/3/D-1832
    102 A New Reduction Technique on Structural Test Data Generation
Sreeinivasulu Bolla, AVS Sudhakara Rao

Abstract

The optimization process operates over the space of potential inputs, and this search space can be very large, even for very small systems under test. A static dependence analysis derived from program slicing can be used to support search space reduction. Input domain reduction is one such technique in search-based test generation. The results provide evidence to support the claim that input domain reduction has a significant effect on the performance of local, global and hybrid search, while a purely random search is unaffected. However, these approaches only consider the effects of input domain reduction on the generation of test data and not on its evaluation. We introduce three algorithms which do this without compromising the coverage achieved. We present the results of an empirical study of the effectiveness of the three algorithms on five benchmark programs containing non-trivial search spaces for branch coverage. The results indicate that it is indeed possible to reduce the number of test cases produced by search-based testing without loss of coverage.

Full Paper

IJCST/44/3/D-1833
    103 Global Business Environment and International Market
Dr. Abhishek Gupta

Abstract

A business firm is not isolated from the environment in which it operates: its future development, the results it can achieve and the constraints within which it operates are all functions of the business environment. The business environment consists of all factors inside and outside the company which influence the firm’s competitive success. It is often divided into the external macro environment, the external industry environment, and the internal firm environment. After reading this research you should be able to understand the significance of the external business environment for the strategies of multinational firms; understand the influence of the political, economic, social and technological business environment on global and international strategy; conduct a PEST analysis; and apply Michael Porter’s Diamond Model to an industry.

Full Paper

IJCST/44/3/D-1834
    104 Quality Requirements for Online Information Systems’ Development
Ravi Kumar Sachdeva, Dr. Sawtantar Singh, Dr. Jai Prakash

Abstract

The development of online information systems is quite different from software engineering. The development of such systems combines various other fields, such as software engineering, information engineering, web engineering and usability engineering. Hence, these systems have their own quality requirements, which arise from the interdisciplinary nature of such systems. The purpose of this paper is to present the quality requirements for online information systems development.

Full Paper

IJCST/44/3/D-1835