INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST)-VOL III ISSUE III, VER. 3, JULY TO SEPTEMBER, 2012


S.No. Research Topic Paper ID
   90 Principal Component Analysis with Mean and Entropy Values for Thermal Images Classification
Oky Dwi Nurhayati

Abstract

This paper reports an experiment on classifying randomized thermograms using their mean and entropy values, with a Fluke thermal camera as the image-capture tool. Two statistical features, mean and entropy, combined with principal component analysis (PCA) are applied to classify the types of thermograms after image preprocessing. The results show that the method is quite promising for distinguishing thermal images.
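As a minimal sketch of the feature pipeline described above (assuming 8-bit grayscale thermograms; the paper's exact preprocessing is not reproduced here), the two statistics and a PCA projection can be written as:

```python
import numpy as np

def image_features(img):
    """Mean intensity and Shannon entropy of a grayscale image (0-255)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    entropy = -np.sum(p * np.log2(p))
    return np.array([img.mean(), entropy])

def pca_project(X, k=1):
    """Project feature vectors onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)           # center the feature matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

The projected coordinates can then be fed to any classifier to separate the thermogram classes.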
Full Paper

IJCST/33/3/A-942
   91 Searching Key Words Through Outsourcing Linear Programming in Cloud Computing
D. Madhuri, K. Eswar, Dr. P. Harini

Abstract

Cloud computing refers to the delivery of computation and storage capacity as a service to a heterogeneous community of end users. The name comes from the use of clouds as a high-level abstraction for the complex underlying structure. Cloud computing assigns the responsibility for services over a user's data, software, and computation to a network, and it has considerable overlap with Software as a Service (SaaS). Treating the cloud as an intrinsically insecure computing platform from the viewpoint of cloud customers, we must design mechanisms that not only protect sensitive information by enabling computations with encrypted data, but also protect customers from malicious behavior by enabling validation of the computation result. Such a mechanism of general secure computation outsourcing was recently shown to be feasible in theory, but designing mechanisms that are practically efficient remains a very challenging problem. In this paper we focus on linear programming to Search the Keywords (SKW), trading security against efficiency via a higher-level abstraction of LP computations than the general circuit representation. In particular, by formulating the customer's private data for the LP problem as a set of matrices and vectors, we develop a set of efficient privacy-preserving problem transformation techniques that allow customers to transform the original LP problem into an arbitrary-looking one while protecting sensitive input/output information. To validate the computation result, we further exploit the fundamental duality theorem of LP and derive the necessary and sufficient conditions that a correct result must satisfy. This result verification mechanism is extremely efficient and incurs close-to-zero additional cost on both the cloud server and the customers.
End users can access cloud-based applications via a web browser, a lightweight desktop application, or a mobile application, while the business software and data are stored on servers at a remote location. Proponents claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to adjust resources more rapidly to meet fluctuating and unpredictable business demand. Cloud computing depends on sharing of resources to achieve coherence and economies of scale, similar to a utility over a network.
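The matrix-and-vector formulation lends itself to a simple illustration. The sketch below shows only the disguise idea for an equality-constrained toy system (a secret invertible matrix hides A and b, a scalar hides c); the paper's full transformation, which also handles inequality constraints and duality-based result verification, is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Customer's private LP data (toy): constraints A x = b, objective c^T x.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
c = np.array([1.0, 2.0])

# Secret invertible matrix Q and positive scalar gamma act as the "key":
# the cloud only ever sees the transformed triple (Q A, Q b, gamma c).
Q = rng.random((2, 2)) + 2.0 * np.eye(2)   # diagonally dominant -> invertible
gamma = 3.0
A_pub, b_pub, c_pub = Q @ A, Q @ b, gamma * c

# The cloud solves the disguised system; since Q is invertible,
# Q A x = Q b has exactly the same solution x as A x = b.
x_cloud = np.linalg.solve(A_pub, b_pub)
```

Scaling the objective by gamma leaves the optimizer unchanged while hiding the optimal value, which is the kind of input/output protection the abstract refers to.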
Full Paper

IJCST/33/3/A-943
   92 Security Issues in Cloud Computing: A Critical Analysis
Dr. G. Manoj Someswar, K. Vidya Shekar

Abstract

Cloud computing refers to applications and services that run on a distributed network using virtualized resources, accessed via common Internet protocols and networking standards. It represents a paradigm shift driven by the increasing demand of Web-based applications for elastic, scalable, and efficient system architectures that can support their ever-growing data volume and large-scale data analysis. When it comes to security, however, the cloud suffers considerably. The cloud vendor must ensure that customers do not face problems such as data loss or data theft. There is also the possibility that a malicious user penetrates the cloud by impersonating a legitimate user, thereby infecting the entire cloud and affecting the many customers who share it.
Full Paper

IJCST/33/3/A-944
   93 Blocking Misbehaving Users and Error Correction Method in Nymble Secure System
A. Thirupathaiah, K. Eswar, Dr. P. Harini

Abstract

Packet loss and end-to-end delay limit delay-sensitive applications over best-effort packet-switched networks such as the Internet. Nymble is a system in which servers can "blacklist" misbehaving users, thereby blocking them without compromising their anonymity; it is thus agnostic to different servers' definitions of misbehavior. According to our analysis, however, loss of packets (LOP) and delay of network (DON) can occur while blacklisting misbehaving users across websites. To overcome this loss we propose a PDF (path diversity with forward error correction) system for delay-sensitive applications over the Internet, in which disjoint paths from a sender to a receiver are created using a collection of relay nodes, combined with the Nymble approach. We present a scalable, heuristic scheme for selecting a redundant path between a sender and a receiver, and show that a substantial reduction in packet loss can be achieved by dividing packets between the default path and the redundant path. NS simulations are used to verify the effectiveness of the PDF system.
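The forward-error-correction ingredient can be illustrated with the simplest possible code: a single XOR parity packet carried over the redundant path, which lets the receiver rebuild any one packet lost on the default path (the paper's actual FEC parameters are not specified here).

```python
def xor_parity(packets):
    """FEC parity packet: byte-wise XOR of all equal-length payloads."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Rebuild a single missing packet from the survivors plus the parity."""
    missing = parity
    for p in received:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing
```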
Full Paper

IJCST/33/3/A-945
   94 Using Sink Initiated Multicast Protocol for Reducing the Data Loss by the Mobile Sink in Wireless Sensor Network
Dr. M. Subha, M. Vasanthapriya

Abstract

Recent work in wireless sensor networks has shown that sink mobility along a constrained path can improve energy efficiency. Due to the path constraints, however, a mobile sink with constant speed has limited communication time to collect data from sensor nodes deployed randomly. This poses significant challenges in jointly improving the amount of data collected and reducing the energy consumption. To overcome this problem, the paper proposes a novel data collection scheme called Maximum Amount Shortest Path (MASP), which increases network throughput and conserves energy by optimizing the assignment of sensor nodes. MASP is formulated as an integer linear programming problem and solved with the help of a genetic algorithm; we also develop a practical distributed approximation algorithm for the MASP problem. A Sink-Initiated Multicast Protocol (SIMP) is used to reduce data loss during the communication phase with the mobile sink.
Full Paper

IJCST/33/3/A-946
   95 Case Study on Agile User Story Prioritization Process and Techniques
Saravana K. M, G. N. Basavaraj, Rajkumar, Dr. A. Kovalan

Abstract

Agile User Stories Prioritization Process Engineering (AUSPPE) is the process of choice for many software producers, whose realities include highly uncertain requirements and new development technologies; continuous, customer-centric requirements prioritization is crucial to successfully executing agile software development. The aim of AUSPPE activities is to contribute business value, described in terms of the return on investment of the software product, and it is essential for a software company to maximize value creation for the invested effort. Requirements prioritization is recognized as an important but challenging activity in software product development. For a product to be successful, it is vital to identify the correct balance among competing quality requirements. From the customers' view, continuous requirements prioritization forms the very core of today's agile processes. In this paper we present several case studies on Agile User Stories Prioritization (AUSP) methods, offer a conceptual model for understanding the inter-iteration prioritization approach in terms of inputs and outcomes, and identify problems and solutions pertinent to Agile User Stories Prioritization.
Full Paper

IJCST/33/3/A-947
   96 Jail Break: The Dual Booting Smartphone Using Linux
Himanshu Srivastava, Vineet Garg, Piyush Kumar

Abstract

Cell phones are becoming indispensable, and everyone compares them with computers. An advantage can be gained by integrating the two into a single, constantly upgraded e-device: having reached one platform, why not let the user make it better? This paper puts forward a recent advancement in this direction. As daily life becomes more computerized, the proposed system explores the advancement of smartphones: users gain the facility to use their smartphone like a laptop, with different operating systems built into it, so the best use of the smartphone can be made. This is also called jailbreaking the smartphone. The work is generally done with Linux; that is, the boot code of the smartphone operating system is rearranged using Linux. The proposed system thus shows that Android and Windows can both be booted on one phone, working like the dual-boot operating systems a laptop supports.
Full Paper

IJCST/33/3/A-948
   97 Fuzzy Information Retrieval from Mining Relational Database by Using Link Analysis Mining Methods
M. Sivanjaneyulu, A. Anuradha

Abstract

Link analysis algorithms have been powering various search engines for efficient Web information retrieval. In this paper we propose to use link analysis beyond the Web, as an extension of correspondence analysis in a relational database, for its ability to effectively discover relationships. Initially, a reduced, much smaller Markov chain containing only the elements of interest is extracted and refined by stochastic complementation. This reduced chain is then analyzed by jointly projecting the elements of interest (entity relations in the relational database) into a kernel version of the diffusion-map subspace, along with spectral clustering to visualize the results. Applying this technique to fuzzy information retrieval can also improve overall performance in a relational database. Experiments show the usefulness of the technique for extracting relationships in relational databases.
Full Paper

IJCST/33/3/A-949
   98 Efficient Data Structures For Multi-Method Dispatching
Ishan Jawa, Gurpreet Singh, Reena Sharma

Abstract

The problem of dispatching in object-oriented languages is that of determining the most specialized method to invoke for a call at run time. This can be a critical component of execution performance. A number of results, including [Muthukrishnan and Muller SODA'96, Ferragina and Muthukrishnan ESA'96, Alstrup et al. FOCS'98], have studied this problem and in particular provided various efficient data structures for the mono-method dispatching problem. A paper of Ferragina, Muthukrishnan and de Berg [STOC'99] addresses the multi-method dispatching problem. Our main result is a linear-space data structure for binary dispatching that supports dispatching in logarithmic time. With the same query time as Ferragina et al., this result improves the space bound by a logarithmic factor.
Full Paper

IJCST/33/3/A-950
   99 Comparative Study of Intrusion Detection Techniques for Mobile Ad-Hoc Networks
Gulshan Singla, Hari Singh, Sukhvir Singh

Abstract

The rapid proliferation of wireless local area networks has changed the landscape of network security. The traditional way of protecting networks with firewalls and encryption software is no longer sufficient or effective. A number of new techniques are available for detecting intruders in wireless LANs. In this paper, we present a comparative study of these techniques for detecting vulnerabilities in wireless local area networks. The paper gives an overview of existing intrusion detection techniques, including anomaly detection and misuse detection, and identifies techniques related to intrusion detection in wireless LANs. Topics covered include the specification-based technique, the radio frequency fingerprinting (RFF) based technique, a swarm-intelligence-based technique, the immune system technique, the adaptive hierarchical agent-based technique, distributed and layered techniques, statistical approaches, the battery-based technique, the honeypots technique, text categorization techniques, and the dependency-based distributed technique.
Full Paper

IJCST/33/3/A-951
   100 Classification and Selection of Reusable Software Components Using Mutation Technique
R. Kamalraj, Dr. A. Rajiv Kannan, P. Ranjani

Abstract

In software engineering, classifying reusable components for later use is an important activity for reducing the effort required to construct a complete system. Existing methods categorize components only by their name and application. In this paper we therefore focus on the 'Mutation Technique' from the genetic algorithm approach to categorize reusable software components by their features. Using the 'Mutation Technique', system development may draw on a different 'Reusable Component Recording System' to select components and improve the quality of the system.
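As a hedged sketch of the idea (the paper's exact feature encoding is not given here), components can be represented as feature bit vectors, with a GA-style mutation operator perturbing a query vector and a simple similarity score ranking candidate reusable components:

```python
import random

def mutate(bits, rate, rng):
    """GA mutation operator: flip each feature bit with probability `rate`."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

def similarity(a, b):
    """Fraction of matching feature bits between two components."""
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

Mutating the query vector lets the recording system surface components whose feature sets are near, but not identical to, the requested one.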
Full Paper

IJCST/33/3/A-952
   101 Nymble: Blocking Misbehaving Users in Anonymizing Networks
D. Prasad, Afsha Jabeen, S. V. Hemanth, N. Thirupathi

Abstract

Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client’s IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes such as defacing popular Web sites. Web site administrators routinely rely on IP-address blocking for disabling access to misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem, we present Nymble, a system in which servers can “blacklist” misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different servers’ definitions of misbehavior—servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.
Full Paper

IJCST/33/3/A-953
   102 Analysis of the Performance of Various Algorithms and Interestingness Measures in Association Rule Mining
Mukesh Sharma, Jyoti Choudhary, Gunjan Sharma

Abstract

Association rule mining is one of the most important and well-researched techniques of data mining, first introduced by Agrawal in 1993. It aims to find interesting correlations, frequent patterns, causal structures, or associations among sets of items in transaction databases or other data repositories. Association rules are widely used in areas such as market and risk management, telecommunication networks, inventory control, and weather forecasting, so it becomes important to choose the best algorithm for finding interesting rules. This paper discusses the various parameters for measuring the interestingness of association rules as well as the various association algorithms.
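The three classic interestingness measures such surveys build on can be stated in a few lines (transactions modelled as Python sets):

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    """P(rhs | lhs): support of the whole rule over support of its body."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)

def lift(lhs, rhs, transactions):
    """Confidence normalized by the head's support; > 1 means positive correlation."""
    return confidence(lhs, rhs, transactions) / support(rhs, transactions)
```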
Full Paper

IJCST/33/3/A-954
   103 WOA Based Implementation of SOA
Ashish Verma, Ruchi Dave, Pooja Parnami

Abstract

Service-Oriented Architecture (SOA) is an emerging approach that addresses the requirements of loosely coupled, standards-based, and protocol-independent distributed computing. Distributed computing has traditionally required a tightly coupled relationship between all cooperating services. SOA provides a large number of objects working in modular services as reusable software components. There is generally no alternative to SOA for providing the flexibility and cost reduction of services used as reusable components in the IT field. This functionality is provided by the Enterprise Service Bus (ESB), an integration platform that utilizes Web services standards to support a wide variety of communication patterns over multiple transport protocols and deliver value-added capabilities for SOA applications. In this context we introduce 'Web 2.0', which is used to provide reusable IT components dynamically. In this paper we introduce a methodology for designing a WOA using the concepts of SOA. The big picture follows the existing SOA model. In particular, this WOA methodology comprises conceptual as well as realization issues and breaks WOA design down into three distinct phases.
Full Paper

IJCST/33/3/A-955
   104 Development of FPGA Based Data Acquisition System
J. Vijaya Sree, K. V. V. S Reddy

Abstract

In the development of underwater weapons such as torpedoes, it is essential to evaluate underwater flow noise. The small-amplitude noise must be sampled at high rates, stored in memory, and retrieved for analysis. Programmable devices such as Field Programmable Gate Arrays (FPGAs) present an attractive option for hardware implementation of high-speed data acquisition. Analog data acquired from the sensor is fed to a signal conditioning circuit, which amplifies and filters the signal. The conditioned signal is passed to analog-to-digital converters sampling at 3.6 MSPS, interfaced to the FPGA in such a way that all ADCs are sampled simultaneously. The FPGA reads the ADC data after conversion and stores it in dual-port block RAM; the acquired data is then sent through a USB FIFO to a personal computer. The signal conditioning circuits are designed and verified using the Multisim tool. A Spartan-3E FPGA is used as the main controller, for which all modules are implemented in VHDL using the Xilinx ISE Design Suite. Debugging of the design and implementation of the various modules is carried out with the ChipScope Pro tool.
Full Paper

IJCST/33/3/A-956
   105 Security Attacks on Peer-to-Peer Networks
Deepika, Mandeep Kaur

Abstract

In this paper we classify security attacks on peer-to-peer networks and study the different possible defense mechanisms. We deeply analyze a P2P system, including simulating possible behaviors and reactions. Finally, we draw conclusions about what should be avoided when designing P2P applications and give a new possible approach to making a P2P application as resilient as possible to malicious users.
Full Paper

IJCST/33/3/A-957
   106 "Web as Data Source"
Lokhande Dheeraj Bhimrao, Rajesh V. Argiddi, S. S. Apte

Abstract

With the phenomenal growth of the WWW, rich data sources on many different subjects have become available online. Some of these sources store daily facts that often involve textual geographic descriptions. These descriptions can be perceived as indirectly geo referenced data – e.g., addresses, telephone numbers, zip codes and place names. Under this perspective, the Web becomes a large geospatial database, often providing up-to-date local or regional information. In this work we focus on using the Web as an important source of urban geographic information and propose to enhance urban Geographic Information Systems (GIS) using indirectly geo referenced data extracted from the Web. We describe an environment that allows the extraction of geospatial data from Web pages, converts them to XML format, and uploads the converted data into spatial databases for later use in urban GIS. The effectiveness of our approach is demonstrated by a real urban GIS application that uses street addresses as the basis for integrating data from different Web sources, combining these data with high-resolution imagery.
Full Paper

IJCST/33/3/A-958
   107 Efficient RSS Feed Polling using Rolling Curl
Neeraj Kumar

Abstract

RSS feeds remain one of the most popular sources for consuming information from the Internet, yet no standard protocol is defined for feed fetching, and most software relies on inefficient algorithms for polling and fetching feeds. In this paper, I propose an efficient method of feed polling based on a moving average. The proposed method is more efficient than traditional approaches: it is easy to implement, wastes fewer CPU cycles, and consumes much less bandwidth. When evaluated, the proposed method was ~500% to ~700% faster than the traditional sequential approach. For the implementation, I used more than 15,000 real and unique RSS feeds from different sources such as online newspapers, magazines, and blogs.
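The paper's exact formula is not reproduced here, but a moving-average polling policy of the kind described can be sketched as follows: the next poll interval is the average gap between a feed's recent updates, clamped to sensible bounds (the bound values below are illustrative assumptions).

```python
def next_poll_interval(update_gaps, min_i=300, max_i=86400, window=10):
    """Next poll interval (seconds) = moving average of the feed's recent
    update gaps, clamped to [min_i, max_i]."""
    recent = update_gaps[-window:]
    if not recent:
        return max_i                     # never seen an update: poll rarely
    avg = sum(recent) / len(recent)
    return max(min_i, min(max_i, avg))   # clamp to the allowed range
```

Feeds that update often are polled often; stale feeds decay toward the maximum interval, saving both CPU cycles and bandwidth.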
Full Paper

IJCST/33/3/A-959
   108 Caching Strategies Based on Information Density Estimation in Wireless Ad Hoc Networks
Chava Kalpana, Mohammad. Shareef

Abstract

We address cooperative caching in wireless networks, where the nodes may be mobile and exchange information in a peer-to-peer fashion. We consider both cases of nodes with large- and small-sized caches. For large-sized caches, we devise a strategy where nodes, independent of each other, decide whether to cache some content and for how long. In the case of small-sized caches, we aim to design a content replacement strategy that allows nodes to successfully store newly received information while maintaining the good performance of the content distribution system. Under both conditions, each node takes decisions according to its perception of what nearby users may store in their caches and with the aim of differentiating its own cache content from the other nodes'. The result is the creation of content diversity within the nodes' neighborhood so that a requesting user likely finds the desired information nearby. We simulate our caching algorithms in different ad hoc network scenarios and compare them with other caching schemes, showing that our solution succeeds in creating the desired content diversity, thus leading to resource-efficient information access.
Full Paper

IJCST/33/3/A-960
   109 An Energy Efficient Approach in Heterogeneous WSN
P. Durga Prasad, R. Naveen

Abstract

Intrusion detection plays an important role in the security of WSNs, and detecting any type of intruder is essential, but doing so consumes a great deal of energy. We therefore derive an algorithm for energy-efficient external and internal intrusion detection, and we analyse the probability of detecting an intruder in a heterogeneous WSN. The paper considers both single-sensing and multi-sensing intruder detection models, and our experimental results validate the theoretical results.
Full Paper

IJCST/33/3/A-961
   110 Edge Adaptive Image Steganography Based on LSB Matching Revisited
Puligundla Rajyalakshmi, R. Ravi Kumar

Abstract

The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithm in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator, without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in cover images are inevitably contaminated after data hiding even at a low embedding rate, which, based on our analysis and extensive experiments, leads to poor visual quality and low security, especially for images with many smooth regions. In this paper, we extend LSB matching revisited image steganography and propose an edge-adaptive scheme that selects the embedding regions according to the size of the secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. Experimental results on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme enhances security significantly compared with typical LSB-based approaches as well as their edge-adaptive variants, such as pixel-value-differencing-based approaches, while preserving higher visual quality of the stego images.
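A minimal sketch of the two ingredients, region selection by consecutive-pixel difference and ±1 LSB matching, assuming 8-bit grayscale pixels (the threshold parameter is illustrative):

```python
import numpy as np

def edge_positions(row, threshold):
    """Candidate embedding positions: indices where the absolute difference
    of two consecutive pixels meets the threshold (i.e., edge regions)."""
    diffs = np.abs(np.diff(row.astype(int)))
    return np.where(diffs >= threshold)[0]

def lsb_match(pixel, bit, rng):
    """LSB matching: keep the pixel if its LSB already carries the bit,
    otherwise randomly add or subtract 1 (flipping the LSB at 0 and 255)."""
    if pixel & 1 == bit:
        return pixel
    if pixel == 0 or pixel == 255:
        return pixel ^ 1
    return pixel + rng.choice([-1, 1])
```

Raising the threshold at low embedding rates keeps the payload in sharp edges; lowering it releases smoother regions when more capacity is needed, which is the adaptivity the abstract describes.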
Full Paper

IJCST/33/3/A-962
   111 CBVR for Face Based Digital Signatures
M. Nagaraju, M. Thanoj, K. Lakshmi Tejaswini, A. Koteswaramma

Abstract

The characterization of a video segment by a digital signature is a fundamental task in video processing. It is necessary for video indexing and retrieval, copyright protection and other tasks. Semantic video signatures are those that are based on high-level content information rather than on low-level features of the video stream. The major advantage of such signatures is that they are highly invariant to nearly all types of distortion. A major semantic feature of a video is the appearance of specific persons in specific video frames. Because of the great amount of research that has been performed on the subject of face detection and recognition, the extraction of such information is generally tractable, or will be in the near future. We have developed a method that uses the pre-extracted output of face detection and recognition to perform fast semantic query-by-example retrieval of video segments. We also give the results of the experimental evaluation of our method on a database of real video. One advantage of our approach is that the evaluation of similarity is convolution-based, and is thus resistant to perturbations in the signature and independent of the exact boundaries of the query segment.
Full Paper

IJCST/33/3/A-963
   112 A Powerful Protocol for Electronic Voting Using a HYBRID Crypto Realm
K. N. Sandhya Sarma, S. Umamaheswari

Abstract

This paper reinvigorates a powerful e-voting protocol that resolves debated security issues and assures voters' privacy and anonymity. We have reformed a protocol that integrates a blind signature scheme, a secret sharing technique, and homomorphic encryption, ensuring fair voting and eliminating criminal deception. The proposed protocol begins with identification of voters, followed by authentication. With the inclusion of a public proxy server, we have successfully simulated the protocol and preserved the anonymity of the voters; the cryptographic approach securely transmits each voter's vote over a high-security lane. The final phase begins with the collection of votes, and by using homomorphic encryption all ballots are processed secretly in encrypted form only. As a result, only the final computed result is revealed, in an encrypted form intelligible only via the secret sharing scheme.
Full Paper

IJCST/33/3/A-964
   113 A New Profile Based Privacy Measure for Data Publishing
Dr. C.P.V.N.J. Mohan Rao, Kumar Vasantha, HarishBabu. Kalidasu

Abstract

The k-anonymity privacy requirement for publishing microdata requires that each equivalence class (i.e., a set of records that are indistinguishable from each other with respect to certain "identifying" attributes) contains at least k records. Recently, several authors have recognized that k-anonymity cannot prevent attribute disclosure. The notion of l-diversity has been proposed to address this: l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute. In this paper, we show that l-diversity has a number of limitations; in particular, it is neither necessary nor sufficient to prevent attribute disclosure. Motivated by these limitations, we work with a newer notion of privacy called "closeness". We introduce performance-based automatic data publishing to multiple users using a User Profile Category (UPC); this method enhances the existing flexible privacy model called (n,t)-closeness. We discuss the rationale for using closeness as a privacy measure and illustrate its advantages through examples and experiments.
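To make the definitions concrete, here is a minimal check of distinct l-diversity, plus a total-variation distance between a class's sensitive-value distribution and the table's overall distribution (a simplified stand-in for the earth-mover's distance actually used by t-closeness):

```python
from collections import Counter

def is_l_diverse(sensitive_values, l):
    """Distinct l-diversity: the equivalence class must contain
    at least l distinct sensitive values."""
    return len(set(sensitive_values)) >= l

def distribution(values):
    """Empirical distribution of sensitive values."""
    n = len(values)
    return {v: c / n for v, c in Counter(values).items()}

def tv_distance(p, q):
    """Total variation distance between two value distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)
```

A class whose distribution sits far from the table's overall distribution leaks attribute information even if it is l-diverse, which is exactly the gap closeness-style measures target.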
Full Paper

IJCST/33/3/A-965
   113 Detection of Routing Misbehavior in MANETs Using the 2ACK Scheme
S. Aarthi, D. Madhu Babu

Abstract

Mobile ad hoc networks (MANETs) operate on the basic underlying assumption that all participating nodes fully collaborate in self-organizing functions. However, performing network functions consumes energy and other resources, so some network nodes may decide against cooperating with others. Providing these selfish nodes, also termed misbehaving nodes, with an incentive to cooperate has recently been an active research area. In this paper, we propose two network-layer acknowledgment-based schemes, termed the TWOACK and S-TWOACK schemes, which can simply be added on to any source routing protocol. The TWOACK scheme detects misbehaving nodes and then seeks to alleviate the problem by notifying the routing protocol to avoid them in future routes. Details of the two schemes and our simulation-based evaluation results are presented in this paper. We have found that, in a network where up to 40% of the nodes may be misbehaving, the TWOACK scheme results in a 20% improvement in packet delivery ratio, with a reasonable additional routing overhead.
Full Paper

IJCST/33/3/A-966
   115 A Method for Automatic Accurate Image Registration Through Histogram-Based Image Segmentation and Translation (Delineation)
Ch. Premkumar, S. M. Afroj

Abstract

Automatic image registration remains a challenge in several fields, such as computer vision and remote sensing. Image registration is the process of transforming different sets of data into one coordinate system. In this paper, we propose a method for automatic, accurate image registration through histogram-based image segmentation and translation (delineation) using Wiener filtering, which allows a more detailed histogram-based segmentation than traditional methods and consequently a more accurate registration. The proposed system is able to estimate the rotation and/or translation between two images, which may be multitemporal or multisensor, with small differences in spectral content. The first dataset consists of a photograph and a rotated, shifted version of the same photograph with different levels of added noise; this allows the registration of image pairs differing in rotation and translation. Applications of image registration include target recognition, monitoring global land usage with satellite images, matching stereo images to recover shape for navigation, and aligning images from different medical modalities for diagnosis.
Full Paper

IJCST/33/3/A-967
   116 An Efficient Multiple Description Coding for Video Streaming
Redya Jadav, Injam Rakesh

Abstract

The number of users is rapidly expanding, and bandwidth-hungry services such as video streaming are becoming more popular by the day. Multiple description coding (MDC) can be used to address this bandwidth heterogeneity. The goal of MDC is to create several independent descriptions that can each contribute to one or more characteristics of the video: spatial or temporal resolution, signal-to-noise ratio, or frequency content. An important but challenging problem for MDC video multicast is how to assign bandwidth to each description so as to maximize overall user satisfaction. In this paper, we implement an efficient heuristic, simulated annealing, for MDC bandwidth assignment, assigning bandwidth to each description given the distribution of user bandwidth requirements. The main objective is to maximize the users' bandwidth experience while taking into account the encoding inefficiency introduced by MDC. If the number of descriptions is at least a certain threshold, the proposed technique achieves maximum user satisfaction, i.e., all bandwidth requirements are met.
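The simulated-annealing heuristic itself is generic; a compact skeleton is shown below. The neighborhood and energy for the actual bandwidth-assignment problem are problem-specific and not given in the abstract, so the usage in the test minimizes a toy one-dimensional energy instead.

```python
import math
import random

def simulated_annealing(init, energy, neighbor, steps=2000, t0=1.0, rng=None):
    """Generic simulated annealing: always accept improvements, accept a
    worse state with probability exp(-delta/T), and cool T geometrically."""
    rng = rng or random.Random(0)
    state, e = init, energy(init)
    best, be = state, e
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        ce = energy(cand)
        if ce <= e or rng.random() < math.exp(-(ce - e) / t):
            state, e = cand, ce
            if e < be:                 # remember the best state seen so far
                best, be = state, e
        t *= 0.995                     # geometric cooling schedule
    return best, be
```

For MDC bandwidth assignment, the state would be a vector of description rates, the neighbor a small perturbation of one rate, and the energy the (negated) aggregate user satisfaction.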
Full Paper

IJCST/33/3/
A-968
   117 Top-k keyword search using Skyline Sweeping and Improved Rank Function
P. Vardhani, S. Uma Maheswara Rao, N. Tulasi Raju

Abstract

Searching keywords in databases is a more complex task than searching in files. Information Retrieval (IR) searches for keywords in text files, and it is equally important to support keyword querying over relational databases. Generally, the Structured Query Language (SQL) is used to find relevant records in a relational database, and there is a natural demand for relational databases to support effective and efficient IR-style keyword queries. This paper addresses the problem of supporting effective and efficient top-k keyword search in relational databases and describes a framework that takes keywords and k as inputs and generates the top-k relevant records. The results of the implemented system with the Skyline Sweeping (SS) algorithm show that it is an effective and efficient style of keyword search.
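The top-k step itself reduces to scoring candidate tuples against the query keywords and keeping the k best, e.g. with a heap. The records and the count-based rank function below are invented stand-ins for the paper's rank function.

```python
# Minimal sketch of IR-style top-k over relational tuples: score each candidate
# record against the query keywords, keep the k best with a heap.
import heapq

RECORDS = [
    (1, "efficient keyword search in relational databases"),
    (2, "cloud storage security"),
    (3, "top-k keyword search with ranking"),
    (4, "image retrieval by content"),
]

def score(text, keywords):
    """Toy rank function: total keyword occurrence count."""
    words = text.split()
    return sum(words.count(k) for k in keywords)

def top_k(keywords, k):
    scored = ((score(text, keywords), rid) for rid, text in RECORDS)
    return [rid for s, rid in heapq.nlargest(k, scored) if s > 0]

print(top_k(["keyword", "search"], 2))  # → [3, 1]
```

Skyline-sweeping-style algorithms improve on this by enumerating candidate join results in rank order so that scoring all records is unnecessary; the heap-based selection shown here is only the final step.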
Full Paper

IJCST/33/3/
A-969
   118 Spectrum Based Detection Scheme on Worm
Susarla Valli Kameswari, A. S. K. Maha Lakshmi

Abstract

The worm attacks of recent years make it essential to develop new detection techniques. In this paper we present a spectrum-based smart-worm detection scheme built on the idea of detecting worms in the frequency domain. The scheme uses the Power Spectral Density (PSD) distribution of the scan traffic volume and its corresponding Spectral Flatness Measure (SFM) to distinguish smart-worm traffic, including the camouflaging C-Worm, from background traffic. Motivated by our observations, we design automatic detection of the C-Worm with both a scope covering the entire network and scopes for particular regions: the method scans the network globally, defines regions in which to model and detect the C-Worm and other Internet worms, and applies worm detection per region without disturbing normal operation. The proposed method is optimized to differentiate per-region results across the entire network maintained by the system. Using a comprehensive set of detection metrics and real-world traces as background traffic, we conduct extensive performance evaluations of the proposed spectrum-based detection scheme. The performance data clearly demonstrate that our scheme can effectively detect C-Worm propagation.
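The SFM statistic the abstract relies on is the ratio of the geometric to the arithmetic mean of the power spectrum: near 1 for noise-like traffic, near 0 when power is concentrated at a few frequencies. A small illustration, with invented traffic series and a brute-force DFT:

```python
# Illustrative Spectral Flatness Measure (SFM) computation on toy traffic.
import cmath, math, random

def power_spectrum(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))) ** 2
            for f in range(1, n // 2)]          # skip the DC term

def sfm(x):
    """Geometric mean over arithmetic mean of the power spectrum (0..1]."""
    p = power_spectrum(x)
    geo = math.exp(sum(math.log(v + 1e-12) for v in p) / len(p))
    return geo / (sum(p) / len(p))

rng = random.Random(0)
background = [rng.random() for _ in range(64)]                       # noise-like
scan = [5 + math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]   # periodic scan

print(sfm(background) > sfm(scan))  # → True: periodic traffic is far less flat
```

A detector would then threshold SFM: traffic whose spectrum is suspiciously non-flat (power concentrated at the worm's scan-rate frequencies) is flagged. The threshold choice and the exact traffic model are where the paper's evaluation effort goes.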
Full Paper

IJCST/33/3/
A-970
   119 Using Grouping and KNN Search Algorithm for High Length Catalog
B. Venkateswarulu, Y. Vinay Kumar

Abstract

We propose a new cluster-adaptive distance bound based on the separating-hyperplane boundaries of Voronoi clusters to complement our cluster-based index. This bound enables efficient spatial filtering, with a relatively small pre-processing storage overhead, and is applicable to Euclidean and Mahalanobis similarity measures. Experiments in exact nearest-neighbour set retrieval, conducted on real data sets, show that our indexing method scales with data set size and dimensionality and outperforms several recently proposed indexes. We consider this approach for similarity search in correlated, high-dimensional data sets, where the index is derived within a clustering framework. Indexing by "vector approximation" (VA-File), which was proposed as a technique to combat the "curse of dimensionality", employs scalar quantization and hence necessarily ignores dependencies across dimensions, which is a source of suboptimality. Clustering, on the other hand, exploits inter-dimensional correlations and is thus a more compact representation of the data set. However, existing methods for pruning irrelevant clusters are based on bounding hyperspheres and/or bounding rectangles, whose lack of tightness compromises their efficiency in exact nearest-neighbour search. The proposed cluster-adaptive bound based on separating-hyperplane boundaries of Voronoi clusters addresses this limitation.
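The pruning idea can be sketched with the simpler bounding-sphere bound (the paper's hyperplane bound is tighter but follows the same template): a cluster can be skipped whenever its optimistic lower bound already exceeds the best distance found so far. All data below are invented.

```python
# Sketch of cluster-based exact nearest-neighbour search with lower-bound
# pruning: skip a cluster when dist(q, centroid) - radius >= current best.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query, clusters):
    """clusters: list of (centroid, radius, points)."""
    best, best_d = None, float("inf")
    # visit clusters in order of optimistic lower bound
    order = sorted(clusters, key=lambda c: dist(query, c[0]) - c[1])
    for centroid, radius, points in order:
        if dist(query, centroid) - radius >= best_d:
            break                                 # no remaining cluster can win
        for p in points:
            d = dist(query, p)
            if d < best_d:
                best, best_d = p, d
    return best

clusters = [
    ((0.0, 0.0), 1.5, [(0.5, 0.5), (-1.0, 0.2)]),
    ((10.0, 10.0), 2.0, [(9.0, 9.5), (11.0, 10.0)]),
]
print(nearest((8.5, 9.0), clusters))  # → (9.0, 9.5); the far cluster is pruned
```

The search stays exact because the bound is a true lower bound on any distance inside the cluster; a tighter (e.g. hyperplane-based) bound prunes more clusters but never changes the answer.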
Full Paper

IJCST/33/3/
A-971
   120 Efficiently Searching the Similar Content by Using Content Based Image Retrieval System
D. V. T. Dharmajee Rao, Ch. Ramya

Abstract

Content-Based Image Retrieval (CBIR) is one of the important topics in digital image processing research. Visual information systems are radically different from conventional information systems, and many novel issues need to be addressed: a visual information system should be capable of providing access to the content of pictures and videos. Whereas symbolic and numerical information are identical in content and form, pictures require delicate treatment to approach their content. Searching and retrieving items on the basis of their pictorial content requires a new, visual way of specifying the query, new indices to order the data, and new ways to establish similarity between the query and the target. A number of keyword-based general WWW search engines allow work with images (HotBot (http://hotbot.lycos.com/) and NBCi (http://www.nci.com/)). Other general search engines are more specialized for images, such as Yahoo!’s Image Surfer (http://isurf.yahoo.com/) and the multimedia searcher of Lycos (http://multimedia.lycos.com/), but they are still only keyword based. This paper introduces the problems and challenges in designing and building CBIR systems based on free-hand sketches (sketch-based image retrieval, SBIR). Building on existing methods, we describe a possible way to design and implement a task-specific descriptor that can bridge the information gap between a sketch and a colored image, enabling efficient search. The descriptor is computed after a special sequence of preprocessing steps so that the transformed full-color image and the sketch can be compared. We have studied EHD, HOG and SIFT. Experimental results on two sample databases are good; overall, they show that the sketch-based system gives users intuitive access to search tools.
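The descriptor-plus-distance pipeline common to all these systems can be shown with the crudest possible descriptor: an intensity histogram ranked by L1 distance. Real SBIR systems use richer descriptors (EHD, HOG, SIFT); the pixels and database below are invented.

```python
# Toy CBIR sketch: describe each image by a 4-bin intensity histogram and
# rank the database by L1 distance to the query descriptor.
def histogram(pixels, bins=4):
    h = [0] * bins
    for p in pixels:                 # pixel values assumed in 0..255
        h[min(p * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [c / n for c in h]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

DB = {
    "dark":   histogram([10, 20, 30, 40] * 4),
    "bright": histogram([200, 220, 240, 250] * 4),
    "mixed":  histogram([10, 100, 180, 250] * 4),
}

def search(query_pixels):
    q = histogram(query_pixels)
    return sorted(DB, key=lambda name: l1(q, DB[name]))

print(search([15, 25, 35, 45]))  # → ['dark', 'mixed', 'bright']
```

Swapping the histogram for an edge-orientation descriptor is what makes the same pipeline work for sketches, where color and texture are absent and only contours carry information.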
Full Paper

IJCST/33/3/
A-972
   121 Efficient Data Integration in Finding Ailment-Treatment Relation
A. Nageswara Rao, G. Venu Gopal, K. V. R. Chandra Mouli

Abstract

To automatically analyze medical narratives, one needs linguistic and conceptual resources that support capturing important information from texts and representing it in a structured way; conceptual structures encoding domain concepts and relations are therefore crucial for developing reliable, high-performance information extraction systems. Machine intelligence plays a crucial role in the design of expert systems for medical diagnosis. In India, many people suffer from diseases such as asthma, diabetes and cancer. Machine Learning (ML) has gained momentum in almost every domain of research and has recently become a reliable tool in the medical domain, applied to tasks such as medical decision support, medical imaging, protein-protein interaction, extraction of medical knowledge, and overall patient management. ML is envisioned as a tool by which computer-based systems can be integrated into healthcare to deliver better, better-organized medical care. This paper describes an ML-based methodology for building an application capable of identifying and disseminating healthcare information: it extracts sentences from published medical papers that mention diseases and treatments, and identifies the semantic relations that exist between diseases and treatments. Our evaluation results show that the proposed methodology obtains reliable outcomes that could be integrated into an application for the medical care domain. The value of this paper lies in the ML settings we propose and in the fact that we outperform previous results on the same data set.
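The extraction step can be pictured with a deliberately simple lexical pattern. A real system would use a trained classifier over sentence features; the vocabularies, pattern, and sentence below are invented examples of the disease-treatment relation task only.

```python
# Hedged sketch of sentence-level disease-treatment relation extraction
# using a lexical pattern (a stand-in for the paper's ML classifier).
import re

DISEASES = {"asthma", "diabetes", "cancer"}
TREATMENTS = {"insulin", "chemotherapy", "inhalers"}

PATTERN = re.compile(r"(\w+)\s+(?:treats|is used to treat|relieves)\s+(\w+)", re.I)

def extract(sentence):
    relations = []
    for t, d in PATTERN.findall(sentence):
        if t.lower() in TREATMENTS and d.lower() in DISEASES:
            relations.append((t.lower(), "treats", d.lower()))
    return relations

print(extract("Clinicians report that insulin treats diabetes effectively."))
# → [('insulin', 'treats', 'diabetes')]
```

The ML approach replaces the brittle pattern and the closed vocabularies with learned features, which is precisely why it generalizes to unseen phrasings.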
Full Paper

IJCST/33/3/
A-973
   122 Ensuring Effective Third Party Auditing (TPA) for Data Storage Security in Cloud Computing
Moshe Dayan. Sirapangi, G. B. V. Padmanadh

Abstract

Cloud computing is Internet-based computing that enables the sharing of services. Many users place their data in the cloud, so correctness and security of that data are prime concerns. This work studies the problem of ensuring the integrity and security of data storage in cloud computing. Security is achieved by signing each data block before sending it to the cloud; signing uses the BLS algorithm, which is more secure than comparable alternatives. This paper proposes an effective and flexible distributed scheme with explicit dynamic data support to ensure the correctness of users' data in the cloud. We rely on erasure-correcting codes in the file distribution preparation to provide redundancy and guarantee data dependability; this construction drastically reduces communication and storage overhead compared with traditional replication-based file distribution. By utilizing homomorphic tokens with distributed verification of erasure-coded data, our scheme achieves storage correctness insurance as well as data error localization: whenever data corruption is detected during storage correctness verification, the scheme can almost guarantee the simultaneous localization of data errors, i.e., identification of the misbehaving server(s). The scheme further supports secure and efficient dynamic operations on data blocks, including update, delete, append and insert.
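The redundancy idea behind erasure-coded storage can be shown in its smallest form: a single XOR parity block lets any one lost data block be reconstructed. Production schemes use stronger (m, k) erasure codes over larger fields; this sketch is only the k+1 special case with invented data.

```python
# Minimal erasure-coding sketch: one XOR parity block recovers one lost block.
def make_parity(blocks):
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def recover(blocks_with_gap, parity):
    """blocks_with_gap: list where exactly one entry is None (the lost block)."""
    acc = parity
    for b in blocks_with_gap:
        if b is not None:
            acc = bytes(x ^ y for x, y in zip(acc, b))
    return acc

data = [b"clou", b"d-da", b"ta!!"]      # file split into equal-size blocks
parity = make_parity(data)              # stored on a fourth server
print(recover([b"clou", None, b"ta!!"], parity))  # → b'd-da'
```

Because XOR is associative and self-inverse, XOR-ing the parity with all surviving blocks cancels them out and leaves exactly the missing block, which is also why a second simultaneous loss defeats this one-parity scheme.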
Full Paper

IJCST/33/3/
A-974
   123 An Industry Reproduction for Cloud Computing Based on Detach Encryption and Decryption Tune
G. Rajesh, Y. Chittibabu, Dr. P. Harini

Abstract

Cloud computing is a new kind of service: a collection of technologies and a means of supporting the use of large-scale Internet services for remote applications with good quality-of-service (QoS) levels. This paper proposes a core concept of secured cloud computing: separating the encryption and decryption service from the storage service. The paper introduces a user interface in which one service provider operates the encryption and decryption system while other providers operate the storage and application systems. For security and data integrity we also implement One-Time Password (OTP) authentication, including email updates, in accordance with the core concept of the proposed computing model. Organizations usually store data in internal storage and install firewalls to protect it against intruders; they also standardize data-access procedures to prevent insiders from disclosing information without permission. In cloud computing, the data is stored in storage provided by service providers, who must have a viable way to protect their clients' data, especially against disclosure by unauthorized insiders. Storing the data in encrypted form is a common method of privacy protection, but if a single cloud system is responsible for both storage and encryption/decryption, its administrators may simultaneously obtain the encrypted data and the decryption keys, allowing them to access information without authorization and posing a risk to information privacy. This study therefore proposes a business model for cloud computing based on separating the encryption and decryption service from the storage service. Furthermore, the party responsible for the data storage system must not store data in plaintext, and the party responsible for encryption and decryption must delete all data once the encryption or decryption computation is complete. A CRM (Customer Relationship Management) service is described as an example of the proposed model: the exemplary service utilizes three cloud systems, an encryption and decryption system, a storage system, and a CRM application system, with one provider operating the encryption and decryption system while other providers operate the storage and application systems. The paper also suggests a multiparty Service Level Agreement (SLA) suitable for use with the proposed model.
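The separation-of-duties idea can be sketched as two services, where the storage side only ever sees ciphertext and the crypto side retains nothing after each call. The XOR keystream below is a placeholder to keep the sketch self-contained, NOT a secure cipher; a real deployment would use an authenticated cipher such as AES-GCM, and all names are invented.

```python
# Hedged sketch: encryption/decryption service separated from storage service.
import hashlib

class CryptoService:
    """Transforms data on demand; retains no data or keys between calls."""
    def _keystream(self, key, n):
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

    def transform(self, key, data):          # XOR stream: its own inverse
        ks = self._keystream(key, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

class StorageService:
    """Sees only ciphertext; never holds keys or plaintext."""
    def __init__(self):
        self._store = {}
    def put(self, name, blob):
        self._store[name] = blob
    def get(self, name):
        return self._store[name]

crypto, storage = CryptoService(), StorageService()
key, record = b"client-held-key", b"CRM customer record"
storage.put("rec1", crypto.transform(key, record))
print(crypto.transform(key, storage.get("rec1")))  # → b'CRM customer record'
```

Neither operator alone can read the record: the storage provider lacks the key, and the crypto provider, which deletes its inputs after each call, lacks the stored ciphertext.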
Full Paper

IJCST/33/3/
A-975
   124 Energy-Efficient Strategy For Cooperative Multi-Channel MAC Protocols
Dr. S. M. Afroz, N. Geetha Rani, V. Emanuel Raju

Abstract

Medium access control (MAC) protocols have been studied in different contexts for several years. In all of these protocols, nodes make independent decisions on when to transmit a packet and when to back off from transmission. In this paper, we introduce the notion of node cooperation into MAC protocols; cooperation adds a new degree of freedom that has not been explored before. Cooperative Distributed Information SHaring (DISH) is a new cooperative approach to designing multi-channel MAC protocols: it aids nodes in their decision-making processes by compensating for their missing information via information sharing through neighboring nodes. This approach was recently shown to significantly boost the throughput of multi-channel MAC protocols. However, a critical issue for ad hoc communication devices, power efficiency, has yet to be addressed. We address this issue by developing simple energy-efficient strategies, compare five protocols with respect to the strategy, and identify Unselfish DISH as the right choice in general: it (1) conserves 40-80% of power, (2) maintains the throughput advantage gained from the DISH approach, and (3) more than doubles the cost efficiency compared with protocols that do not apply the strategy. On the other hand, our study shows that in-situ power-conscious DISH is suitable only in certain limited scenarios. Our cooperative asynchronous multi-channel MAC protocol (CAM-MAC) is extremely simple to implement and, unlike other multi-channel MAC protocols, is naturally asynchronous.
Full Paper

IJCST/33/3/
A-976
   125 Effective Restricting of Misbehaving Users Among Anonymous Users in a Network
M. Mastan, S. Jagadeswari

Abstract

In this work, we propose a new platform that enables service providers on the Internet, such as web site operators, to block past abusive users of anonymizing networks (for example, Tor) from further misbehavior without compromising their privacy, while preserving the privacy of all non-abusive users. Our system provides a privacy-preserving analog of IP-address banning and is modeled after the well-known Nymble system. Nymble provides a blocking mechanism that lets a server protect itself from misbehaving users who connect through anonymizing networks such as Tor. Anonymizing networks allow anyone to visit the public areas of the network: users access Internet services through a series of routers, which hides their identities and IP addresses from the server. This can be exploited by misbehaving users to deface popular websites. Servers may try to block such users, but this is not possible with anonymizing networks: if the abuser routes through one, administrators end up blocking all known exit nodes, denying anonymous access to misbehaving and behaving users alike. To overcome this problem, the Nymble system lets servers blacklist misbehaving users without compromising their anonymity. This paper develops the idea that different service providers have different blacklisting policies: for example, Wikipedia might want to block a user for one day after the first misbehavior, one week after the second, and so on. To support this, we develop a dynamic linkability window whose length can be increased exponentially. At the start of each linkability window, all service providers reset their blacklists and forgive all prior misbehavior.
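The escalating policy described above (doubling penalties, forgiveness at each window boundary) can be sketched in a few lines. The base duration, doubling rule, and per-server bookkeeping are illustrative assumptions, not Nymble's actual cryptographic construction.

```python
# Sketch of an exponentially escalating blacklist policy with window resets:
# the nth misbehaviour in a linkability window blocks the user for 2^(n-1) days.
class BlacklistPolicy:
    def __init__(self, base_days=1):
        self.base = base_days
        self.offenses = {}           # user -> misbehaviour count this window

    def report(self, user):
        """Record a misbehaviour and return the block duration in days."""
        n = self.offenses.get(user, 0) + 1
        self.offenses[user] = n
        return self.base * 2 ** (n - 1)

    def new_window(self):
        """At each linkability-window boundary all users are forgiven."""
        self.offenses.clear()

p = BlacklistPolicy()
print([p.report("alice") for _ in range(3)])  # → [1, 2, 4]
p.new_window()
print(p.report("alice"))                      # → 1
```

The hard part that Nymble solves, and this sketch does not, is making `user` a pseudonym the server can link only within the window, so that escalation works without ever learning the user's identity.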
Full Paper

IJCST/33/3/
A-977
   126 A Lightweight and Non-Path-Based Mutual Anonymity Protocol for Peer- to-Peer Systems
B Krishnaveni Reddy, Md Sarfraz Ahmed

Abstract

An anonymous P2P communication system is a peer-to-peer distributed application in which the nodes or participants are anonymous or pseudonymous. Anonymity of participants is usually achieved by special routing overlay networks that hide the physical location of each node from the other participants. Existing anonymity approaches are mainly path-based: peers have to pre-construct an anonymous path before transmission. The overhead of maintaining and updating such paths is significantly high, and in highly dynamic P2P systems the whole path fails when a chosen peer leaves. Unfortunately, such a failure is often hard for the initiator to detect, so a "blindly assigned" path is very unreliable, and users have to frequently probe the path and retransmit messages. To address these issues, we propose Rumor Riding (RR), a lightweight, non-path-based mutual anonymity protocol for decentralized P2P systems. Employing a random-walk mechanism, RR achieves low overhead by mainly using symmetric cryptography.
Full Paper

IJCST/33/3/
A-978
   127 JR Positive Comfortable Destroy to Prevent Collusive Piracy in P2P File Sharing
B. Venkateswarulu, N. Sandhya Rani

Abstract

Today's peer-to-peer (P2P) networks are grossly abused by illegal distribution of music, games, video streams and popular software. These abuses have caused heavy financial losses in the media and content industries. Collusive piracy is the main source of intellectual property violations within the boundaries of P2P networks: paid clients (colluders) illegally share copyrighted content files with unpaid clients (pirates). Such online piracy has hindered the use of open P2P networks for commercial content delivery. We propose a proactive content-poisoning scheme to stop colluders and pirates from cooperating in copyright infringement in P2P file sharing. The basic idea is to detect pirates using identity-based signatures and time-stamped tokens, and then stop collusive piracy without hurting legitimate P2P clients. We developed a new peer authorization protocol (PAP) to distinguish pirates from legitimate clients: detected pirates receive poisoned content chunks in their repeated download attempts, and a reputation-based mechanism detects colluders. The system does not slow down legal downloads by paid clients, while pirates are severely penalized, with no chance to complete a download in finite time. Based on simulation results, we find a 99.9% success rate in preventing piracy on file-level hashing networks such as Gnutella, KaZaA and LimeWire, and an 85-98% prevention rate on part-level hashing networks such as eMule, Shareaza, eDonkey and Morpheus. The scheme enables P2P technology for a new generation of content delivery networks (CDNs): P2P-based CDNs provide faster delivery, higher content availability and better cost-effectiveness than conventional CDNs built from huge networks of surrogate servers.
Full Paper

IJCST/33/3/
A-979
   128 Matrix Factorization of Image for Multimedia Mining
Prasad. G. K, P. Nanna Babu

Abstract

Matrix factorization techniques have been widely applied in information retrieval, computer vision and pattern recognition. Among them, Non-negative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data, whose representation may be parts-based in the human brain. From the geometric perspective, data are usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space, and one hopes to find a compact representation that uncovers the hidden semantics while respecting the intrinsic geometric structure. This paper presents a novel algorithm, Graph Regularized Non-negative Matrix Factorization of images for multimedia mining (GNMFMM): an affinity graph is constructed to encode the multimedia images, and a matrix factorization is sought that respects the graph structure.
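Plain NMF, the factorization GNMF regularizes, can be sketched with the classic Lee-Seung multiplicative updates: factor a non-negative matrix V into W H with W, H non-negative. GNMF would add a graph-Laplacian term to the H update; the unregularized core iteration, on an invented matrix, looks like this:

```python
# Sketch of NMF via multiplicative updates (Lee-Seung): V ≈ W H, all entries >= 0.
import numpy as np

def nmf(V, k, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    eps = 1e-9                                   # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # update W with H fixed
    return W, H

V = np.array([[1.0, 0.0, 2.0],
              [2.0, 0.0, 4.0],
              [0.0, 3.0, 0.0]])                  # rank-2 non-negative matrix
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H)
print(round(float(err), 3))                      # small reconstruction residual
```

Because the updates multiply by non-negative ratios, W and H stay non-negative throughout, which is what yields the parts-based interpretation the abstract mentions.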
Full Paper

IJCST/33/3/
A-980
   129 Software Defect Prediction Using One Pass Data Mining Algorithm
Subrahmanyam. G. K, P. Nanna Babu

Abstract

Software testing consumes a major percentage of project cost, so researchers focus on how to minimize the cost of testing in order to minimize the cost of the project. Software defect prediction is a method that predicts defects from a historical database, and data mining techniques are used for this purpose. This paper describes a framework for predicting software defects from a historical database and presents a one-pass data mining algorithm that finds rules to predict software defects. The experimental results show that the one-pass algorithm generates rules for software defect prediction in a reasonable amount of time and with good performance.
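A one-pass rule miner of this flavor can be pictured as a single stream over the historical records, maintaining counts per metric bucket and emitting rules that clear support and confidence thresholds. The metrics, bucketing, and thresholds below are invented for illustration.

```python
# Illustrative one-pass defect-rule miner over invented module history.
def one_pass_rules(records, min_support=2, min_conf=0.7):
    counts = {}                                  # bucket -> (defective, total)
    for loc, complexity, defective in records:   # single pass over the data
        bucket = ("high_loc" if loc > 500 else "low_loc",
                  "high_cc" if complexity > 10 else "low_cc")
        d, t = counts.get(bucket, (0, 0))
        counts[bucket] = (d + int(defective), t + 1)
    # emit rule "bucket -> defect-prone" when support and confidence suffice
    return {b: d / t for b, (d, t) in counts.items()
            if t >= min_support and d / t >= min_conf}

HISTORY = [
    (800, 15, True), (900, 20, True), (650, 12, True),
    (120, 3, False), (200, 4, False), (90, 2, False), (700, 14, False),
]
rules = one_pass_rules(HISTORY)
print(rules)  # → {('high_loc', 'high_cc'): 0.75}
```

The one-pass property comes from the fact that only the count table, not the data, is kept in memory, so the cost is linear in the history size regardless of how many rules are eventually emitted.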
Full Paper

IJCST/33/3/
A-981
   130 Fastest Searching Mechanism of Bio-Medical Database
K. Subba Rao, M. Raja Babu

Abstract

Biologists, chemists, and medical and health scientists are used to searching their domain literature, such as PubMed, through a keyword search interface. In an exploratory scenario where the user tries to find citations relevant to her line of research that are not known a priori, she submits an initially broad keyword query that typically returns a large number of results. We demonstrate the BioNav system, a novel search interface for biomedical databases such as PubMed. BioNav enables users to navigate large numbers of query results by categorizing them using MeSH, a comprehensive concept hierarchy used by PubMed. Once the query results are organized into a navigation tree, BioNav reveals only a small subset of the concept nodes at each step, selected so that the expected user navigation cost is minimized; in contrast, previous works expand the hierarchy in a predefined static manner, without navigation-cost modeling. BioNav is available at http://db.cse.buffalo.edu/bionav
Full Paper

IJCST/33/3/
A-982
   131 Diminish additive arc in Steganography Using Clause-Outline Policy
SK. Karimulla, G. Sasibhusana Rao

Abstract

Most realistic steganographic algorithms for empirical covers embed messages by minimizing a sum of per-pixel distortions. Current near-optimal methods for this minimization problem are limited to a binary embedding operation; in this paper, we extend that work to embedding operations of larger cardinality. The need for embedding changes of larger amplitude and the merit of this construction are confirmed experimentally by implementing an adaptive embedding algorithm for digital images and comparing its security to other schemes. The paper proposes a complete practical method for minimizing additive distortion in steganography with a general embedding operation. Every possible value of every stego element is assigned a scalar expressing the distortion of the embedding change that replaces the cover element with that value, and the total distortion is assumed to be a sum of per-element distortions. Both the payload-limited sender (minimizing total distortion while embedding a fixed payload) and the distortion-limited sender (maximizing payload while introducing a fixed total distortion) are considered. Without any loss of performance, the non-binary case is decomposed into several binary cases by replacing individual bits in cover elements. The binary case is approached using a novel syndrome-coding scheme based on dual convolutional codes equipped with the Viterbi algorithm. Most coding schemes currently used in steganography (matrix embedding, wet paper codes, etc.), and many new ones, can be implemented within this framework. We report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel. The practical merit of the approach is validated by constructing and testing adaptive embedding schemes for digital images in the raster and transform domains.
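The simplest member of the syndrome-coding family the abstract mentions is matrix embedding with the [7,4] Hamming code: three message bits are hidden in seven cover LSBs while changing at most one cover element. Trellis-based codes generalize the same receive-by-syndrome idea to minimize an arbitrary additive distortion; the cover bits below are invented.

```python
# Matrix embedding with the [7,4] Hamming code: the receiver extracts the
# message as the syndrome of the stego bits, so the sender flips at most one
# bit to force the syndrome to equal the message.
def syndrome(bits):
    s = 0
    for i, b in enumerate(bits):
        if b:
            s ^= i + 1            # the parity-check column for position i is bin(i+1)
    return s                      # 3-bit syndrome as an integer in 0..7

def embed(cover, message):
    """Flip at most one bit of `cover` so its syndrome equals `message`."""
    flip = syndrome(cover) ^ message
    stego = list(cover)
    if flip:
        stego[flip - 1] ^= 1      # position of the single necessary change
    return stego

cover = [1, 0, 1, 1, 0, 0, 1]     # LSBs of seven cover pixels (invented)
message = 0b101                   # three payload bits
stego = embed(cover, message)
changes = sum(a != b for a, b in zip(cover, stego))
print(syndrome(stego) == message, changes <= 1)  # → True True
```

Here every change costs the same; the trellis construction in the paper is what lets each candidate change carry its own distortion weight while the Viterbi algorithm finds the cheapest stego object with the required syndrome.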
Full Paper

IJCST/33/3/
A-983
   132 A Fraud Detection based Online Test and Behavior Identification Implementing Visualization Techniques
S. James, G. Subba Lakshmi, Ch. Raja Jacob

Abstract

Nowadays online examinations have become one of the most important aspects of our lives, but there is no guarantee of the genuineness of results in online examination processes. In many online examinations the proctor is at a different location from the examinee, so there is no guarantee of user authorization and no guarantee that the examination-center head who monitors students during the examination can be trusted. As the distance increases, the chances of malpractice and misbehavior during the examination increase. To avoid such situations, the examinee has to be constantly monitored, and it should be possible to stop the examination based on the learner's behavior. Many techniques have been proposed for providing security during the conduct of exams. This paper studies various authorization and authentication techniques, namely unimodal, multimodal, hardware-interaction and data-visualization techniques. The paper also proposes Fraud Detection based Online Test (FDOT) and Behavior Identification through Visualization Techniques (BIVT), which perform more effectively than existing systems.
Full Paper

IJCST/33/3/
A-984
   133 Improving Data Quality in Applications of Dynamic Forms
Kotakonda. Madhu Babu, G. Subba Lakshmi, Ch. Raja Jacob

Abstract

Data quality is a major problem in modern, high-dimensional databases. Data entry forms present the first, and arguably best, opportunity for identifying and mitigating errors, and many methods have been studied to improve data quality during entry. In this paper we propose USHER, an end-to-end system for form design, entry, and data quality assurance. The system learns a probabilistic model over the questions of the form, and at every step of form entry this model is applied to provide better quality assurance than earlier methodologies. Before entry, it induces a form layout that captures the most important data values of a form instance as quickly as possible and reduces the complexity of error-prone questions. During entry, it dynamically adapts the form to the values being entered by providing real-time interface feedback, re-asking questions with dubious responses, and simplifying questions by reformulating them. After entry, it revisits question responses that it deems likely to have been entered incorrectly, by re-asking the question or a reformulation thereof. We evaluate these components of USHER using two real-world data sets. Our results demonstrate that USHER can improve data quality considerably at a reduced cost compared with current practice.
Full Paper

IJCST/33/3/
A-985
   134 Online Incursion Aware Aggregation With Generative Facts Issue Modeling
Bejjam Naresh, R. Naveen

Abstract

Meta-alerts are the basis for reporting to security experts and for communication within a distributed intrusion detection system. With three benchmark data sets, we demonstrate that it is possible to achieve reduction rates of up to 99.96 percent while the number of missing meta-alerts is extremely low; in addition, meta-alerts are generated with a delay of typically only a few seconds after the first alert belonging to a new attack instance is observed. Meta-alerts can be generated for clusters that contain all the relevant information, while the amount of data (i.e., alerts) is reduced substantially. Intrusion detection can be used to identify the kinds of attackers attempting to trespass into the system; we therefore cluster alerts by attack type and drive further countermeasures, such as firewalls, from the clusters. Moreover, even low rates of false alerts can easily result in a high total number of false alerts when thousands of network packets or log-file entries are inspected.
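The aggregation step can be pictured with a toy clusterer: raw alerts sharing source and attack type within a time window collapse into one meta-alert, and the reduction rate is what is left over. Alert fields, the window length, and the traffic are all invented for the example.

```python
# Toy meta-alert aggregation: merge alerts with the same (src, attack) that
# arrive within WINDOW seconds of the cluster's last alert.
WINDOW = 60  # seconds

def aggregate(alerts):
    """alerts: list of (timestamp, src_ip, attack) sorted by timestamp."""
    metas = []
    for ts, src, attack in alerts:
        for meta in metas:
            if (meta["src"], meta["attack"]) == (src, attack) and ts - meta["last"] <= WINDOW:
                meta["count"] += 1
                meta["last"] = ts
                break
        else:                      # no matching cluster: start a new meta-alert
            metas.append({"src": src, "attack": attack, "first": ts, "last": ts, "count": 1})
    return metas

alerts = [(t, "10.0.0.5", "portscan") for t in range(0, 300, 10)]
alerts += [(120, "10.0.0.9", "bruteforce")]
metas = aggregate(sorted(alerts))
reduction = 1 - len(metas) / len(alerts)
print(len(metas), round(reduction, 3))  # → 2 0.935
```

The online variant in the paper emits each meta-alert after a short delay rather than at the end of the trace, which is why the reported generation delay is only a few seconds.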
Full Paper

IJCST/33/3/
A-986
   135 An Integrated Approach to Optimizing Presentation in Networks Portion Various Flows
Sandhya Kondapureddy, Rajeshwar Singh Bondili

Abstract

We study the optimal control of communication networks in the presence of heterogeneous traffic requirements. Specifically, we distinguish two crucial classes of flows: inelastic, modeling high-priority, delay-sensitive, fixed-throughput applications; and elastic, modeling low-priority, delay-tolerant, throughput-greedy applications. The coexistence of such diverse flows creates complex interactions at multiple levels (e.g., flow and packet levels), which prevents the use of earlier design approaches that predominantly assume homogeneous traffic. In this work, we develop the mathematical framework and novel design methodologies needed to support such heterogeneous requirements, and propose provably optimal network algorithms that account for the multilevel interactions between the flows. To that end, we first formulate a network optimization problem that incorporates the throughput and service-prioritization requirements of the two traffic types. We then develop a distributed joint load-balancing and congestion control algorithm that achieves the dual goal of maximizing the aggregate utility gained by the elastic flows while satisfying the fixed throughput and prioritization requirements of the inelastic flows. Next, we extend the joint algorithm in two ways to further improve its performance: in delay, through a virtual-queue implementation with minimal throughput degradation; and in utilization, by allowing dynamic multipath routing for elastic flows. A unique characteristic of our proposed dynamic routing solution is the novel two-stage queueing architecture it introduces to satisfy the service-prioritization requirement.
Full Paper

IJCST/33/3/
A-987
   136 Rapid Imitation of Services Availability in Mesh Network With Active Path Restitution
V. Suresh, R. Naveen

Abstract

A fast simulation technique based on importance sampling is developed for the analysis of path service availability in mesh networks with dynamic path restoration. The simulated model uses "failure equivalence groups", with finite or infinite sources of failure events and finite or infinite pools of repair personnel, to facilitate the modeling of bidirectional link failures, multiple in-series link cuts, optical-amplifier failures along links, node failures, and more general geographically distributed failure scenarios. The method combines simulation of the path-rerouting algorithm with a "Dynamic Path Failure Importance Sampling" (DPFS) scheme to estimate path availabilities efficiently: in DPFS, the failure rates of network elements are biased upward until path failures are observed under rerouting. The analysis of a large mesh-network example demonstrates the practicality of the technique.
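The importance-sampling mechanism underlying DPFS can be shown in its scalar form: simulate with an inflated failure probability and reweight each sample by its likelihood ratio, so the estimator stays unbiased while rare failures become frequent in the simulation. The probabilities and sample count are illustrative, not taken from the paper.

```python
# Sketch of importance sampling for a rare failure probability: sample with a
# biased probability q >> p and correct each sample with the likelihood ratio.
import random

def estimate_failure_prob(p=0.01, q=0.5, n=20000, seed=7):
    """Estimate P(failure)=p while drawing failures with probability q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        failed = rng.random() < q                       # biased draw
        weight = (p / q) if failed else ((1 - p) / (1 - q))  # likelihood ratio
        total += weight * (1.0 if failed else 0.0)
    return total / n

est = estimate_failure_prob()
print(est)  # close to the true value 0.01
```

With plain Monte Carlo at p = 0.01, most samples contain no failure and the estimate is noisy; here half the samples are failures, and the weights shrink their contribution back to the true scale, cutting the estimator's variance dramatically for the same n.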
Full Paper

IJCST/33/3/
A-988