INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST)-VOL IV ISSUE II, VER. 3, APR. TO JUNE, 2013


S.No. Research Topic Paper ID
87 Simulation Based Performance Analysis of IEEE 802.11b Based MANETs

Hayder M. Jassim, Ghassan A. QasMarrogy, Jia Uddin, Emmanuel A. Oyekanlu

Abstract

The IEEE 802.11b standard is a protocol for interconnecting data communication tools over wireless transmission in a Local Area Network (LAN). It covers the physical and Media Access Control (MAC) layers of the ISO seven-layer network model, so investigating its performance in different ad-hoc network environments is necessary. A number of previous studies have investigated the performance of various routing protocols using different evaluation methods and simulators, with differing outcomes. There is therefore a need to broaden the scope to consider the effects of file size, number of nodes, and mobility, which were neglected in a specific environment, and to compare them with a real-life MANET. This paper presents a performance evaluation of FTP file transfer with the AODV routing protocol, in the OPNET Modeler and in a real-life network, for several scenarios using the 802.11b wireless transmission method. Simulation results demonstrate that scenarios with a small number of fixed and mobile nodes show better performance; overall network performance degrades as the number of nodes and the packet size increase.
Full Paper

IJCST/42/3/B-1506
88 An Efficient Web Based Home Appliance Control System

S. Arefin, S. M. T. Reza, M. A. Rahman, Jia Uddin

Abstract

In this paper an efficient Web-Based Home Appliance Control System (WBHACS) is proposed for people who spend most of their time away from home for office, business, or other reasons, and who worry whether their homes are secure in their absence or whether anyone is trying to break in. Working people can use the WBHACS conveniently from any part of the world. If a stranger tries to break into the home, the stranger can be identified through the closed-circuit camera integrated with the system, and the owner can call for external help, such as the police or relatives, through the WBHACS. An emergency alarm is also included in the proposed model to alert the neighbours. The system uses Wi-Fi to monitor home appliances remotely via a home personal computer. To implement the proposed model in an experimental setup, we use a PIC16F877A microcontroller, IR-TSOP modules, and an image-capture module. Visual Basic is used to build an attractive graphical user interface (GUI), and the PIC is programmed in micro C.
Full Paper

IJCST/42/3/B-1507
89 Visualization of Data for Host-Based Anomalous Behavior Detection in Computer Forensics Analysis Using Self Organizing Map

Sushil Kumar Chavhan, Smita M. Nirkhi, Dr. R. V. Dharaskar

Abstract

With the rapidly increasing complexity of computer systems and media devices, and the insufficiency of existing attack-analysis techniques, computer forensics analysis techniques need improvement. Although many forensic tools and techniques assist the analysis process, forensic analysis remains a difficult problem. Here we present an anomalous-behaviour detection system based on the Self-Organizing Map (SOM), which handles large volumes of data efficiently. The SOM can map high-dimensional data while preserving its topology. The technique assigns particular values to similar data and analyses the visualized patterns produced by the map. We present results from the implemented system that help to improve an investigation.
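The map-based detection idea can be illustrated with a minimal self-organizing map. The sketch below (synthetic two-feature data and a hypothetical 5x5 grid, not the authors' implementation) runs the classic online SOM update and then measures quantization error for the dense "normal" cluster:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "event feature" vectors: two clusters standing in for
# normal and anomalous host behaviour (synthetic, for illustration).
normal = rng.normal(loc=0.2, scale=0.05, size=(100, 2))
anomalous = rng.normal(loc=0.8, scale=0.05, size=(5, 2))
data = np.vstack([normal, anomalous])

# A small 5x5 SOM grid; each cell holds a weight vector in input space.
grid_h, grid_w, dim = 5, 5, 2
weights = rng.random((grid_h, grid_w, dim))

def bmu(x):
    """Best-matching unit: grid cell whose weights are nearest to x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), (grid_h, grid_w))

# Classic online SOM training with a shrinking Gaussian neighbourhood.
for t in range(2000):
    lr = 0.5 * np.exp(-t / 1000)        # learning-rate decay
    sigma = 2.0 * np.exp(-t / 1000)     # neighbourhood-radius decay
    x = data[rng.integers(len(data))]
    bi, bj = bmu(x)
    for i in range(grid_h):
        for j in range(grid_w):
            dist2 = (i - bi) ** 2 + (j - bj) ** 2
            h = np.exp(-dist2 / (2 * sigma ** 2))
            weights[i, j] += lr * h * (x - weights[i, j])

# After training, the tight "normal" cluster has low quantization error.
qe_normal = np.mean([np.linalg.norm(weights[bmu(x)] - x) for x in normal])
```

The quantization error (distance from a sample to its best-matching unit) is one common anomaly score for SOM-based detectors.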
Full Paper

IJCST/42/3/B-1508
90 An Effective Approach Towards Topology Control in Wireless Ad-Hoc Sensor Networks

M. Sai Lakshmi, Dr. R. V. Krishnaiah

Abstract

In recent times, cooperative wireless communication has received tremendous interest as an untapped means of improving the performance of information transmission over the ever-challenging wireless medium. Cooperative communication has emerged as a new dimension of diversity that emulates multiple-antenna strategies, since, due to size, cost, or hardware limits, a wireless mobile device may not be able to support multiple transmit antennas. Most previous work on cooperative communications focuses on link-level physical-layer issues; consequently, its impact on network-level upper-layer issues, such as topology control, routing, and network capacity, is largely unexplored. In this paper, a Capacity-Optimized Cooperative (COCO) topology control scheme is proposed to improve network capacity in mobile ad hoc networks (MANETs) by jointly considering both upper-layer network capacity and physical-layer cooperative communications. Topology control is a fundamental technique used in distributed computing to modify the underlying network in order to reduce the cost of distributed algorithms; its main aims in this domain are to save energy, reduce interference between nodes, and extend network lifetime. Through simulations, we show that the proposed topology control scheme can substantially improve network capacity in MANETs with cooperative communications, and that physical-layer cooperative communications have significant impacts on network capacity.
Full Paper

IJCST/42/3/B-1509
91 Data Security Technique in Cloud Storage

Rohini G. Khalkar, Dr. S. H. Patil

Abstract

With the recent development of cloud computing, a large number of enterprises can outsource their sensitive information for sharing in a cloud. To keep the shared information confidential against untrusted Cloud Service Providers (CSPs), a natural method is to store only encrypted data in the cloud. The key issues in this approach are establishing access control for the encrypted information and revoking access rights from users who are no longer authorized to access the encrypted data. This paper addresses both problems by combining a ciphertext-policy attribute-based encryption (CP-ABE) system and a hierarchical identity-based encryption (HIBE) system, to provide fine-grained access control, full delegation, and high performance. The HABE scheme generates the encryption key, i.e., the master key (MSK), from the hierarchical attributes, and so provides stronger security for cloud storage.
Full Paper

IJCST/42/3/B-1510
92 Implementation of Network Forensics Mechanism for Web Attack Detection

Sudhakar Parate, Smita M. Nirkhi, Dr. R. V. Dharaskar

Abstract

Network forensics is the most significant technology for investigating different types of network attacks; it supports capturing, copying, transferring, analysing, and investigating network traffic. Most web applications can be attacked by hackers even when antivirus software and a firewall are present. The proposed system identifies different types of web attacks using the KDD Cup 99 and NSL-KDD datasets as evidence. This digital evidence helps, during the investigation phase, to prepare the next steps. The evidence takes the form of log files, which are taken as input and pre-processed before training a neural network. The backpropagation algorithm is used to train the neural network and detect attacks with the help of the different datasets. Finally, the system generates a forensic report that aids the investigation.
Full Paper

IJCST/42/3/B-1511
93 Detecting Reliable Software Using SPRT: An Order Statistics Approach

D. Haritha, Dr. R. Satya Prasad

Abstract

Assessing software reliability by statistical means yields efficient results. In this paper, for effective monitoring of the failure process, we apply the Sequential Probability Ratio Test (SPRT) to the time between every r-th failure (where r is a natural number, r >= 2) instead of to inter-failure times. The paper presents a control framework based on order statistics of the cumulative quantity between observations of time-domain failure data, using the mean value function of the Logarithmic Poisson Execution Time Model (LPETM). The model's two unknown parameters are estimated using Maximum Likelihood Estimation (MLE).
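For reference, the quantities the abstract names can be written out explicitly. The following is a standard formulation, not taken from the paper itself: the LPETM mean value function in its usual Musa-Okumoto form, and Wald's SPRT decision boundaries.

```latex
% Mean value function of the Logarithmic Poisson Execution Time
% Model (Musa--Okumoto), with initial failure intensity \lambda_0
% and failure-intensity decay parameter \theta:
\mu(t) = \frac{1}{\theta}\,\ln\!\left(\lambda_0 \theta t + 1\right)

% Wald's SPRT continues sampling while the log-likelihood ratio
% stays between the two decision boundaries (type-I error \alpha,
% type-II error \beta):
\ln\frac{\beta}{1-\alpha} \;<\; \ln \Lambda_n \;<\; \ln\frac{1-\beta}{\alpha}
```

Crossing the upper boundary rejects the null hypothesis (reliability not met); crossing the lower boundary accepts it; otherwise testing continues.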
Full Paper

IJCST/42/3/B-1512
94 A Survey on Resourceful Estimation of Wireless Sensor Networks

Mahendrakar Yashoda Bai, Dr. R. V. Krishnaiah

Abstract

Recent technological advances have made the manufacture of small, low-cost sensors technically and economically feasible. The sensing electronics measure ambient conditions in the environment surrounding the sensor and convert them into an electrical signal. A wireless sensor network (WSN) consists of spatially dispersed autonomous sensors that monitor physical or environmental conditions and cooperatively pass their data through the network to a main location. If congestion occurs in the wireless network, important data may be dropped. This paper handles that problem by addressing differentiated delivery requirements: a class of algorithms is proposed to enforce differentiated routing based on the congested areas of the network and on data priority. A basic protocol, called Congestion-Aware Routing (CAR), uses simple forwarding rules to discover the congested zone of the network that lies between high-priority data sources and the data sink, and dedicates this part of the network to forwarding mainly high-priority traffic. Since CAR requires some overhead to establish the high-priority routing zone, it is unsuitable for highly mobile data sources. We therefore define MAC-Enhanced Congestion-Aware Routing (MCAR), which includes medium-access-control (MAC) layer enhancements and a protocol for forming high-priority paths on the fly for each burst of data. MCAR efficiently handles the mobility of high-priority data sources, at the expense of degrading the performance of low-priority traffic.
Full Paper

IJCST/42/3/B-1513
95 Different Combinations of Color and Noise in Captcha Generation

Kanika Singhal, R S Chadha

Abstract

Today, the Internet has become a global tool for accessing services, whether education, entertainment, or e-commerce. To register on such sites, a distorted image of pseudorandom letters and digits must be entered at the end of the form in order to gain access to the service. That distorted image is called a Captcha (Completely Automated Public Turing test to tell Computers and Humans Apart). In this paper we present a technique combining noise and colour in the Captcha, which makes it more human-friendly yet harder for automated programs to break.
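As a rough illustration of the noise-and-colour combination (rendering the distorted text itself would need an imaging library and is omitted), a NumPy sketch on a plain canvas; the tint range and noise fraction below are arbitrary choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_color_and_noise(img, noise_frac=0.05):
    """Tint an RGB image with a random colour and sprinkle
    salt-and-pepper noise over a fraction of its pixels."""
    out = img.astype(np.float64)
    tint = rng.uniform(0.6, 1.0, size=3)    # random per-channel tint
    out *= tint
    h, w, _ = out.shape
    n = int(noise_frac * h * w)
    ys = rng.integers(0, h, n)
    xs = rng.integers(0, w, n)
    # Each chosen pixel becomes pure black or pure white (salt/pepper).
    out[ys, xs] = rng.choice([0, 255], size=(n, 1))
    return out.clip(0, 255).astype(np.uint8)

# A plain grey 60x160 canvas standing in for a rendered Captcha string.
canvas = np.full((60, 160, 3), 200, dtype=np.uint8)
noisy = add_color_and_noise(canvas)
```

In a full generator, the same noise and tint would be applied over the rendered text so that character segmentation becomes harder for an automated solver.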
Full Paper

IJCST/42/3/B-1514
96 Detection of Error-Prone Software Modules Using Neural Network

U. Ankaiah, M. R. Narasinga Rao, V. Ramakrishna

Abstract

Software complexity metrics of a software module represent a measure of the functional complexity of the module. Classifying software modules into different error-prone categories based on their complexity metrics is a difficult problem in software engineering. This research investigates the applicability of neural-network classifiers for identifying fault-prone software modules using a dataset from a software system. A prototype multilayer perceptron classifier using a modified backpropagation algorithm is constructed for this purpose. Our preliminary results suggest that a multilayer perceptron network can be used as a tool for identifying fault-prone software modules. Other issues, such as the representation of software metrics, are also discussed.
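The classifier idea can be sketched with a minimal multilayer perceptron trained by plain backpropagation. The two synthetic "metrics" and the decision rule below are invented for illustration; they are not the paper's dataset, architecture, or modified algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic complexity metrics (imagine scaled LOC and cyclomatic
# complexity); modules in the upper-right region are "fault-prone".
X = rng.random((200, 2))
y = ((X[:, 0] + X[:, 1]) > 1.2).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, trained by batch backpropagation.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer delta
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

accuracy = ((out > 0.5) == (y > 0.5)).mean()
```

On this toy separable data the network should comfortably beat the majority-class baseline; a real study would of course use held-out modules and actual metric data.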
Full Paper

IJCST/42/3/B-1515
97 Finding Best Probable Combinations of Similarity Measures Using Bayesian Networks

Kalpana Nigam, Monica Mehrotra

Abstract

Establishing alignments between two entities from two different ontologies comes down to finding the similarity between two strings. Out of the many available similarity measures, it is difficult to select the best one or the best combination. In this paper we explore the use of Bayesian networks to aid decision making under uncertainty. We present a method that learns Bayesian networks to find the best possible combination of similarity measures to aggregate when computing the final similarity value of two strings.
Full Paper

IJCST/42/3/B-1516
98 Wireless LAN Intrusion Prevention System (WLIPS) for Evil Twin Access Points

Sachin R. Sonawane, Sandeep Vanjale, Dr. P. B. Mane

Abstract

Nowadays, wireless access points are widely used for the convenience of mobile users, but the growing popularity of the Wireless Local Area Network (WLAN) brings various wireless security threats. The presence of rogue access points is one of the most difficult network-security concerns for a system administrator. An Evil Twin is essentially a rogue Wi-Fi access point that looks like an authorized AP; if undetected, Evil Twin access points can steal important information from the network. Many attackers have taken advantage of undetected Evil Twin access points in enterprises, not only to get free Internet access but also to view confidential information. Most current approaches to detecting rogue access points are not automated and are tied to a particular wireless technology. An undetected Evil Twin access point is one of the serious threats in a wireless local area network, since an attacker can use it to launch man-in-the-middle (MITM) and Evil Twin attacks on users. In this paper, we present a new approach for detecting Evil Twin attacks in WLANs.
Full Paper

IJCST/42/3/B-1517
99 Multi Type Operating System for Android Phones

Ajit Singh Raghav, Ritika Chugh Malik

Abstract

In this paper we have designed and implemented an Android application that enables users to interact with the Windows operating system. The application takes the form of a desktop widget and supports Android 2.3.3 and onwards. A multi-OS window manager is set up to manage the phone system, and windows are automatically sized to fill the whole screen. The main focus was to develop and introduce a new application that boots the Windows operating system inside the Android environment; it runs like native Windows and provides all the features of the Windows operating system.
Full Paper

IJCST/42/3/B-1518
100 An Unused Bandwidth Utilization in Wireless Broad Band Networks

Ram Rajesh Maddineni, A. V. Praveen Krishna, Rama Krishna Gavirneni

Abstract

High-speed networks are capable of carrying many types of services, for instance voice, data, images, and video. These services have different requirements in terms of bandwidth, cell loss, delay, etc. To provide quality of service, each application reserves bandwidth from the Base Station (BS). However, it is difficult for a Subscriber Station (SS) to predict the amount of incoming data, so to guarantee quality of service a subscriber station may reserve more bandwidth than it needs, and bandwidth is allocated to subscriber stations by priority. The mechanism reserves bandwidth through a dynamic resource-reservation scheme [2], which can dynamically reconfigure the network to benefit from network resources across different types of services. The idea of the proposed scheme is to allocate bandwidth by priority and to recycle the unused bandwidth in PMP mode.
Full Paper

IJCST/42/3/B-1519
101 A Data Mining Based Approach in IDS Design

Rakesh Yadav, Mahesh Malviya

Abstract

Security is a major issue these days, at the application level as well as in network-level applications and utilities. This paper presents a new approach based on process mining. In daily use we run various computer-based applications and interact with them through different processes. Some processes are well known and support useful work, but others are malicious and interrupt various applications. In this project we introduce the classification of malicious processes for use in IDS development. For that purpose we analyse different processes collected from the server and from client machines.
Full Paper

IJCST/42/3/B-1520
102 Comparative Study on Object Oriented Approach and Aspect Oriented Approach for the Software Development

Dishek J. Mankad

Abstract

This paper presents research in which a new programming technique is applied to an established branch of software development. Its main purpose is to examine whether, and how easily, aspect-oriented programming can be used in software development. The challenge in today's technological environment is to keep evolving older systems so that they remain compatible with the current technological environment. The most common approach has been to migrate code to object-oriented code; however, there are various other paradigms a software developer might adopt. Aspect-oriented technology is an emerging concept for software development that is receiving considerable attention from research and practitioner communities alike. The approach uses reverse-engineering activities to abstract an object-oriented model from existing code; the methodology migrates it incrementally by decomposing the existing system into notable sets of components, each of which potentially implements an object. In this paper we focus on work done in evolving systems using the object-oriented approach, then analyse the impact of object-oriented and aspect-oriented technology on system development and the environment required to implement the two paradigms.
Full Paper

IJCST/42/3/B-1521
103 Web Based GIS for Disaster Management System

Manish Doshi, Chandresh Rathod

Abstract

Natural hazards such as earthquakes and floods become disasters when they strike the human environment. To reduce the impact of a disaster, governments fix a strategy called disaster management. The availability of data on lifeline systems, roads, hospitals, and buildings helps managers make better decisions. This system exploits a fundamental principle of geography: location matters in human life. A GIS is used to inventory, analyse, and manage many aspects of the world; it takes numbers and words from a database and puts them on a map. Today, many organizations involved in disaster management need access to the right data at the right time to make the right decisions. Using this system, managers can easily access information about a disaster at any time, wherever they are. Disaster management comprises several major phases: planning, mitigation, and preparedness are pre-event phases, while response and recovery occur during and after the event. These phases are related by time and function to all types of emergencies and disasters. As disasters (earthquakes, floods, hurricanes, etc.) are usually spatial events, all phases of disaster management depend on data from a variety of sources, so a Geographical Information System, as a tool to collect, store, analyse, and display large amounts of spatial information layers, supports all aspects of disaster management [4]. The proposed system provides geographic information for various disaster emergencies in the form of maps, reports, and statistics, with quick response via SMS or e-mail. The proposed web-based GIS is designed to work on geographical data and processes various maps, including the database information related to those maps. The system does not provide information for the pre-event phases of disaster management, i.e., preparedness and planning.
Full Paper

IJCST/42/3/B-1522
104 Review Paper on Punjabi Text Mining Techniques

Shruti Aggarwal, Salloni Singla

Abstract

Text mining is a field that extracts useful, previously undiscovered information from text documents according to users' needs: the discovery by computer of new, previously unknown information through automatic extraction from different written resources. Text classification is one of the text-mining tasks used to manage information efficiently, by classifying documents into classes using classification and clustering algorithms. Each text document is characterized by a set of features used in the text-classification method, and these features should be relevant to the task. This paper presents techniques that use text-mining algorithms to identify the exact keywords of Punjabi newspaper text and to extract that text.
Full Paper

IJCST/42/3/B-1523
105 Ways of Disseminating Messages in Delay Tolerant Networks

Sweety Soni, Shruti Gupta

Abstract

Delay-Tolerant Networks (DTNs) have great potential for connecting devices and regions of the world that are presently under-served by current networks. A vital challenge for DTNs is to determine routes through the network without ever having an end-to-end path, or knowing which "routers" will be connected at any given instant. The problem has the added constraint of limited buffer size at each node. In this project we try to maximize the message delivery rate without increasing the number of messages discarded. The number of messages discarded is directly related to the bandwidth used and the battery consumed: the more messages discarded, the more bandwidth and battery every node spends transmitting them. We propose an algorithm in which messages are disseminated into the network faster with fewer replications of individual messages. The history of a node's encounters with other nodes gives noisy but valuable information about the network topology; using this history, we route packets from one node to another with an algorithm that depends on each node's currently available neighbours/contacts and on the nodes it has encountered in the recent past.
Full Paper

IJCST/42/3/B-1524
106 An Approach Towards Steadiness and Capability of Regular Wireless Networks

S. Sainath Reddy, P. Vijay Kumar

Abstract

The node deployment problem in Wireless Sensor Networks (WSNs) deals with strategies for placing individual Sensor Nodes (SNs) such that the resulting network satisfies given constraints. A popular method of deploying WSNs is to scatter the sensor nodes randomly across the area to be monitored. This paper describes the problem of fair rate allocation that maximizes network throughput in standard WSN topologies. To monitor the entire coverage area of the WSN while maintaining acceptable network throughput, we need to find the optimal rate allocation for the individual end-to-end sessions that maximizes the total proportionally fair throughput of the network. We provide closed-form expressions for the optimal end-to-end session rates for these topologies, as well as bounds on the link-layer transmission probabilities. We study the problem for regular WSNs with a slotted Aloha MAC layer, which provides a lower bound for more sophisticated MAC (medium access control) protocols. Real-world experiments using TelosB nodes validate our theory and results, and simulations carried out in QualNet verify our comparisons across the different regular topologies as the network grows.
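For context, the slotted-Aloha MAC mentioned above has the textbook throughput S = G * e^(-G), where G is the offered load in frames per slot; it peaks at G = 1 with S = 1/e, which is why Aloha-based analyses yield conservative lower bounds. A small sketch:

```python
import math

def slotted_aloha_throughput(G):
    """Expected successful transmissions per slot when the offered
    load is G frames per slot (classic slotted-Aloha result)."""
    return G * math.exp(-G)

# Throughput peaks at G = 1, giving S = 1/e ~ 0.368 frames per slot;
# under- or over-loading the channel lowers the success rate.
peak = slotted_aloha_throughput(1.0)
```

Both a lightly loaded channel (few attempts) and a heavily loaded one (many collisions) fall below this peak, which bounds what any session-rate allocation can extract from the MAC layer.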
Full Paper

IJCST/42/3/B-1525
107 SRGM Analyzers Tool of SDLC to Ensure Software Reliability and Quality

Sandeep P. Chavan, Dr. S. H. Patil

Abstract

In this paper, we have developed a software analyser tool for deriving several software reliability growth models based on the Enhanced Non-Homogeneous Poisson Process (ENHPP) in the presence of imperfect debugging and error generation. The offered models are initially formulated for the case in which there is no differentiation between the failure-observation and fault-removal testing processes, and then extended to the case in which the two are clearly differentiated. Many software reliability growth models (SRGMs) have been developed to describe software failures as a random process and can be used to assess development status during testing. With software reliability growth, software engineers can easily measure (or forecast) software reliability (or quality) and plot software reliability growth charts. It is not easy to select the best tool for improving software quality, and few SRGMs in the software-engineering literature differentiate between the failure-observation and fault-removal processes. In a real software-development setting, the number of failures observed need not equal the number of faults removed: owing to the complexity of software systems and an imperfect understanding of the software, the testing team may not be able to remove a fault perfectly on observing a failure. The actual fault may remain, a phenomenon known as imperfect debugging, or be replaced by another fault, causing error generation. In the case of imperfect debugging, the error content of the software remains the same; in the case of error generation, the error content increases as testing progresses, since replacing observed faults may introduce new faults.
Full Paper

IJCST/42/3/B-1526
108 Iris Data Indexing Method Using Biometric Features

P. S. Theakaraja, Dr. N. Rama Dass

Abstract

A biometric system identifies an individual based on a unique feature or characteristic the individual possesses. Among the available biometric identification systems, iris recognition is regarded as the most reliable and accurate, and demands are increasing to handle large-scale databases in such applications. The person-identification module comprises segmentation: boundary detection, edge mapping, the circular Hough transform, circle detection, and extraction of the region of interest with eyelash and noise removal. The segmentation stage, based on the Hough transform, localizes the circular iris and pupil regions as well as occluding eyelids, eyelashes, and reflections. The extracted iris region is normalized into a rectangular block of constant dimensions to account for imaging inconsistencies. Finally, the output of Gabor filters is extracted and quantized to encode the unique pattern of the iris into a biometric template. To improve computational efficiency and classification accuracy, a difference metric and a subtraction method are employed; this method is observed to classify the images with better accuracy. The Hamming distance is employed for classification of iris templates. Iris recognition is shown to be a reliable and accurate biometric technology.
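The template-matching step can be illustrated with the standard fractional Hamming distance over masked iris codes, where occlusion masks exclude bits hidden by eyelids, eyelashes, or reflections. The bit patterns below are toy values, not real templates:

```python
import numpy as np

def fractional_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits between two iris codes,
    counted only where both occlusion masks mark bits as valid."""
    valid = mask_a & mask_b
    if not valid.any():
        return 1.0  # nothing comparable: treat as a non-match
    diff = np.count_nonzero((code_a ^ code_b) & valid)
    return float(diff / np.count_nonzero(valid))

a = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
b = np.array([1, 0, 0, 1, 0, 1, 1, 0], dtype=bool)
m = np.ones(8, dtype=bool)           # no occluded bits in this toy case
hd = fractional_hamming(a, b, m, m)  # 2 of 8 bits differ -> 0.25
```

A distance below a chosen threshold (often around 0.3 in the iris literature) is taken as a match between the two templates.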
Full Paper

IJCST/42/3/B-1527
109 A Survey on Distributed System Applications Using Design Pattern

Jayashree D. Jadhav, Manjusha Joshi, Dr. S. D. Joshi, Dr. R. M. Jalnekar

Abstract

Distributed systems span numerous areas of computer science, for example machine architecture, networking, operating systems, embedded devices, and security. On top of new technologies such as mobile and wearable PCs, distributed systems permeate everyday life, in homes and at work. Different aspects of distributed computing, such as performance, security, and reliability, can determine the success or failure of many organizations, cities, or even countries. Although there are a large number of software-development methods for standalone software, little effort has gone into specialized techniques focused on the development of Distributed Applications (DAS) in the era of the Internet and web-based applications. The key idea of this paper is to study recent applications developed for distributed systems using design patterns. The paper also describes the working, advantages, and weaknesses of some recent distributed-system applications built with different design patterns.
Full Paper

IJCST/42/3/B-1528
110 Investigation of Vulnerabilities in Cloud Environment: A Research Analysis

Arti Sharma, Pawanesh Abrol

Abstract

Cloud computing is currently gaining remarkable popularity because of its enormous advantages. However, several security challenges associated with the cloud environment inhibit proper adoption of the technology, and vulnerability is one of them. Vulnerabilities make a system prone to external attacks and problems, and they keep increasing every year. There is a need to classify and manage vulnerabilities efficiently and to analyse them to identify measures that reduce them effectively, thereby improving the security of the system. One way to deal with vulnerabilities is to find and patch them; standards and efficient methods and tools are required for this. The present study assesses vulnerabilities in a cloud environment, with two major objectives: first, to identify and understand the different types of vulnerabilities in the cloud environment; second, to analyse them and propose ways and means of handling them effectively. Recommendations are presented on the basis of the results obtained.
Full Paper

IJCST/42/3/B-1529
111 A Survey and Comparison of WordNet Based Semantic Similarity Measures

Ayesha Banu, Syeda Sameen Fatima, Khaleel Ur Rahman Khan

Abstract

Semantic similarity concerns computing the similarity between concepts within an ontology. We explore the different categories of approaches to computing semantic similarity, and the most popular measures are evaluated using WordNet as the source ontology. We compare the measures on the benchmark dataset of Miller and Charles with WordNet, ranking them both within each category and overall.
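One popular WordNet-based measure, Wu-Palmer similarity, can be illustrated on a hypothetical miniature taxonomy; it scores two concepts by the depth of their least common subsumer (LCS) relative to their own depths. NLTK's `wordnet` module exposes the real measure as `wup_similarity`; the tiny child-to-parent table below is invented for illustration:

```python
# Hypothetical miniature taxonomy: child -> parent ("animal" is the root).
parent = {
    "corgi": "dog", "poodle": "dog",
    "dog": "canine", "wolf": "canine",
    "canine": "mammal", "cat": "mammal",
    "mammal": "animal", "bird": "animal",
}

def ancestors(node):
    """Path from node up to the root, node first."""
    path = [node]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def depth(node):
    return len(ancestors(node))  # root has depth 1

def wu_palmer(a, b):
    """sim(a, b) = 2 * depth(LCS) / (depth(a) + depth(b))."""
    anc_b = set(ancestors(b))
    lcs = next(n for n in ancestors(a) if n in anc_b)  # deepest shared ancestor
    return 2 * depth(lcs) / (depth(a) + depth(b))
```

Here `wu_palmer("corgi", "poodle")` gives 0.8 (LCS "dog" is deep), while `wu_palmer("corgi", "cat")` gives 0.5 (LCS "mammal" is shallower), matching the intuition that siblings are closer than cousins.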
Full Paper

IJCST/42/3/B-1530
112 Implementation of TCP Fast Open (TFO) and Proportional Rate Reduction (PRR) Mechanism by Google in TCP for Faster Web

M. Vijayalakshmi, G. Parthasarathi

Abstract

In the current cyber world, Google maintains the fastest web service to date, and it plans to make it even better and faster in the future. Its main objective is to tackle the challenges that make the Web slow and prevent it from delivering its full potential; these efforts could be boosted by forthcoming protocols such as HTML 5.0. Google's research shows that network latency for web transfers can be reduced by over 10% by increasing the initial congestion window from 3 packets to 10 packets. They also propose reducing the initial timeout from 3 seconds to 1 second, as the present Internet warrants a timeout of less than 3 seconds [3]. Google therefore proposes the TCP Fast Open (TFO) protocol, which permits data transfer during TCP's initial handshake.
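On Linux, TCP Fast Open is controlled by a kernel tunable; a minimal sketch of inspecting and enabling it (value semantics per the kernel's ip-sysctl documentation; enabling requires root, and the exact default varies by kernel version):

```shell
# Inspect the current TCP Fast Open setting. The value is a bitmask:
# 1 = enable for outgoing connections (client),
# 2 = enable for listening sockets (server),
# 3 = both.
sysctl net.ipv4.tcp_fastopen

# Enable TFO for both clients and servers (requires root).
sysctl -w net.ipv4.tcp_fastopen=3
```

Applications must additionally opt in (for example, servers via the `TCP_FASTOPEN` socket option) before data can ride on the SYN.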
Full Paper

IJCST/42/3/B-1531
113 Design and Analysis of Image De-Noising Filter Based on Pattern Recognition: A Practical Approach

Arun Agarwal, Jaswinder P Singh, Gaurav Srivastava

Abstract

Image-filtering algorithms are applied to images to remove the different types of noise that are either present in the image during capture or injected into the image during transmission. Digital images, when captured, usually contain Gaussian noise, speckle noise, and salt-and-pepper noise [10]. The performance of the filters is compared using the Peak Signal-to-Noise Ratio (PSNR) and the Mean Square Error (MSE) [8-9]. In this paper we propose a spatial-domain filter based on a pattern-recognition technique [1] using a varying-size mask, not necessarily a square matrix, ranging from 4 x 4 to 7 x 7. With the proposed filter, the MSE is reduced by 43% for salt-and-pepper noise, 51% for Gaussian noise, and 52% for speckle noise, while the PSNR is increased by 6% for salt-and-pepper noise and by 12% for both Gaussian and speckle noise. The image used for the analysis is the standard woman-with-hat image taken by Ron Bucci.
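The two quality metrics used for the comparison are standard and easy to state directly; a minimal NumPy sketch with a toy uniform-error example (the 8x8 arrays below are illustrative, not the test image):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

clean = np.full((8, 8), 100, dtype=np.uint8)
noisy = np.full((8, 8), 110, dtype=np.uint8)  # uniform error of 10 per pixel
# MSE = 10^2 = 100, so PSNR = 10 * log10(255^2 / 100) ~ 28.13 dB
```

A filter that lowers MSE on the noisy image thus raises PSNR by construction; reporting both, as the paper does, is the conventional practice.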
Full Paper

IJCST/42/3/B-1532
114 Digital Signal Processing With Examples and Its Applications

Abhilash Sharma, Avikar Sharma, Ajay Kumar, Samriti Sharma

Abstract

Advances in digital audio technology are fueled by two sources: hardware developments and new signal processing techniques. When processors dissipated tens of watts of power and memory densities were on the order of kilobits per square inch, portable playback devices like an MP3 player were not possible. Now, however, power dissipation, memory densities, and processor speeds have improved by several orders of magnitude. Advancements in signal processing are exemplified by Internet broadcast applications: if the desired sound quality for an Internet broadcast used 16-bit PCM encoding at 44.1 kHz, such an application would require a 1.4 Mbps channel for a stereo signal! Fortunately, new bit-rate-reduction techniques in signal processing for audio of this quality are constantly being released. Increasing hardware efficiency and an expanding array of digital audio representation formats are giving rise to a wide variety of new digital audio applications. These applications include portable music playback devices, digital surround sound for cinema, high-quality digital radio and television broadcast, Digital Versatile Disc (DVD), and many others. This paper introduces digital audio signal compression, a technique essential to the implementation of many digital audio applications. Digital audio signal compression is the removal of redundant or otherwise irrelevant information from a digital audio signal—a process that is useful for conserving both transmission bandwidth and storage space. We begin by defining some useful terminology. We then present a typical “encoder” (as compression algorithms are often called), and explain how it functions. Finally we consider some standards that employ digital audio signal compression, and discuss the future of the field.
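The 1.4 Mbps figure quoted above follows directly from the PCM parameters; the 128 kbps comparison rate in this sketch is an assumed, commonly used compressed bit rate, not a number from the paper:

```python
sample_rate = 44_100        # samples per second, per channel
bits_per_sample = 16        # 16-bit PCM
channels = 2                # stereo

bitrate_bps = sample_rate * bits_per_sample * channels
print(bitrate_bps)                      # 1411200 bit/s, i.e. ~1.4 Mbps

# Against an assumed 128 kbps compressed stream, the encoder must
# achieve roughly an 11:1 reduction.
print(round(bitrate_bps / 128_000, 1))  # 11.0
```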
Full Paper

IJCST/42/3/B-1533
115 Botnet Construction and Monitoring Using Honeypot, Based on Network Security

Rajeswari.M, Bhavani.A, Kamaleswari.S.P

Abstract

A “botnet” is a network of compromised computers controlled by an attacker (the “botmaster”). Recently, botnets have become the root cause of many Internet attacks. Machines with broadband connections that are always on are an especially valuable target, and crackers use them for their own advantage. To be well prepared for future attacks, it is not enough to study how to detect and defend against the botnets that have appeared in the past. More importantly, we should study advanced botnet designs that botmasters could develop in the near future. In this paper, we present the design of an advanced hybrid peer-to-peer botnet. Compared with current botnets, the proposed botnet is harder to shut down, monitor, and hijack. It provides robust network connectivity, individualized encryption, control-traffic dispersion, limited botnet exposure by each bot, and easy monitoring and recovery by its botmaster. Possible defenses against this advanced botnet are suggested. In this paper we detect the attacker both with and without the honeypot technique; “honeypots” have been successfully deployed in many defense systems. A group of bots, referred to as a botnet, is remotely controllable by a server and can be used for sending spam mail, stealing personal information, and launching DDoS attacks.
Full Paper

IJCST/42/3/B-1534
116 A Literature Review of Various Color Constancy Techniques in Digital Image Processing

Buta Singh, Ashok Kumar Bathla

Abstract

This paper presents an overview of various color constancy techniques. Color constancy is the ability to estimate the color of the light source. Different illuminants may change the appearance of an image compared with the same scene taken under a canonical light source. Human vision has a natural tendency to estimate the color of the light source, but this mechanism is not fully understood. This paper therefore presents various computational methods for estimating the effect of the color of different light sources on a digital image, surveying color constancy techniques under different illumination conditions. Experimental and visual results show a comparative performance analysis of the various color constancy techniques.
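As one concrete example of the computational methods such surveys cover, the classic gray-world algorithm estimates the illuminant as the per-channel mean of the image; this is a minimal sketch, and the flat reddish test patch is illustrative:

```python
import numpy as np

def gray_world(image):
    """Gray-world color constancy: assume the average scene reflectance
    is achromatic, so the per-channel means estimate the illuminant."""
    image = image.astype(np.float64)
    illuminant = image.reshape(-1, 3).mean(axis=0)   # estimated light color
    gain = illuminant.mean() / illuminant            # scale each channel
    corrected = np.clip(image * gain, 0, 255)
    return illuminant, corrected.astype(np.uint8)

# A flat patch lit by a reddish source:
img = np.tile(np.array([200, 100, 100], dtype=np.uint8), (4, 4, 1))
light, out = gray_world(img)
print(light)        # estimated illuminant: [200. 100. 100.]
print(out[0, 0])    # corrected pixel, pulled toward neutral gray
```

Other families of methods in the literature (white-patch/max-RGB, gamut mapping, learning-based) differ mainly in how they form the illuminant estimate.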
Full Paper

IJCST/42/3/B-1535
117 A Review on Security Concerns on Internet Voting

Jagdish B. Chakole, P. R. Pardhi

Abstract

Nowadays almost every manual system is being computerized, which increases efficiency and reduces response time. Voting systems are also taking advantage of computerization, and it is now time to take advantage of the Internet for voting. In our proposed system, we are interested in secure online Internet voting: for availability we will use redundancy of the system's components, and for security and confidentiality we will rely on digital signatures and cryptography.
Full Paper

IJCST/42/3/B-1536
118 Fine Grained Distributed Firewalls Anomaly Detector and Solver With Policy-Based Segmentation Technique

Rajesh B, Kiran Kumar D, Sunil Kumar B, Poorna Chander V

Abstract

Firewalls are among the most pervasive network security mechanisms for providing business application services over the Internet, deployed extensively from the borders of networks to end systems. With the advent of global Internet connectivity, network security has gained significant attention in the research and industrial communities. With the daily increasing threat of network attacks, firewalls have become important elements for applications of all sizes. Yet business services (web applications) still suffer from unintended security leakage through unauthorized actions, and designing and managing web access control policies is often error-prone due to the lack of effective analysis mechanisms. Firewall policy configuration is an important factor in determining firewall security efficiency. In this paper, we introduce the Distributed Firewall Anomaly Detector and Solver (DFADS) to detect all anomalies that could exist in a single- or multi-firewall environment. We also present techniques and approaches to automatically discover policy anomalies in centralized and distributed legacy firewalls. DFADS uses a policy-based segmentation technique to accurately identify policy anomalies and derive effective anomaly resolutions, along with an intuitive visual representation of the analysis results. Our experiments support that this mechanism efficiently resolves policy configuration problems.
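The DFADS segmentation algorithm itself is in the paper, but the basic shadowing anomaly any such detector must catch can be sketched with interval arithmetic; the rule fields, values, and two-rule policy below are illustrative, not from the paper:

```python
# Illustrative rule shape: (src_lo, src_hi, port_lo, port_hi, action).
# Real firewall rules carry more fields; two intervals keep the sketch short.

def covers(a, b):
    """True if rule a's match space fully contains rule b's."""
    return (a[0] <= b[0] and b[1] <= a[1] and
            a[2] <= b[2] and b[3] <= a[3])

def find_shadowing(rules):
    """A rule is shadowed if an earlier rule matches everything it
    matches but takes the opposite action, so it can never fire."""
    anomalies = []
    for i, earlier in enumerate(rules):
        for j in range(i + 1, len(rules)):
            later = rules[j]
            if covers(earlier[:4], later[:4]) and earlier[4] != later[4]:
                anomalies.append((i, j, "shadowing"))
    return anomalies

rules = [
    (0, 255, 0, 65535, "deny"),    # deny all traffic from the range
    (10, 20, 80, 80, "allow"),     # never reached: fully shadowed
]
print(find_shadowing(rules))       # [(0, 1, 'shadowing')]
```

Other anomaly classes in the literature (generalization, correlation, redundancy) follow from partial rather than full overlap of the same match spaces.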
Full Paper

IJCST/42/3/B-1537
119 Modelling of Requirement Elicitation for the Complete Banking System Using Agent Goal Decision Information Approach

Sandeep Mathur, Girish Sharma, A K Soni

Abstract

Most data warehouse projects fail to meet the business requirements and business goals. One reason is that requirement analysis is typically not done with the actual working conditions and situations in mind, which leads to an improper requirement engineering phase. The chaos throughout requirements development evolves from the disparity between users and developers, resulting in project devastation and termination. Building a data warehouse is a very challenging task, and data warehouse quality depends on the quality of its requirement engineering models. Agent orientation is emerging as a unique paradigm for constructing data warehouses; agent-oriented systems are expected to be more powerful, more flexible, and more robust than conventional software systems. Here we present a detailed discussion of the agent-oriented AGDI model used in both early and late requirement elicitation. The approach is illustrated through a case study of the general banking system in [24], for which a data warehouse is to be built to support decisional goals. In this paper the complete solution for the banking system is explored.
Full Paper

IJCST/42/3/B-1538
120 Dynamic Virtual Machine Based Proactive Fault Tolerant Scheme Over Cloud

Deepshikha Goutam, Ashok Verma, Neha Agrawal

Abstract

Security, process failure rate, and performance are important concerns in a cloud-based environment. A lot of research is currently underway into how clouds can provide fault tolerance for an application. When there are too many processes and a virtual machine becomes overloaded, processes fail, causing a lot of rework and annoyance for users. The major causes of process failure at the virtual machine level are overloading of virtual machines, extra resource requirements of existing processes, and so on. This work introduces a dynamic load balancing technique for the cloud environment which proactively decides whether a process can be placed on an existing virtual machine or should be assigned to a freshly created or another existing virtual machine, and in this way tackles the occurrence of faults. Load balancing of process load in cloud computing has been researched previously, but proactive load balancing is an area where much work remains. This paper proposes a mechanism that proactively assesses the load on the virtual machines and, according to the requirement, either creates a new virtual machine or uses an existing one for assigning the process. Once a process completes, it updates the virtual machine's status at the broker service so that other processes can be assigned to it.
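The decision loop described above can be sketched as a toy broker; the capacity, threshold, and class names here are illustrative assumptions, not the paper's actual scheme:

```python
class Broker:
    """Toy proactive scheduler: place a process on an existing VM only
    if that VM would stay under a load threshold; otherwise provision
    a new VM.  Capacity and threshold values are illustrative."""

    def __init__(self, vm_capacity=100, threshold=0.8):
        self.vm_capacity = vm_capacity
        self.threshold = threshold
        self.vms = []                      # current load per VM

    def assign(self, demand):
        limit = self.vm_capacity * self.threshold
        # Prefer the least-loaded VM that can still absorb the process.
        candidates = [i for i, load in enumerate(self.vms)
                      if load + demand <= limit]
        if candidates:
            i = min(candidates, key=lambda i: self.vms[i])
        else:
            self.vms.append(0)             # provision a fresh VM
            i = len(self.vms) - 1
        self.vms[i] += demand
        return i

    def release(self, vm_id, demand):
        """On completion, the process updates the broker's VM status."""
        self.vms[vm_id] -= demand

b = Broker()
print(b.assign(50))   # 0: first VM provisioned
print(b.assign(40))   # 1: VM 0 would exceed the threshold, new VM used
```

Checking the threshold before placement, rather than reacting after a failure, is what makes the scheme proactive.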
Full Paper

IJCST/42/3/B-1539
121 Efficiently Controlling the Traffic in Wireless Network

B.V.Srinivasulu, J.Mahalakshmi

Abstract

As the Internet takes a central role in our communications infrastructure, the slow convergence of routing protocols after a network failure becomes a growing problem. To assure fast recovery from link and node failures in IP networks, we present a new recovery scheme called Multiple Steering Pattern (MSP). MSP is based on keeping additional routing information in the routers, and allows packet forwarding to continue on an alternative output link immediately after the detection of a failure. Our proposed scheme guarantees recovery in all single-failure scenarios, using a single mechanism to handle both link and node failures, and without knowing the root cause of the failure. MSP is strictly connectionless and assumes only destination-based hop-by-hop forwarding. It can be implemented with only minor changes to existing solutions. In this paper we present MSP and analyze its performance with respect to scalability, backup path lengths, and load distribution after a failure.
Full Paper

IJCST/42/3/B-1540
122 Network Coding with MGM based Anycast Packet Transmission in Vehicular Ad-Hoc Networks

Ankit Patel, Zunnun Narmawala

Abstract

Mobile ad-hoc network routing protocols such as AODV and DSR fail in scenarios where no contemporaneous path exists between source and destination, because they try to find an end-to-end path before data transmission; in a VANET such a path may not exist, which increases delivery delay and decreases delivery ratio. VANETs therefore use the 'store-carry-forward' paradigm. In network coding, a source or intermediate node is allowed to combine a number of packets it has received or generated into one or several outgoing packets. Reliability is one of the issues in network coding, so we use network coding with multi-generation mixing, in which packets are grouped into generations and generations are grouped into mixing sets. We propose an anycast routing protocol for VANETs which uses network coding with multi-generation mixing to improve performance. Through simulation, we compared the performance of the proposed protocol, in terms of delay, delivery ratio, and throughput, with the same protocol using plain network coding and using a conventional scheme. Simulation results suggest our protocol achieves significantly lower delay, higher delivery ratio, and higher throughput than both the network-coding-based scheme and the conventional scheme.
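The packet-combination idea can be sketched in its simplest form, XOR coding over one generation. Real schemes, including multi-generation mixing, use random linear combinations over a larger field; the packet contents below are illustrative:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(generation, mask):
    """One coded packet: the XOR of the equal-length packets in the
    generation selected by mask."""
    coded = bytes(len(generation[0]))
    for pkt, used in zip(generation, mask):
        if used:
            coded = xor_bytes(coded, pkt)
    return coded

def recover_missing(coded, known):
    """Recover the single missing packet in the combination by
    XOR-ing out every packet the receiver already holds."""
    for pkt in known:
        coded = xor_bytes(coded, pkt)
    return coded

gen = [b"car1", b"car2", b"car3"]          # one generation of packets
coded = encode(gen, [True, True, True])
print(recover_missing(coded, [gen[0], gen[2]]))   # b'car2'
```

Grouping packets into generations bounds how many must be collected before decoding; mixing sets extend combinations across adjacent generations to improve reliability.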
Full Paper

IJCST/42/3/B-1541
123 A Novel Approach for Highly Decentralized Information Accountability for Data Sharing in Cloud

Velpula.Nagi Reddy, M Gnana Vardhan

Abstract

Cloud computing is a technology which uses the Internet and remote servers to store data and applications, and which provides on-demand services. Many users want to do business with their data using the cloud, but they fear losing it: before storing data in the cloud, an owner must get confirmation that the data will be safe there. To solve this problem, in this paper we provide an effective mechanism to track the usage of data through accountability. Accountability is the checking of authorization policies and is important for transparent data access. We provide automatic logging mechanisms using JAR programming, which improve the security and privacy of data in the cloud. Using this mechanism, the data owner can know whether his or her data is handled according to the requirements or the service level agreement.
Full Paper

IJCST/42/3/B-1542
124 A Literature Survey on Removal of Ambiguities in Stereo Images

Simi Thakur, Gianetan S. Sekhon

Abstract

This paper presents a study of the ambiguities present in stereo images. Stereo vision (stereo meaning "solid" or "three-dimensional" and vision meaning "appearance" or "sight") is the impression of depth that is perceived when a scene is viewed with both eyes. Several ambiguities arise when an algorithm matches the two images in an image set; these ambiguities relate to noise, speed, efficiency, or reliability. This work studies various stereo matching techniques and reviews many methods for resolving the ambiguity problem in stereo matching. Defining the different ambiguities present in stereo images and comparatively analyzing different algorithms are the major contributions of this survey paper.
Full Paper

IJCST/42/3/B-1543
125 Efficiently Measuring Similarities Between Objects in Different Views of Hierarchical Clustering

Mahammad Nazini, MD Roshna, Shaik. Jakeer Hussain

Abstract

Clustering is one of the most interesting and important topics in data mining. The aim of clustering is to find intrinsic structures in data and organize them into meaningful subgroups. The main concept is a similarity/dissimilarity measure taken from multiple viewpoints. Existing algorithms for text mining use a single viewpoint for measuring similarity between objects; their drawback is that the clusters cannot exhibit the complete set of relationships among objects. To overcome this drawback, we propose a new similarity measure, a hierarchical multi-viewpoint-based similarity measure, to ensure the clusters show all relationships among objects. We also propose two clustering methods. The empirical study revealed that the hypothesis "multi-viewpoint similarity can bring about more informative relationships among objects and thus more meaningful clusters" is correct, and the measure can be used in real-time applications where text documents are searched or processed frequently.
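The contrast between a single-viewpoint measure (cosine similarity, taken from the origin) and a multi-viewpoint one can be sketched as follows; this is a simplified reading of the multi-viewpoint idea, and both the averaging formula and the toy vectors are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

def cosine(a, b):
    """Single-viewpoint similarity: the angle seen from the origin."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def multi_viewpoint_sim(a, b, others):
    """Similarity of a and b as seen from each other document h, via
    the dot product of the difference vectors (a - h) and (b - h),
    averaged over all viewpoints h."""
    return np.mean([(a - h) @ (b - h) for h in others])

docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
a, b, h = docs
print(round(cosine(a, b), 3))                    # 0.994
print(round(multi_viewpoint_sim(a, b, [h]), 3))  # 1.8
```

Using other documents as reference points lets the measure reflect how a pair relates to the rest of the corpus, not just to a fixed origin.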
Full Paper

IJCST/42/3/B-1544
126 Autonomic Fault Tolerant Framework for Web Applications

Dhananjaya Gupta, Anju Bala

Abstract

Cloud computing is an emerging practice that offers more flexibility in infrastructure and lower cost than traditional computing models. Reliability and availability in cloud computing are critical requirements to ensure correct and continuous system operation even in the presence of failures, so fault tolerance mechanisms need to be developed to deal with faults that can affect normal operation in the cloud environment. Because of their growing demand, web applications are an area of concern in the IT security community. In this paper, a framework providing fault tolerance for web-based applications in a cloud environment has been designed. Incoming requests for the web applications are balanced between the web servers, and the generated data is kept in multiple clusters that maintain it in replicated fashion. The storage clusters are continuously monitored for any kind of fault that could halt normal functioning of the system through data loss. The experimental results show that the proposed framework monitors and handles various faults effectively.
Full Paper

IJCST/42/3/B-1545
127 Optimizing the Network Failures by Self-Determining in Independent Direct Acyclic Digraph

Sk.AliMoon, B.V.Praveen Kumar

Abstract

As the Internet takes an increasingly central role in our communications infrastructure, the slow convergence of routing protocols after a network failure becomes a growing problem. To assure fast recovery from link and node failures in IP networks, we present an approach based on directed acyclic graphs (DAGs). We develop algorithms to compute link-independent and node-independent DAGs. The algorithm guarantees that, in a two-vertex-connected network, every edge other than those originating from the root may be used in either of the two node-disjoint DAGs. The algorithms achieve multipath routing, utilize all possible edges, guarantee recovery from single node failures, and reduce the number of overhead bits required in the packet header. Moreover, the use of multiple non-disjoint paths is advantageous for load balancing and for preventing snooping on data, in addition to improving resiliency to multiple link failures.
Full Paper

IJCST/42/3/B-1546
128 A Method for Tamper proofing Images Using DCT Watermarking

Jobin Abraham

Abstract

This paper proposes a blind method for tamper-proofing images using watermarking techniques. The image is subdivided into non-overlapping blocks, which are then sequentially indexed using watermarking. A DCT transformation is applied to the image as well as to the watermark information, yielding a set of coefficients. These transformed parameters are further processed to hide the block index number imperceptibly, so that every segment or region of the image carries a unique number. With slight modifications, the proposed technique is also suitable for fingerprinting images and for covert communication by hiding the actual details.
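The general embed/extract cycle for a DCT-domain index can be sketched as follows; the chosen coefficient position (3, 3), the embedding strength, and the direct overwrite are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def embed_index(block, index, strength=20.0):
    """Hide a block index in one mid-frequency DCT coefficient."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    coeffs[3, 3] = index * strength    # overwrite a mid-band coefficient
    return D.T @ coeffs @ D            # inverse of the orthonormal DCT

def extract_index(block, strength=20.0):
    """Blind extraction: re-transform and read the coefficient back."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    return int(round(coeffs[3, 3] / strength))

block = np.random.default_rng(0).uniform(0, 255, (8, 8))
marked = embed_index(block, index=42)
print(extract_index(marked))    # 42
```

Extraction needs only the marked block and the shared parameters, which is what makes the scheme blind: a region whose recovered index disagrees with its position has been tampered with.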
Full Paper

IJCST/42/3/B-1547
129 Ambiguity Detection Algorithm for Context free Grammar

Saurabh Kumar Jain, Ajay Kumar

Abstract

One way of verifying a grammar is the detection of ambiguity. Unfortunately, the ambiguity problem for context-free grammars is undecidable. Ambiguity in context-free grammars is a recurring problem in language design and parser generation, as well as in applications where grammars are used as models of real-world physical structures. Context-free grammars are widely used but are still hindered by ambiguity. We observe that there is a simple linguistic characterization of the grammar ambiguity problem, which divides it into horizontal and vertical ambiguity. We present a conservative approximation for the ambiguity problem. The proposed methodology has been implemented in Java.
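A conservative approximation can only over-report ambiguity, but for tiny grammars it can also be witnessed directly by counting parse trees up to a depth bound. The brute-force sketch below illustrates the problem being approximated, not the paper's horizontal/vertical method:

```python
from collections import Counter

def derive(grammar, start, max_depth):
    """Terminal strings derivable from `start` with parse trees of
    height <= max_depth, one list entry per distinct tree, so any
    string counted twice witnesses ambiguity.  Exhaustive search:
    feasible only for tiny grammars and small depths."""
    def expand(symbol, depth):
        if symbol not in grammar:        # terminal symbol
            return [symbol]
        if depth == 0:
            return []
        results = []
        for production in grammar[symbol]:
            parts = [""]
            for sym in production:       # cartesian product of choices
                subs = expand(sym, depth - 1)
                parts = [p + s for p in parts for s in subs]
            results.extend(parts)
        return results
    return Counter(expand(start, max_depth))

# E -> E + E | a  is the textbook ambiguous expression grammar.
counts = derive({"E": [["E", "+", "E"], ["a"]]}, "E", 4)
print(counts["a+a+a"])   # 2: two distinct parse trees for one string
```

Such bounded enumeration can only confirm ambiguity, never refute it, which is why practical tools resort to conservative approximations like the one the paper proposes.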
Full Paper

IJCST/42/3/B-1548