INTERNATIONAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY (IJCST)-VOL III ISSUE III, VER. 6, JULY TO SEPTEMBER, 2012


S.No. Research Topic Paper ID
   212 Efficiently Detecting the Active Worm
Yugandhar Tirumani, Gumpula Raju

Abstract

Active worms pose major security threats to the Internet, due to their ability to propagate in an automated fashion as they continuously compromise computers on the Internet. Active worms evolve during their propagation and thus pose great challenges to defending against them. In this paper, we investigate a new class of active worms, referred to as the Camouflaging Worm (C-Worm for short). The C-Worm differs from traditional worms in its ability to intelligently manipulate its scan traffic volume over time. Thereby, the C-Worm camouflages its propagation from existing worm detection systems that analyze the propagation traffic generated by worms. We analyze the characteristics of the C-Worm and conduct a comprehensive comparison between its traffic and non-worm traffic (background traffic). We observe that these two types of traffic are barely distinguishable in the time domain. However, their distinction is clear in the frequency domain, due to the recurring manipulative nature of the C-Worm. Motivated by our observations, we design a novel spectrum-based scheme to detect the C-Worm. Our scheme uses the Power Spectral Density (PSD) distribution of the scan traffic volume and its corresponding Spectral Flatness Measure (SFM) to distinguish the C-Worm traffic from background traffic. Using a comprehensive set of detection metrics and real-world traces as background traffic, we conduct extensive performance evaluations of our proposed spectrum-based detection scheme. The performance data clearly demonstrate that our scheme can effectively detect C-Worm propagation. Furthermore, we show the generality of our spectrum-based scheme in effectively detecting not only the C-Worm but traditional worms as well. In the existing system, traditional worms are greater threats to the Internet and also produce a large amount of overall network traffic; it is very easy to identify such worms with traditional worm detection, as the overall network traffic is increased. In the proposed model, the camouflaging worm is modeled and detected using a spectrum-based approach. The worm targets only vulnerable nodes, so the overall traffic level is not increased. The spectrum-based approach is used to defeat the C-Worm. Modifications are made in designing a worm that increases the CPU load on the system, and its traffic level is compared with that of an application initiation and the C-Worm, which makes the process of execution very clear.
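To make the two detection quantities concrete, here is a minimal sketch, assuming scan traffic is sampled as a counts-per-interval series: the PSD is estimated with a plain periodogram and the SFM is the ratio of the geometric to the arithmetic mean of the PSD. The helper name and the Poisson toy traces are illustrative, not the authors' implementation.

```python
import numpy as np

def spectral_flatness(scan_counts):
    """Spectral Flatness Measure of a scan-traffic series.

    SFM = geometric mean / arithmetic mean of the PSD. Noise-like
    background traffic spreads power across frequencies (larger SFM);
    recurring C-Worm-style traffic manipulation concentrates power at
    the manipulation frequency (markedly smaller SFM).
    """
    x = np.asarray(scan_counts, dtype=float)
    x = x - x.mean()                      # remove the DC component
    psd = np.abs(np.fft.rfft(x)) ** 2     # periodogram estimate of the PSD
    psd = np.maximum(psd[1:], 1e-12)      # drop bin 0, guard the log
    return np.exp(np.mean(np.log(psd))) / psd.mean()

rng = np.random.default_rng(0)
t = np.arange(1024)
background = rng.poisson(100, size=t.size)                        # flat traffic
cworm = rng.poisson(100 + 80 * (np.sin(2 * np.pi * t / 64) > 0))  # modulated
print(spectral_flatness(background), spectral_flatness(cworm))    # former >> latter
```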
Full Paper

IJCST/33/6/
A-1064
   213 An Efficient Algorithm for Reverse Nearest Neighbor Queries
L. Srinivasara Rao, P. Vinaya Kumari, K. Nageswara Rao

Abstract

Reverse nearest neighbor queries are useful in identifying objects that are of significant influence or importance. Existing methods either rely on pre-computation of nearest neighbor distances, do not scale well with high dimensionality, or do not produce exact solutions. In this work, we present a novel algorithm for Incremental and General Evaluation of continuous Reverse Nearest neighbor queries (IGERN, for short). The IGERN algorithm is general in that it is applicable to both continuous monochromatic and bichromatic reverse nearest neighbor queries. This problem arises in a number of applications, such as enhanced 911 services and army strategic planning. A main challenge in these problems is to maintain the most up-to-date query answers as the data set frequently changes over time. Previous algorithms for monochromatic continuous reverse nearest neighbor queries rely mainly on monitoring, in the worst case, six pie regions, whereas IGERN takes a radical approach by monitoring only a single region around the query object. The IGERN algorithm clearly outperforms the state-of-the-art algorithms in monochromatic queries. We also propose a new optimization for the monochromatic IGERN to reduce the number of nearest neighbor searches. Furthermore, a filter-and-refine approach for IGERN is proposed for the continuous evaluation of bichromatic reverse nearest neighbor queries, which is an optimized version of our previous approach. The computational complexity of IGERN is presented in comparison to the state-of-the-art algorithms in the monochromatic and bichromatic cases. In addition, the correctness of IGERN in both the monochromatic and bichromatic cases is proved. Extensive experimental analysis using synthetic and real data sets shows that IGERN is efficient, is scalable, and outperforms previous techniques for continuous reverse nearest neighbor queries.
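For readers new to the query type, the brute-force definition below clarifies what IGERN computes incrementally: a point p answers the monochromatic query at q when q is at least as close to p as p's nearest data point. The `monochromatic_rnn` helper is a hypothetical illustration of the definition, not the paper's algorithm.

```python
import numpy as np

def monochromatic_rnn(points, q):
    """Brute-force monochromatic reverse nearest neighbors of query q."""
    pts = np.asarray(points, dtype=float)
    q = np.asarray(q, dtype=float)
    result = []
    for i, p in enumerate(pts):
        d_q = np.linalg.norm(p - q)                      # distance to the query
        others = np.delete(pts, i, axis=0)
        d_nn = np.linalg.norm(others - p, axis=1).min()  # p's nearest data point
        if d_q <= d_nn:
            result.append(i)
    return result

pts = [(1, 1), (2, 2), (8, 8), (9, 9)]
print(monochromatic_rnn(pts, (0, 0)))  # indices of points whose NN is q
```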
Full Paper

IJCST/33/6/
A-1065
   214 Estimation for Object Access in Large Scale Distributed Systems
P. Venkatrayudu, T. Sudha Rani

Abstract

Large-scale distributed systems provide an attractive, scalable infrastructure for network applications. In such environments there exist large sets of heterogeneous and geographically distributed resources. These resources can be aggregated as a virtual computing platform for executing large-scale scientific applications. Among numerous optional resources, selecting appropriate resources for applications is challenging and affected by many factors. The loosely coupled nature of a large-scale distributed environment makes data access unpredictable and unstable. A slow allocation process may offset the benefit obtained by running a job on a fast node. Besides, the operating condition of a resource provider changes rapidly. The status of job execution and the computing capability of a resource provider need to be considered dynamically. In this paper, we present decentralized, scalable, and efficient resource selection techniques based on accessibility. Our techniques rely only on local, historic observations, so it is possible to keep network overhead tolerable. We show that our estimation techniques are sufficiently accurate to provide a meaningful rank order of nodes based on their accessibility.
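The abstract does not specify the estimator, so the sketch below is only one plausible reading: rank nodes by an exponentially weighted moving average of locally observed access measurements, so recent behavior dominates (matching the claim that a provider's operating condition changes rapidly). The `AccessibilityEstimator` class, its `alpha` parameter, and the throughput values are hypothetical.

```python
from collections import defaultdict

class AccessibilityEstimator:
    """Rank resource nodes from local, historic access observations."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha                 # weight given to new observations
        self.score = defaultdict(float)    # node id -> smoothed accessibility

    def observe(self, node, value):
        # Exponentially weighted moving average favors recent measurements.
        self.score[node] = self.alpha * value + (1 - self.alpha) * self.score[node]

    def rank(self):
        # Highest estimated accessibility first.
        return sorted(self.score, key=self.score.get, reverse=True)

est = AccessibilityEstimator()
for node, mbps in [("A", 4.0), ("B", 9.0), ("A", 6.0), ("B", 2.0)]:
    est.observe(node, mbps)
print(est.rank())
```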
Full Paper

IJCST/33/6/
A-1066
   215 Efficiently Identifying Human Activities
Sri Ramakrishna Vasamsetty, Joshua Daniel Marri, Charan Teja Chavithina

Abstract

Discriminative approaches for human pose estimation model the functional mapping, or conditional distribution, between image features and 3D pose. Learning such multi-modal models in high dimensional spaces, however, is challenging with limited training data; often resulting in over fitting and poor generalization. To address these issues latent variable models (LVMs) have been introduced. Shared LVMs attempt to learn a coherent, typically non-linear, latent space shared by image features and 3D poses, distribution of data in that latent space, and conditional distributions to and from this latent space to carry out inference. Discovering the shared manifold structure can, in itself, however, be challenging. In addition, shared LVMs models are most often non-parametric, requiring the model representation to be a function of the training set size. We present a parametric framework that addresses this shortcoming. In particular, we learn latent spaces, and distributions within them, for image features and 3D poses separately first, and then learn a multi-modal conditional density between these two low- dimensional spaces in the form of Gaussian Mixture Regression. Using our model we can address the issue of over fitting and generalization, since the data is denser in the learned latent space, as well as avoid the necessity of learning a shared manifold for the data.
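The conditional-mean step of Gaussian Mixture Regression is easy to show in one dimension per space; the sketch below assumes an already-fitted joint mixture (weights, means, 2x2 covariances are made up), whereas the paper learns the mixture between two learned latent spaces.

```python
import numpy as np

def gmr_conditional_mean(x, weights, means, covs):
    """E[y | x] under a joint Gaussian mixture with 1-D x and y.

    Each component is re-weighted by how well x fits its marginal and
    contributes the usual Gaussian-conditional regression line, so
    multi-modality in y given x is retained through the re-weighting.
    """
    liks, cond = [], []
    for pi, (mx, my), S in zip(weights, means, covs):
        sxx, sxy = S[0, 0], S[0, 1]
        liks.append(pi * np.exp(-0.5 * (x - mx) ** 2 / sxx)
                    / np.sqrt(2 * np.pi * sxx))
        cond.append(my + sxy / sxx * (x - mx))   # component conditional mean
    w = np.array(liks) / sum(liks)
    return float(w @ np.array(cond))

# Two modes: pose rises with the feature in one regime, falls in another.
weights = [0.5, 0.5]
means = [(-2.0, -1.0), (2.0, 1.5)]
covs = [np.array([[1.0, 0.8], [0.8, 1.0]])] * 2
print(gmr_conditional_mean(-2.0, weights, means, covs))
print(gmr_conditional_mean(2.0, weights, means, covs))
```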
Full Paper

IJCST/33/6/
A-1067
   216 Color Filter Array with Optimal Properties for Noiseless and Noisy Color Image Acquisition
Sri Ramakrishna Vasamsetty, Sowmya Dokku

Abstract

Digital camera sensors are inherently sensitive to the near-infrared part of the light spectrum. In this paper, we propose a general design for color filter arrays (CFAs) that allows the joint capture of visible and near-infrared (NIR) images using a single sensor. We pose the CFA design as a novel spatial-domain optimization problem and provide an efficient iterative procedure that finds (locally) optimal solutions. Numerical experiments confirm the effectiveness of the proposed CFA design, which can simultaneously capture high-quality visible and NIR image pairs. Digital color cameras acquire color images by means of a sensor on which a color filter array (CFA) is overlaid. The Bayer CFA dominates the consumer market, but there has recently been a renewed interest in CFA design. Robustness to noise is often neglected in the design, though it is crucial in practice. In this paper, we present a new periodic CFA which provides, by construction, the optimal tradeoff between robustness to aliasing, chrominance noise, and luminance noise. Moreover, a simple and efficient linear demosaicking algorithm is described, which fully exploits the spectral properties of the CFA.
Full Paper

IJCST/33/6/
A-1068
   217 Enhanced Way of Biometric Signature Verification
Sri Ramakrishna Vasamsetty, M Raj Kiran Vakkalanka

Abstract

An off-line signature verification system attempts to authenticate the identity of an individual by examining his/her handwritten signature, after it has been successfully extracted from, for example, a cheque, a debit or credit card transaction slip, or any other legal document. The questioned signature is typically compared to a model trained from known positive samples, after which the system attempts to label said signature as genuine or fraudulent. The signature is divided into zones using both the Cartesian and polar coordinate systems and two different histogram features are calculated for each zone: histogram of oriented gradients (HOG) and histogram of local binary patterns (LBP). In this paper, we propose an effective method to perform off-line signature verification based on intelligent techniques. Structural features are extracted from the signature’s contour using the Modified Direction Feature (MDF) and its extended version: the Enhanced MDF (EMDF). Two neural network-based techniques and Support Vector Machines (SVMs) were investigated and compared for the process of signature verification. The results are obtained by modeling the signatures with a Support Vector Machine (SVM) trained with genuine samples and random forgeries, while random and simulated forgeries have been used for testing it.
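The feature pipeline in the abstract maps directly onto common library calls; the sketch below assumes the third-party scikit-image and scikit-learn packages and uses random arrays as stand-ins for genuine and forged signature images, so it shows the HOG + LBP + SVM wiring rather than the paper's trained system (the MDF/EMDF structural features have no standard library implementation and are omitted).

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def signature_features(img):
    """Concatenate HOG and uniform-LBP histogram features for one image."""
    h = hog(img, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
    lbp = local_binary_pattern((img * 255).astype(np.uint8),
                               P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, hist])

# Stand-in data: random 64x64 grayscale "signatures" (replace with real
# genuine samples and random forgeries).
rng = np.random.default_rng(1)
imgs = rng.random((20, 64, 64))
labels = np.array([1] * 10 + [0] * 10)   # 1 = genuine, 0 = forgery

X = np.array([signature_features(im) for im in imgs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```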
Full Paper

IJCST/33/6/
A-1069
   218 Predict Collective Behavior in Social Media
Venkata Srinivas Jonnalagadda, A Harish

Abstract

The study of collective behavior is to understand how individuals behave in a social network environment. Oceans of data generated by social media like Facebook, Twitter, Flickr, and YouTube present opportunities and challenges for studying collective behavior on a large scale. In this work, we aim to learn to predict collective behavior in social media. In particular, given information about some individuals, how can we infer the behavior of unobserved individuals in the same network? A social-dimension-based approach is adopted to address the heterogeneity of connections presented in social media. Modularity maximization can be exploited to extract social dimensions; however, with a huge number of actors, the dimensions cannot even be held in memory. The networks in social media are normally of colossal size, involving hundreds of thousands or even millions of actors, and this scale entails scalable learning of models for collective behavior prediction. To address the scalability issue, we propose an effective edge-centric clustering scheme to extract sparse social dimensions. With sparse social dimensions, the social-dimension-based approach can efficiently handle networks of millions of actors while demonstrating prediction performance comparable to other, non-scalable methods.
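The core trick is to cluster edges rather than nodes, since each edge touches only two nodes and so stays sparse. A minimal sketch, assuming scipy and scikit-learn; the toy edge list and the choice of k are made up, and the paper's full scheme includes more machinery than plain k-means over an incidence matrix.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans

# Toy network as an edge list; real social-media graphs have millions of
# actors, which is exactly why edges (sparse) are clustered instead of a
# dense node-similarity matrix.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n_nodes = 6

# Each edge becomes a sparse vector over nodes with 1s at its endpoints.
rows = np.repeat(np.arange(len(edges)), 2)
cols = np.array(edges).ravel()
E = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(edges), n_nodes))

k = 2
edge_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(E)

# A node's sparse social dimensions: the edge clusters it participates in.
dims = np.zeros((n_nodes, k), dtype=int)
for (u, v), c in zip(edges, edge_labels):
    dims[u, c] = dims[v, c] = 1
print(dims)
```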
Full Paper

IJCST/33/6/
A-1070
   219 Secure System in Privacy Networks
D. Prema Sagar, B. Sowjanya Rani

Abstract

Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client's IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes, such as defacing popular web sites. Web site administrators routinely rely on IP-address blocking for disabling access to misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem, we present Nymble, a system in which servers can "blacklist" misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different servers' definitions of misbehavior; servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.
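Nymble's tickets are built on one-way hash chains, which is what makes forward blacklisting possible without backward linkability. A minimal sketch of that idea, assuming two domain-separated SHA-256 calls stand in for the seed-evolution and ticket-derivation functions; this illustrates only the linkability property, not the full protocol (which also involves a pseudonym manager and trapdoors).

```python
import hashlib

def h(tag, data):
    """Domain-separated SHA-256, standing in for the two hash functions."""
    return hashlib.sha256(tag + data).digest()

def nymble_chain(seed, n):
    """Generate n per-period tickets from one secret seed.

    Evolving the seed forward with one hash and deriving the ticket with
    another means a server given the seed of period i can recognize the
    user in periods >= i (blacklisting) but cannot link earlier periods.
    """
    tickets = []
    for _ in range(n):
        tickets.append(h(b"g", seed).hex()[:16])  # the presented nymble
        seed = h(b"f", seed)                      # evolve toward next period
    return tickets

print(nymble_chain(b"user-secret-seed", 4))
```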
Full Paper

IJCST/33/6/
A-1071
   220 Secret Key for Group Members
SK. Abdul Rasheed, V. Kishore, SK. Akbar

Abstract

Message passing from one source to another has become key to many upcoming technologies. A secret sharing scheme is a method that distributes shares of a secret to a set of participants in such a way that only specified groups of participants can reconstruct the secret by pooling their shares. Secret sharing is related to key management and key distribution; these problems are common to all cryptosystems. Secret sharing is also used in multi-party secure protocols. Further, secret sharing schemes have natural applications in access control and cryptographic key initialization. Key transfer protocols rely on a mutually trusted key generation center (KGC) to select session keys and transport session keys to all communication entities secretly.
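The abstract does not name a specific scheme, so here is a self-contained sketch of Shamir's (k, n) threshold scheme, the canonical construction behind such group-key distribution: the secret is the constant term of a random degree-(k-1) polynomial, shares are evaluations, and any k shares recover the secret by Lagrange interpolation at zero.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 over the pooled shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123456789
```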
Full Paper

IJCST/33/6/
A-1072
   221 Cued Click Points for Graphical Password Authentication System using Sound Signature
M. Suresh, B. Sowjanya Rani

Abstract

The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures and audio as passwords. Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. This paper reports the design and evaluation of the Audio-Visual Associative Protocol (AVAP), an authentication scheme relying on the previously proven efficacy of pictorial passwords and on the benefits of non-speech audio, thus exploiting previously untapped human associative-memory strengths.
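A minimal sketch of the click-point idea named in the title, assuming each click is paired with a remembered sound id: clicks are snapped to a tolerance grid so nearby re-clicks verify, and the grid cells plus sound ids are hashed into the stored verifier. The `GRID` size, the `digest` helper, and the sound ids are hypothetical.

```python
import hashlib

GRID = 19  # tolerance square size in pixels; clicks in the same cell match

def digest(click_points, sounds):
    """Hash a sequence of (x, y) clicks and their paired sound ids."""
    parts = [f"{x // GRID},{y // GRID},{s}"
             for (x, y), s in zip(click_points, sounds)]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

enrolled = digest([(120, 340), (87, 55)], ["chime", "drum"])
attempt = digest([(125, 338), (90, 50)], ["chime", "drum"])  # close clicks
print(enrolled == attempt)  # True: both fall in the same grid cells
```

A deployed scheme would need a discretization that is robust at cell boundaries (a click near a grid edge can fall into the neighboring cell), which is why published click-based schemes use centered or robust discretization rather than a fixed grid.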
Full Paper

IJCST/33/6/
A-1073
   222 Dynamic Channel Allocation for Wireless Cell-Based Multimedia Broadcast Multicast Service (MBMS)
K. Rajakumari, S. Venugopal

Abstract

Multimedia Broadcast Multicast Service (MBMS) delivers multicast content in the Universal Mobile Telecommunications System (UMTS). In MBMS, the common logical channel is enabled to serve multiple MBMS calls at the same time. Use of the common logical channel may cause interference to the dedicated logical channels serving traditional calls. To more efficiently utilize the radio resource to serve both traditional and MBMS calls, this paper proposes two channel allocation algorithms: Reserved Resource for Multicasting (RRM) and Unreserved Resource for Multicasting (URM). We propose analytic models and conduct simulation experiments to investigate the user Satisfaction Indication (SI) for the two algorithms.
Full Paper

IJCST/33/6/
A-1074
   223 File Replication and Consistency Maintenance Techniques for Achieving High System Performance in P2P Systems
N Nagasubrahmanyeswari, Gangadhara Rao Dasari

Abstract

P2P is a popular technology used for file sharing. File replication and consistency maintenance are the methods used in P2P for elevated system performance. File replication methods choose replica nodes without considering consistency maintenance, which may lead to high overhead for redundant file replications and consistency maintenance. Consistency maintenance methods update files without considering file replication dynamism, which may not guarantee the accuracy of replica consistency. Hence there is a need to consider consistency maintenance during file replication to achieve high performance and high availability. When data files are replicated at many nodes, consistency must be maintained among the nodes. In this paper we point out different replication strategies that are applied in P2P systems, followed by consistency maintenance techniques intended for high performance and high availability of data. Finally, we explore a combined method of file replication and consistency maintenance.
Full Paper

IJCST/33/6/
A-1075
   224 Online Assessment and Learning Process Using a Scalable System in Smart Spaces with Web Service Technology
Nallareddy Venkateswara Reddy, G. Subba Lakshmi, Ch. Raja Jacob

Abstract

In this project we present a real-time interactive virtual learning system with tele-education and e-assessment, an important approach in distance learning. However, most current systems fail to meet new challenges in extensibility and scalability, which mainly lie with three issues. First, open system architecture is required to better support the integration of increasing human-computer interfaces and personal mobile devices in the classroom. Second, the learning system should facilitate opening its interfaces, which will help easy deployment that copes with different circumstances and allows other learning systems to talk to each other. Third, problems emerge in binding existing classroom systems together in different places or even different countries, such as tackling system intercommunication and distant intercultural learning in different languages. We have designed and implemented generic support for assessment that is based on assignments that students submit as electronic documents. In addition to assignments that are graded by teachers, we also support assignments that can be automatically tested and evaluated, e.g., assignments in programming languages or other formal notations.
Full Paper

IJCST/33/6/
A-1076
   225 Efficiently Reducing the Firewall Rules
K. Venkata Nagakiran, K. Mallikarjuna Mallu, P.P.S. Naik

Abstract

A firewall is a security guard placed between a private network and the outside Internet that monitors all incoming and outgoing packets. The function of a firewall is to examine every packet and decide whether to accept or discard it based upon the firewall's policy. This policy is specified as a sequence of (possibly conflicting) rules. When a packet comes to a firewall, the firewall searches for the first rule that the packet matches and executes the decision of that rule. With the explosive growth of Internet-based applications and malicious attacks, the number of rules in firewalls has been increasing rapidly, which consequently degrades network performance and throughput. In this paper, we propose Firewall Compressor, a framework that can significantly reduce the number of rules in a firewall while keeping the semantics of the firewall unchanged. We also consider a classical algorithm that we adapted to the firewall domain, which we call "Geometric Efficient Matching" (GEM). The GEM algorithm enjoys logarithmic matching-time performance. However, the algorithm's theoretical worst-case space complexity is O(n^4) for a rule base with n rules, and this perceived high space complexity has limited its adoption.
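Any compression or matching scheme must preserve first-match semantics, sketched below with hypothetical rules. Prefix-string matching stands in for the real IP-range and port predicates, and the shadowed rule shows the kind of redundancy a compressor can remove without changing any decision.

```python
# Each rule: (src-prefix, dst-prefix, action); the first matching rule wins.
RULES = [
    ("10.0.0.",  "*",        "discard"),
    ("10.",      "192.168.", "accept"),
    ("10.0.0.",  "192.168.", "accept"),   # shadowed by rule 0: removable
    ("*",        "*",        "discard"),  # default rule
]

def match(rule, src, dst):
    rs, rd, _ = rule
    return (rs == "*" or src.startswith(rs)) and \
           (rd == "*" or dst.startswith(rd))

def decide(rules, src, dst):
    """First-match semantics: execute the decision of the first rule hit."""
    for rule in rules:
        if match(rule, src, dst):
            return rule[2]

print(decide(RULES, "10.0.0.7", "192.168.1.1"))  # discard (rule 0 wins)
```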
Full Paper

IJCST/33/6/
A-1077
   226 Comparison of Apriori and k-Means for Text Mining
T. Satish, U. Ramkumar

Abstract

Text mining is a burgeoning new field that attempts to glean meaningful information from natural-language text. It may be loosely characterized as the process of analyzing text to extract information that is useful for particular purposes. Compared with the kind of data stored in databases, text is unstructured, amorphous, and difficult to deal with algorithmically. Nevertheless, in modern culture, text is the most common vehicle for the formal exchange of information. The field of text mining usually deals with texts whose function is the communication of factual information or opinions, and the motivation for trying to extract information from such text automatically is compelling, even if success is only partial. For our implementation of text mining we use two basic algorithms, Apriori and k-means. We implement these two algorithms and compare them on the basis of time complexity and space complexity, visualizing the results as graphs, and generate association rules for text mining using these algorithms.
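The Apriori half of the comparison is the standard level-wise frequent-itemset search, sketched below over documents treated as bags of words (the documents and support threshold are made up). The k-means half would run on the same documents encoded as term-frequency vectors; only the Apriori side is shown.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent-itemset mining over bag-of-words documents."""
    items = {i for t in transactions for i in t}
    frequent, k = {}, 1
    current = [frozenset([i]) for i in sorted(items)]
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        current = list({a | b for a, b in combinations(level, 2)
                        if len(a | b) == k + 1})
        k += 1
    return frequent

docs = [{"text", "mining", "data"}, {"text", "data"},
        {"mining", "data"}, {"text", "mining"}]
for itemset, n in apriori(docs, min_support=2).items():
    print(set(itemset), n)
```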
Full Paper

IJCST/33/6/
A-1078
   227 Cloud Storage in Maintaining Integrity Proof in Real Data
Aparna Allada, Vaddimukkala Nagabushanam

Abstract

One of the important concerns that needs to be addressed is to assure the customer of the integrity, i.e., correctness, of his data in the cloud. As the data is physically not accessible to the user, the cloud should provide a way for the user to check if the integrity of his data is maintained or is compromised. In this paper we provide a scheme which gives a proof of data integrity in the cloud which the customer can employ to check the correctness of his data in the cloud. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service Level Agreement (SLA). This scheme ensures that the storage at the client side is minimal, which will be beneficial for thin clients.
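One common way such proofs work is challenge-response over precomputed keyed MACs of file blocks, so the client keeps only a few small triples while the cloud must touch the actual data to answer. The sketch below is that generic pattern under simplifying assumptions (fixed block size, demo block selection), not the paper's specific scheme.

```python
import hashlib, hmac, os

BLOCK = 4096

def precompute_proofs(path, keys):
    """Client setup: one keyed MAC per (key, block) challenge.

    The client stores only (key, block index, expected MAC) triples,
    keeping client-side storage minimal, as the thin-client goal requires.
    """
    n_blocks = max(1, os.path.getsize(path) // BLOCK)
    proofs = []
    with open(path, "rb") as f:
        for i, key in enumerate(keys):
            idx = i % n_blocks   # demo; real schemes pick secret random blocks
            f.seek(idx * BLOCK)
            mac = hmac.new(key, f.read(BLOCK), hashlib.sha256).hexdigest()
            proofs.append((key, idx, mac))
    return proofs

def cloud_respond(path, key, idx):
    """Cloud side: recompute the MAC over the challenged block."""
    with open(path, "rb") as f:
        f.seek(idx * BLOCK)
        return hmac.new(key, f.read(BLOCK), hashlib.sha256).hexdigest()

# Demo: write a small file, precompute two challenges, verify one.
with open("data.bin", "wb") as f:
    f.write(os.urandom(3 * BLOCK))
proofs = precompute_proofs("data.bin", [b"k1", b"k2"])
key, idx, expected = proofs[0]
print(cloud_respond("data.bin", key, idx) == expected)  # True if intact
```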
Full Paper

IJCST/33/6/
A-1079
   228 Efficiently Improving Associations Among Items & Weakness in Cluster-TMCM Algorithm
B. Yedukondalu, G. Srinivasa Rao

Abstract

Most existing clustering approaches concentrate on purely numerical or categorical data only, but not both. In general, it is a nontrivial task to perform clustering on mixed data composed of numerical and categorical attributes, because there exists an awkward gap between the similarity metrics for categorical and numerical data. In this paper, a method based on the idea of exploring the relationships among categorical attribute values is presented. This method defines the similarity among items of categorical attributes based on the idea of co-occurrence. All categorical values are converted to numeric values according to this similarity, so that all attributes contain only numeric values. Since all attributes then hold a homogeneous type of value, existing clustering algorithms can be applied to group these mixed types of data without difficulty. Nevertheless, most existing clustering algorithms have limitations or weaknesses in some way. In this paper, a two-step method is applied to avoid these weaknesses. In the first step, the HAC (hierarchical agglomerative clustering) [3] algorithm is adopted to cluster the original dataset into some subsets. The subsets formed in this step, with additional features added, are chosen as the inputs to k-means in the next step. Since every subset may contain several data points, applying the chosen subsets as the initial set of clusters in the k-means clustering algorithm is a better solution than selecting individual data points. Another benefit of applying this strategy is to reduce the influence of outliers, since outliers will be smoothed by these added features. The results show that this proposed method is a feasible solution for clustering mixed numeric and categorical data.
Full Paper

IJCST/33/6/
A-1080
   229 Data Publishing Privacy Measures: Closeness
B. Srinivas, K. T. V. Subbarao, M. Bala Krishna

Abstract

Government agencies and other organizations often need to publish microdata, e.g., medical data or census data, for research and other purposes. Typically, such data is stored in a table, and each record (row) corresponds to one individual. Each record has a number of attributes, which can be divided into the following three categories: (1) attributes that clearly identify individuals, known as explicit identifiers, e.g., Social Security Number; (2) attributes whose values, when taken together, can potentially identify an individual, known as quasi-identifiers, e.g., Zip code, Birth date, and Gender; and (3) attributes that are considered sensitive, such as Disease and Salary. When releasing microdata, it is necessary to prevent the sensitive information of the individuals from being disclosed. We propose a new notion of privacy called "closeness". We first present the base model t-closeness, which requires that the distribution of a sensitive attribute in any equivalence class be close to the distribution of the attribute in the overall table (i.e., the distance between the two distributions should be no more than a threshold t). We then propose a more flexible privacy model called (n, t)-closeness that offers higher utility. We describe our desiderata for designing a distance measure between two probability distributions and present two distance measures. We discuss the rationale for using closeness as a privacy measure and illustrate its advantages through examples and experiments.
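The t-closeness definition reduces to one distribution comparison per equivalence class; the check below uses total variation distance to keep the sketch short, whereas the paper's desiderata favor measures such as Earth Mover's Distance that respect semantic closeness of sensitive values. The toy classes are made up.

```python
from collections import Counter

def variational_distance(p, q):
    """Total variation distance between two discrete distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

def distribution(values):
    n = len(values)
    return {k: c / n for k, c in Counter(values).items()}

def satisfies_t_closeness(classes, t):
    """Every class's sensitive-value distribution must be within t of
    the distribution over the whole table."""
    table = distribution([v for cls in classes for v in cls])
    return all(variational_distance(distribution(cls), table) <= t
               for cls in classes)

# Equivalence classes of the sensitive attribute "Disease".
classes = [["flu", "flu", "cancer"], ["flu", "cancer", "cancer"]]
print(satisfies_t_closeness(classes, t=0.2))  # True: each class is 1/6 away
```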
Full Paper

IJCST/33/6/
A-1081
   230 Efficiently Reducing the Relationships among the Database Tables using Markov Chain and Diffusion Map
G. Tatayyanaidu, K. T. V. Subba Rao, M. Bala Krishna

Abstract

When database tables, or nodes in a graph, contain more than one relationship, complexity arises. This paper proposes a link-analysis-based technique for discovering relationships existing between elements of a relational database or, more generally, a graph. More specifically, this work is based on a random walk through the database defining a Markov chain having as many states as elements in the database. Suppose, for instance, we are interested in analyzing the relationships between elements contained in two different tables of a relational database. To this end, a two-step procedure is developed. First, a much smaller, reduced Markov chain, containing only the elements of interest (typically the elements contained in the two tables) and preserving the main characteristics of the initial chain, is extracted by stochastic complementation. Stochastic complementation considerably reduces the original graph and allows the analysis to focus on the elements of interest, without having to define a state of the Markov chain for each element of the relational database. An efficient algorithm for extracting the reduced Markov chain from the large, sparse Markov chain representing the database is proposed. Then, the reduced chain is analyzed by, for instance, projecting the states in the subspace spanned by the right eigenvectors of the transition matrix, called the basic diffusion map.
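Stochastic complementation has a closed form that a dense toy example makes concrete: partition the transition matrix into kept and dropped states and fold the dropped states' dynamics back into the kept block. The 4-state chain below is made up; the paper's algorithm works on large sparse chains rather than this direct solve.

```python
import numpy as np

def stochastic_complement(P, keep):
    """Reduce a Markov chain to the states in `keep`.

    With P partitioned as [[A, B], [C, D]] (A = kept states), the
    stochastic complement S = A + B (I - D)^{-1} C is itself a transition
    matrix over the kept states, preserving the chain's behavior there.
    """
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(P.shape[0]), keep)
    A = P[np.ix_(keep, keep)]
    B = P[np.ix_(keep, drop)]
    C = P[np.ix_(drop, keep)]
    D = P[np.ix_(drop, drop)]
    return A + B @ np.linalg.solve(np.eye(len(drop)) - D, C)

# Random-walk transition matrix on a 4-element toy "database graph".
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.5, 0.5, 0.0]])
S = stochastic_complement(P, keep=[0, 3])   # the elements of interest
print(S, S.sum(axis=1))                     # rows still sum to 1
```

The second step of the abstract would then eigendecompose S to project the kept states into the basic diffusion map.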
Full Paper

IJCST/33/6/
A-1082
   231 Energy-Efficient False Report Detection Algorithm in Wireless Sensor Networks
T. N. V. S Praveen, K. T. V. Subbarao

Abstract

Intruders can inject false reports via compromised nodes and launch DoS attacks against legitimate reports in wireless sensor networks (WSNs). For many applications in wireless sensor networks, users may want to continuously extract data from the networks for later analysis. In this paper, an energy-efficient sleep/awake scheduling algorithm is proposed along with a dynamic en-route filtering scheme. As the sensor nodes are allowed to sleep periodically under certain conditions, the energy consumption of all nodes, including the cluster head, is reduced. The cluster head does not need to collect data from all the sensor nodes; it needs only the cluster members that are awake. The dynamic en-route filtering scheme addresses both false report injection and DoS attacks in wireless sensor networks. Each node sends its key to forwarding nodes and then discloses its keys, letting the forwarding nodes verify its reports. In Hill Climbing, the key dissemination approach ensures that nodes closer to data sources have stronger filtering capacity, and fabricated reports are dropped en route without symmetric key sharing; thus the approach achieves stronger security protection. Since requiring the cluster head to collect data from all the sensor nodes would keep it overloaded all the time, the sleep/awake scheduling algorithm is used to avoid this problem.
Full Paper

IJCST/33/6/
A-1083
   232 Dynamically Reschedule the Trains in a Railway Traffic Network
Bala Ramesh Vanukuru, G. B. V. Padmanadh

Abstract

Railway is a very important mode of transportation for passengers and freight, due to its peculiar characteristics. Few other transportation modes combine dedicated infrastructure connecting cities and places of interest point to point, high operational speed, high reliability, cost-effective operations, high energy efficiency, and a very high safety rate. The aim of railway traffic control is to ensure safety, regularity, reliability of service, and punctuality of train operations. The railway business strongly needs to improve the quality of service and to accommodate growth while reducing costs. Punctuality analysis represents an important measure of rail operation performance and is often used as a standard performance indicator. The development of new strategies for railway traffic control has experienced increasing interest due to the expected growth of traffic demand and the limited possibilities of enhancing the infrastructure, which increase the need for efficient use of resources and the pressure on traffic controllers. Improving efficiency requires advanced decision support tools that accurately monitor the current train positions and dynamics and other operating conditions, predict potential conflicts, and reschedule trains in real time so that consecutive delays are minimized. We design a model predictive controller based on measurements of the actual train positions. The core of the model predictive control approach is the railway traffic model, for which a switching max-plus linear system is proposed. If the model is affine in the controls, the optimization problem can be recast as a mixed-integer linear programming problem. To this end, we present a permutation-based algorithm to model the rescheduling of trains running on the same track.
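The max-plus model mentioned here propagates event times through the recursion x(k+1) = A (x) x(k), where "addition" is max and "multiplication" is plus. A minimal sketch with a made-up two-station timetable; the paper's switching and permutation machinery for rescheduling is not shown.

```python
import numpy as np

def maxplus_product(A, x):
    """One step of the max-plus recursion x(k+1) = A (x) x(k).

    A[i][j] is the running time from event j to event i (-inf if train i
    does not wait for j); departure i in cycle k+1 happens after the
    latest enabling event plus the corresponding travel time.
    """
    return np.array([max(A[i, j] + x[j] for j in range(len(x)))
                     for i in range(A.shape[0])])

# Two-station toy timetable: each departure waits for the previous
# departure at its own station and for the connecting train.
A = np.array([[3.0, 5.0],
              [4.0, 2.0]])
x = np.array([0.0, 1.0])          # departure times in cycle k
for _ in range(3):
    x = maxplus_product(A, x)
    print(x)                       # departure times in later cycles
```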
Full Paper

IJCST/33/6/
A-1084
   233 Spectrum-Based Scheme for Detecting C-Worm Propagation
M. N. V. Kiran Babu, P. Nageswara Rao, K. Nageswara Rao

Abstract

Worms create security threats to the Internet; they propagate in an automated fashion, continuously compromising computers on the Internet. In this project we detect a new type of worm, named the Camouflaging Worm (C-Worm). It cleverly manipulates its scan traffic volume over time. We analyze the characteristics of the C-Worm and conduct a broad comparison between its traffic and non-worm traffic. We observe that these two types of traffic are barely distinguishable in the time domain, while their distinction is clear in the frequency domain. We design a novel spectrum-based scheme to detect the C-Worm. Our scheme uses the Power Spectral Density (PSD) distribution of the scan traffic volume and its corresponding Spectral Flatness Measure (SFM) to differentiate the C-Worm traffic from background traffic. Our scheme can effectively detect C-Worm propagation as well as traditional worms.
Full Paper

IJCST/33/6/
A-1085
   234 Efficiently Detecting the Leakages in Data
Syed Sajida Bhanu, Deevi Hari Krishna

Abstract

The data leakage problem can be defined as any unauthorized access of data due to an improper implementation or inadequacy of a technology, process, or policy. The "unauthorized access" described above can be the result of malicious, intentional, or inadvertent data leakage, or of a bad business/technology process, by an internal or external user. Traditionally, this leakage of data is handled by a watermarking technique, which requires modification of the data. If the watermarked copy is found at some unauthorized site, then the distributor can claim his ownership. To overcome the disadvantages of using watermarks [2], data allocation strategies are used to improve the probability of identifying guilty third parties. In this project, we implement and analyze a guilt model that detects the agents using allocation strategies without modifying the original data. A guilty agent is one who leaks a portion of the distributed data. The idea is to distribute the data intelligently to agents, based on sample data requests and explicit data requests, in order to improve the chance of detecting guilty agents. The algorithms implemented using fake objects improve the distributor's chance of detecting guilty agents. It is observed that by minimizing the sum objective, the chance of detecting guilty agents increases. We also developed a framework for generating fake objects.
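A simplified guilt-probability computation in the spirit of the allocation-strategy idea: assume each leaked object was independently guessed with probability p, and otherwise split the blame evenly among the agents that received it. The exact model in the underlying work may differ in details; the agent sets, leaked set, and p below are made up.

```python
def guilt_probability(leaked, agent_sets, i, p=0.2):
    """Pr{agent i is guilty | leaked set}, under the simplified model."""
    prob_innocent = 1.0
    for t in leaked:
        if t in agent_sets[i]:
            holders = sum(t in s for s in agent_sets)  # agents who got t
            prob_innocent *= 1 - (1 - p) / holders
    return 1 - prob_innocent

agents = [{"r1", "r2", "r3"}, {"r1", "r4"}, {"r2", "r4"}]
leaked = {"r1", "r2"}
for i in range(len(agents)):
    print(i, round(guilt_probability(leaked, agents, i), 3))
```

Allocation strategies (and fake objects) aim to make these posterior probabilities sharply different across agents, which is what makes the leaker identifiable.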
Full Paper

IJCST/33/6/
A-1086
   235 Discovering Business Rules Using MSCM Method
Gujji Srinivas Reddy, Nanna Babu. Palla, Pradeep Kumar. Kanakala

Abstract

Business rules represent output specific to a business problem, i.e., what we should do for the particular application. Actionable Knowledge Discovery (AKD) is a closed optimization problem-solving process, from problem definition and model design to actionable pattern discovery, and is designed to deliver apt business rules that can be integrated with business processes and technical aspects. To support such processes, we correspondingly propose, formalize, and illustrate a generic AAR model design: Multisource Combined-Mining-based (MSCM). In this paper, we present a view of actionable knowledge discovery (AKD) from the technical and decision-making perspectives. A real-life case study of MSCM-based AAR is demonstrated to extract debt-prevention patterns from social security data. Substantial experiments show that the proposed model design is sufficiently general, flexible, and practical to tackle many complex problems and applications by extracting actionable deliverables for instant decision making.
Full Paper

IJCST/33/6/
A-1087
   236 Solving Optimal Bandwidth Assignment Problem using Polynomial Reduction into Subset Sum Problem
MVV Choudhary, GBV Padmanadh

Abstract

One of the ways communication can be characterized is by the number of receivers targeted by a sender. Streaming video can be used for live or recorded events, and live or on-demand streaming is time critical. Multiple description coding (MDC) has been widely used in media streaming to address the bandwidth heterogeneity issue: the video source encodes data into multiple descriptions, and at the receiver end, the streaming quality is proportional to the number of descriptions received. This paper considers the problem of optimal bandwidth assignment in on-demand streaming: how to optimally assign description bandwidths for MDC video streaming to a large group, so that the receivers' heterogeneous bandwidth requirements can be best satisfied. The optimum bandwidth allocation problem is an optimization problem; we show that it is NP-hard by finding a polynomial reduction to it from the subset sum problem, and we propose algorithms to address it. Simulation results show that our approach achieves much better user satisfaction than other assignment methods and closely matches the optimum based on exhaustive search.
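The subset sum connection is visible from the receiver's side: given fixed description rates, the best a user can do is the subset of descriptions whose total rate best fills their bandwidth, which is exactly subset sum. A dynamic-programming sketch with made-up rates (this illustrates the reduction's source problem, not the paper's assignment algorithms, which choose the rates themselves):

```python
def best_description_subset(description_rates, capacity):
    """Pick MDC descriptions whose total rate best fills a user's bandwidth.

    Classic subset-sum DP over integer rates (e.g., kbps): dp[s] remembers
    one subset reaching total rate s; return the reachable total closest
    to the capacity from below.
    """
    dp = {0: []}
    for rate in description_rates:
        for s, subset in list(dp.items()):
            t = s + rate
            if t <= capacity and t not in dp:
                dp[t] = subset + [rate]
    best = max(dp)
    return best, dp[best]

rates = [300, 450, 700, 1100]          # description bandwidths in kbps
print(best_description_subset(rates, capacity=1500))  # (1450, [300, 450, 700])
```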
Full Paper

IJCST/33/6/
A-1088
   237 Minimizing Additive Distortion in Steganography Using Syndrome-Trellis Codes
Kumar Vasantha, Sekhar Naidu Yandrapu

Abstract

Most practical steganographic algorithms for empirical covers embed messages by minimizing a sum of per-pixel distortions. Current near-optimal codes for this minimization problem are limited to a binary embedding operation. In this paper, we extend this work to embedding operations of larger cardinality. The need for embedding changes of larger amplitude and the merit of this construction are confirmed experimentally by implementing an adaptive embedding algorithm for digital images and comparing its security to other schemes. This paper proposes a complete practical method for minimizing additive distortion in steganography with a general embedding operation. Let every possible value of every stego element be assigned a scalar expressing the distortion of an embedding change done by replacing the cover element by this value. The total distortion is assumed to be a sum of per-element distortions. Both the payload-limited sender (minimizing the total distortion while embedding a fixed payload) and the distortion-limited sender (maximizing the payload while introducing a fixed total distortion) are considered. Without any loss of performance, the non-binary case is decomposed into several binary cases by replacing individual bits in cover elements. The binary case is approached using a novel syndrome-coding scheme based on dual convolutional codes equipped with the Viterbi algorithm. Most current coding schemes used in steganography (matrix embedding, wet paper codes, etc.) and many new ones can be implemented using this framework. We report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel. The practical merit of this approach is validated by constructing and testing adaptive embedding schemes for digital images in raster and transform domains.
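Syndrome coding is easiest to see in its textbook special case, matrix embedding with the [7,4] Hamming parity-check matrix: the recipient extracts the message as a syndrome, and the sender flips at most one of seven cover bits to force the desired syndrome. The syndrome-trellis codes above generalize this with convolutional codes and Viterbi decoding; the sketch below is the special case, not the paper's construction.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column i is the binary
# representation of i+1, so a syndrome mismatch directly names the single
# cover position to flip.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover_bits, msg_bits):
    """Matrix embedding: hide 3 bits in 7 LSBs with at most one change."""
    x = np.array(cover_bits) % 2
    s = (H @ x) % 2 ^ np.array(msg_bits)    # mismatch syndrome
    pos = int("".join(map(str, s)), 2)      # 0 means no change needed
    if pos:
        x[pos - 1] ^= 1
    return x

def extract(stego_bits):
    return (H @ np.array(stego_bits)) % 2   # message = syndrome

cover = [1, 0, 1, 1, 0, 0, 1]
msg = [1, 0, 1]
stego = embed(cover, msg)
print(stego, extract(stego))   # message recovered; at most 1 bit changed
```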
Full Paper

IJCST/33/6/
A-1089
   238 Scalable Approach for Mesh Network Propagation
R. Vidruma, Deevi Hari Krishna

Abstract

Mesh networking (topology) is a type of networking where each node must not only capture and disseminate its own data, but also serve as a relay for other nodes; that is, it must collaborate to propagate the data in the network. A mesh network can be designed using a flooding technique or a routing technique. When using a routing technique, the message propagates along a path, by hopping from node to node until the destination is reached. To ensure all its paths' availability, a routing network must allow for continuous connections and reconfiguration around broken or blocked paths, using self-healing algorithms. A mesh network whose nodes are all connected to each other is a fully connected network. Mesh networks can be seen as one type of ad hoc network. Mobile ad hoc networks (MANETs) and mesh networks are therefore closely related, but MANETs also have to deal with the problems introduced by the mobility of the nodes. We present a scalable approach for dissemination that exploits all the shortest paths between a pair of nodes and improves the QoS. Despite the presence of multiple shortest paths in a system, we show that these paths cannot be exploited by spreading the messages over the paths in a simple round-robin manner; nodes along one of these paths will always handle more messages than the nodes along the other paths. We characterize the set of shortest paths between a pair of nodes in regular mesh topologies and derive rules, using this characterization, to effectively spread the messages over all the available paths. These rules ensure that all the nodes that are at the same distance from the source handle roughly the same number of messages. Modeling the multi-hop propagation in the mesh topology as a multistage queuing network, we present simulation results from a variety of scenarios that include link failures and propagation irregularities to reflect real-world characteristics. Our method achieves improved QoS in all these scenarios.
Full Paper

IJCST/33/6/
A-1090
   239 Software Defect Prediction from Historical Software Data
M. Krishna Veni, Y. Ramu

Abstract

Software testing consumes a major percentage of project cost, so researchers focus on how to minimize the cost of testing in order to minimize the cost of the project. Software defect prediction is a method that predicts defects from a historical database. Data mining techniques are used to predict software defects from historical databases. This paper describes a framework to predict software defects from the historical database and also presents a one-pass data mining algorithm used to find rules for predicting software defects. The experimental results show that the one-pass algorithm generates rules for software defect prediction in a reasonable amount of time and with better performance.
Full Paper

IJCST/33/6/
A-1091
   240 Design and Implementation of Static Random Access Memory Cell
Reshmi Maity, Niladri Pratap Maity

Abstract

In this paper the basic building block of a Static Random Access Memory (SRAM) has been designed using Very high speed integrated circuit Hardware Description Language (VHDL) structural architecture, with the goal of very low power consumption, making such cells well suited for use as cache memory in computers and other applications. The design is simulated using Xilinx VHDL tools and implemented on a Virtex-V Field Programmable Gate Array (FPGA). The SRAM 1-bit cell has two addresses, x and y (two-dimensional addressing mode), a data input din, a read/write (rw) input, which when asserted low writes into the cell and when asserted high reads from the cell, and a data output dout. During a write operation, the output of the cell remains unchanged from its previous state, and during a read operation, the previously written input appears as dout. The above design is advantageous as it consumes very low power during operation due to the presence of the latches.
Full Paper

IJCST/33/6/
A-1092
   241 Analysis of Delay-Optimal Policy Using Novel Hierarchical Routing with Self-Organization in Multi-Hop Wireless Networks
P. Gopinadh, T. Y. Srinivasa Rao, Dr. P. Harini

Abstract

Multi-hop, or ad hoc, wireless networks use two or more wireless hops to convey information from source to destination. In this paper we analyze the delay of multi-hop networks by developing a new queue-grouping technique to handle the complex correlations of the service process resulting from the multi-hop nature of the flows and their mutual sharing of the wireless medium. For the tandem queue network, where the delay-optimal policy (DOP) is known, the expected delay of the optimal policy numerically coincides with the lower bound. The lower-bound analysis provides useful insights into the design and analysis of optimal or nearly optimal scheduling policies. We conduct extensive numerical studies to demonstrate that one can design policies whose average delay performance is close to the lower bound computed by the presented techniques, and we propose a novel hierarchical routing protocol (NHR) that addresses network self-organization (SON) and redundancy issues. Initial analysis shows promising results for the proposed protocol over multi-hop networks.
Full Paper

IJCST/33/6/
A-1093
   242 Performance Analysis of Various Encryption Algorithms for Data Communication
S. Padmapriya, S. Saravanapriya, D. Jayachitra

Abstract

Communication without security is not reliable in today's networking world. Encryption algorithms provide security to the users in the network. One of the tasks of the cryptosystem is to analyse the merits and demerits and select the algorithms that best address the problem to be solved. On the other side, those algorithms consume a significant amount of computing resources such as CPU time and memory. This paper provides an evaluation of four of the most common encryption algorithms. A comparison has been conducted for those encryption algorithms at different settings for each algorithm, such as CPU time, memory, and avalanche effect.
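A minimal benchmarking harness along the lines of the comparison described, assuming the third-party `cryptography` package: it times one bulk AES-256-CBC encryption with `time.process_time()` and measures the avalanche effect by flipping a single plaintext bit. Extending the loop over other ciphers (e.g., 3DES) and buffer sizes reproduces the paper's comparison setting; the specific key size and buffer size here are arbitrary.

```python
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(key, iv, data):
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(data) + enc.finalize()

key, iv = os.urandom(32), os.urandom(16)
plain = bytearray(os.urandom(1 << 20))   # 1 MiB test buffer

# CPU time for one bulk encryption.
t0 = time.process_time()
c1 = encrypt(key, iv, bytes(plain))
print("AES-256-CBC:", time.process_time() - t0, "s")

# Avalanche effect: flip one plaintext bit and count differing ciphertext
# bits in the first block's worth of output.
plain[0] ^= 1
c2 = encrypt(key, iv, bytes(plain))
diff = sum(bin(a ^ b).count("1") for a, b in zip(c1[:16], c2[:16]))
print("bits changed in first block:", diff, "of 128")
```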
Full Paper

IJCST/33/6/
A-1094
   243 Efficient Data Privacy Scheme for WSNs
Madhuri. S, L. Sumalatha

Abstract

Full network-level privacy can be categorized into four parts: identity privacy, route privacy, location privacy, and data privacy, and privacy needs to be provided in all four categories. Achieving complete network-level privacy is a cumbersome task due to the constraints imposed by the sensor nodes, sensor networks, and quality-of-service issues. In this paper, we mainly concentrate on data privacy for WSNs. We therefore propose encryption and random number generation techniques to provide efficient data privacy for WSNs, using the RSA algorithm to realize those techniques. Through the RSA algorithm we can achieve an effective computation cost for performing encryption and random number generation.
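Since the abstract names RSA for both the encryption and the random-number handling, here is a textbook-RSA sketch with tiny demo primes, purely to illustrate the mechanics; real deployments need large keys and proper padding, and the reading/nonce packing shown is a hypothetical demo format, not the paper's protocol.

```python
import random

# Tiny textbook RSA with fixed demo primes (insecure; for illustration only).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)               # modular inverse (Python 3.8+)

reading = 27                       # a sensor reading to keep private
nonce = random.randrange(1, 50)    # random number folded into the message
m = reading * 100 + nonce          # pack reading with the nonce (demo only)
assert m < n                       # textbook RSA requires message < modulus

c = pow(m, e, n)                   # encrypt with the public key (e, n)
recovered = pow(c, d, n)           # decrypt with the private key (d, n)
print(c, recovered // 100, recovered % 100)  # ciphertext, reading, nonce
```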
Full Paper

IJCST/33/6/
A-1095