
International Journal of Computer Science and Technology
Vol. 7, Issue 4, Ver. 1 (Oct-Dec 2016)

S.No. Research Topic Paper ID Download
01

Proposition and Implementation of PNS Algorithm With Data Compression

Harsh Khemka

Abstract

Cryptography is a technique for secure data communication between two parties, in the presence of a third party, by transforming data into a non-human-readable format. The proposed algorithm differs from other cryptographic algorithms in its central idea: any number can be viewed as the Pth number of N digits having the single-digit sum S. The algorithm is therefore named PNS (Position-Number of digits-Single-digit sum). It can be used for encryption and decryption of any human-readable text and is thus a useful mechanism for data security via cryptography. I have additionally applied data compression to it to provide better security.
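The abstract's PNS idea rests on the repeated (single-digit) digit sum of a number. A minimal sketch of that building block is below; the `pns_view` helper and its name are assumptions for illustration, not the author's code, and the "position" component P is omitted since the abstract does not define how it is computed.

```python
def single_digit_sum(n: int) -> int:
    """Repeatedly sum the digits of n until one digit remains
    (the 'S' in the PNS view of a number)."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def pns_view(n: int) -> tuple:
    """Hypothetical helper: describe n by its digit count N and its
    single-digit sum S (P would be n's position among such numbers)."""
    return (len(str(n)), single_digit_sum(n))
```

For example, 942 has digit sum 9 + 4 + 2 = 15, and 1 + 5 = 6, so its PNS view under this sketch is N = 3, S = 6.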
Full Paper

IJCST/74/1/A-0737
02

Effective Novel Method for Handling Non-Spatial Attributes and Minimizing Query Response Time

M. Krishna, A. Lakshman Rao, G. Vijay Kumar

Abstract

Nowadays, users issue spatial queries, such as range and nearest-neighbour searches, over multidimensional objects such as hotels, ATMs, and hospitals. However, the form of user queries has changed with user requirements: for example, instead of simply asking for hotels, a user may ask to retrieve hotels offering a food menu of interest. To process such queries, the IR-tree and the spatial inverted index are used. These techniques cannot handle the growing number of user requests driven by the increasing volume of smartphones and tablets, and they cannot scale up the service. This project proposes a novel technique, the Merkle Skyline R-tree method, and outsources the data management service to cloud service providers. Existing techniques cannot handle non-spatial attributes such as service level, food quality, and price; the proposed Merkle Skyline R-tree method provides efficient services to clients.
Full Paper

IJCST/74/1/A-0738
03

Improving Software Reliability, Productivity and Quality Using Software Metrics

Harminder Pal Singh Dhami

Abstract

Software engineering is the application of a technical, systematic, scientific approach to the development, operation, and maintenance of software; that is, the application of engineering to software. Previously, software measurement was limited to measuring individual product and software attributes. In system development, a metric is the measurement of a particular characteristic of a program's performance or efficiency, which can help enhance product reliability. Metrics are used by the software industry to quantify the development, operation, and maintenance of software, and to provide information supporting quantitative managerial decision-making during the software development lifecycle. "Software metrics" is a term used to describe a wide range of activities concerned with measurement in software engineering. This module introduces the most commonly used software metrics and reviews their use in building models of the software development process. Most software metrics activities are initiated for the purposes of risk analysis and software reliability of some form or another. Yet traditional metrics approaches, such as regression-based models for cost estimation and defect prediction, provide little support for managers wishing to use measurement to analyze and minimize risk. Many traditional approaches are not only insufficient in this respect but also fundamentally inconsistent. Considerable improvements can be achieved by using causal models that require no new metrics.
Full Paper

IJCST/74/1/A-0739
04

Novel Schemes for Spatial Top-K Query Processing for Vulnerable Location Based Service Providers

K. Srikanth, M. Rajakumar, A. Lakshman Rao

Abstract
This paper presents a novel distributed framework for collaborative location-based information generation and sharing, which has become popular due to Internet connectivity and position-aware mobile devices. The framework comprises Location Based Service Providers (LBSPs), an information collector, information contributors, and framework users. The information collector gathers point-of-interest (POI) information from the information contributors and sells the data sets to the LBSPs. The LBSPs allow users to issue top-k queries, which request the POIs in a particular region with the highest top-k rating for a given POI attribute. However, LBSPs may modify the data sets obtained from the data collector and return biased top-k query results in favour of POIs willing to pay. In this paper, three novel schemes are proposed that allow clients to easily identify fake and tampered top-k query results within the proposed framework.
Full Paper

IJCST/74/1/A-0740
05

A Framework for Data Migration between Various Types of Relational Database Management Systems

Iqbaldeep Kaur, Navneet Kaur, Tanisha, Gurmeen, Deepi

Abstract
Data migration is an important event for an organization seeking to remain competitive in today's business world. However, it is a critical process that directly influences the quality of data. Data is considered a critical asset for continuing an organization's business operations. Therefore, this paper proposes a framework to improve the quality of data during migration between different types of Relational Database Management Systems (RDBMS).
Full Paper

IJCST/74/1/A-0741
06

A Study and Analysis on Wireless Network System

Iqbaldeep Kaur, Navneet Kaur, Amandeep Ummat, Jaspreet Kaur, Navjot Kaur

Abstract
As technology advances in society, the need for wired and wireless networking has become essential. Wireless networking takes into consideration the range, mobility, and the several types of hardware components needed to establish it. In the present paper, a detailed study of various types of wireless links is presented. Applications of each wireless type are also provided.
Full Paper

IJCST/74/1/A-0742
07

An Enhanced Frequent Pattern Analysis Technique from the Web Log Data

Iqbaldeep Kaur, Navneet Kaur, Nafiza Mann, Isha Vats

Abstract
To improve the user experience while accessing a website, web usage mining is used to evaluate users' previous interactions, which helps to improve the functionality of that website. In this paper, a technique for web usage mining is proposed which extends the features of synaptic search and the Frequent Pattern (FP) Growth algorithm. The proposed technique uses the synaptic-search property to search data on the web on the basis of location, and uses the FP-Growth algorithm to generate results.
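FP-Growth mines frequent patterns via a compact prefix tree; as an illustration of the frequent-pattern mining step this abstract relies on, here is a brute-force sketch over toy web-log sessions. It is a stand-in for small data, not the authors' implementation and not FP-Growth's tree construction; the session data is invented.

```python
from itertools import combinations
from collections import Counter

def frequent_patterns(sessions, min_support=2, max_size=2):
    """Count item sets occurring in at least `min_support` sessions.
    Brute-force enumeration; FP-Growth avoids this via a prefix tree."""
    counts = Counter()
    for session in sessions:
        items = sorted(set(session))  # dedupe pages within a session
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Hypothetical web-log sessions (pages visited per user visit).
logs = [["home", "news"], ["home", "news", "sport"], ["home", "sport"]]
patterns = frequent_patterns(logs)
```

Here `("home", "news")` is frequent (appears in two sessions) while `("news", "sport")` is not, so only the former survives the support threshold.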
Full Paper

IJCST/74/1/A-0743
08

Research Paper on Object Oriented Software Engineering

Iqbaldeep Kaur, Navneet Kaur, Amandeep Ummat, Jaspreet Kaur, Navjot Kaur

Abstract
With the development of the software industry and advances in software engineering, the use of Object Oriented Software Engineering (OOSE) has increased in complex real-world software. The use of OOSE in the analysis and design of software has expanded greatly, and it is now considered one of the standard software processes. OOSE combines Object Oriented Analysis (OOA) models, Object Oriented Design (OOD), and Object Oriented Programming (OOP), which together provide a powerful way to develop software. OOSE enables the use of OOP in the development and production of software after analysis and design. In this paper, we study the general terms and issues that affect software development in industry.
Full Paper

IJCST/74/1/A-0744
09

Automated Identification of Hard Exudates and Cotton Wool Spots using Biomedical Image Processing

Iqbaldeep Kaur, Navneet Kaur, Tanisha, Gurmeen, Deepi

Abstract
This paper addresses the automatic identification of abnormalities in retinal images using image processing techniques, which is very important in diabetic retinopathy screening. Manual annotations of retinal images are rare and expensive to obtain. The ophthalmoscope used for direct analysis is a small, portable apparatus consisting of a light source and a set of lenses for viewing the retina. The existence of diabetic retinopathy can be detected by examining the retina for its characteristic features; the first sign of diabetic retinopathy is the appearance of microaneurysms. This paper describes the work needed for the automatic identification of hard exudates and cotton wool spots in retinal images for diabetic retinopathy detection, using a support vector machine (SVM) for classifying images. The system is evaluated on a large dataset containing 129 retinal images. Results show that exudates were detected with 96.9% sensitivity, 96.1% specificity, and 97.38% accuracy.
Full Paper

IJCST/74/1/A-0745
10

Automatic Speech Recognition: A Review

Iqbaldeep Kaur, Navneet Kaur, Amandeep Ummat, Jaspreet Kaur, Navjot Kaur

Abstract
This research study aims to present automatic speech recognition systems and discuss the major themes and advances made in the past 60 years of research, so as to provide a technological perspective and an appreciation of the fundamental progress that has been accomplished in this important area of speech communication. After years of research and development, the accuracy of automatic speech recognition remains one of the important research challenges. The design of a speech recognition system requires careful attention to the following: definition of the various types of speech classes, the speech recognition process, ASR design issues, and speech recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in the various stages of a speech recognition system, and to identify research topics and applications at the forefront of this exciting and challenging field.
Full Paper

IJCST/74/1/A-0746
11

Research Paper on Big Data and Hadoop

Iqbaldeep Kaur, Navneet Kaur, Amandeep Ummat, Jaspreet Kaur, Navjot Kaur

Abstract
'Big Data' describes techniques and technologies to store, distribute, manage, and analyze large, high-velocity datasets. Big data can be structured, unstructured, or semi-structured, rendering conventional data management methods inadequate. Data is generated from various sources and can arrive in the system at various rates. In order to process these large amounts of data in an inexpensive and efficient way, parallelism is used. Hadoop is the core platform for structuring big data, and solves the problem of making it useful for analytics purposes. Hadoop is an open-source software project that enables the distributed processing of large data sets with a very high degree of fault tolerance.
Full Paper

IJCST/74/1/A-0747
12

Big Data Management: Characteristics, Challenges and Solutions

Iqbaldeep Kaur, Navneet Kaur, Tanisha, Gurmeen, Deepi

Abstract
New technologies, devices, and communication means emerge day by day, giving rise to rapid growth of data. Nowadays, data increases enormously every ten minutes and is hard to manage, which gives rise to the term Big Data. This paper describes big data and its challenges, along with the technologies required to handle it.
Full Paper

IJCST/74/1/A-0748
13

Biometric Authentication in Computer Security

Iqbaldeep Kaur, Navneet Kaur, Tanisha, Gurmeen, Deepi

Abstract
Biometric recognition refers to the automatic recognition of individuals based on feature vectors derived from their physiological and/or behavioural characteristics. Biometric recognition systems should provide reliable personal recognition schemes to either confirm or determine the identity of an individual. Applications of such systems include computer systems security, secure electronic banking, mobile phones, credit cards, secure access to buildings, and health and social services. By using biometrics, a person can be identified based on "who she/he is" rather than "what she/he has" (card, token, key) or "what she/he knows" (password, PIN). In this paper, a brief overview of biometric methods, both unimodal and multimodal, and their advantages and disadvantages is presented.
Full Paper

IJCST/74/1/A-0749
14

Challenges and Issues in Adhoc Network

Iqbaldeep Kaur, Nafiza Mann, Bhushan, Bharat Verma, Gurbaj

Abstract
Wireless ad-hoc networks have no dedicated gateway; every node can act as a gateway. Although much research has been done in this field since the 1990s, it has often been questioned whether the architecture of Mobile Ad-hoc Networks (MANETs) is fundamentally flawed. The reason is that MANETs are almost never used in practice: in almost every wireless network, nodes communicate with base stations and access points instead of cooperating to forward packets hop by hop. In this paper, we try to clarify the definition, architecture, and characteristics of MANETs, as well as the main challenges of constructing them. Although much work has been done to solve these problems, we show that the limitations are very difficult to overcome, which has made MANETs a flawed architecture in practice.
Full Paper

IJCST/74/1/A-0750
15

Overview of Cloud Computing

Iqbaldeep Kaur, Nafiza Mann, Bhushan, Bharat Verma, Gurbaj

Abstract
In this paper, the concept of cloud computing is studied. The first part gives a brief introduction to cloud computing, including definitions given by NIST and other scholars. The next section discusses the architecture and the various service models and deployment models provided by cloud computing. This is followed by the working of the cloud, which includes the concept of virtualization. Light is then shed on its major application areas. In the final part, its advantages and disadvantages are highlighted.
Full Paper

IJCST/74/1/A-0751
16

A Comparative Testing from UML Design using Activity Diagram

Iqbaldeep Kaur, Navneet Kaur, Nafiza Mann, Isha Vats

Abstract
UML diagrams present a graphical representation of the system. Model-driven testing not only helps in early identification of faults but also reduces the testing effort in the later stages of the SDLC. This paper intends to identify and critically review different techniques for test case generation using UML activity diagrams (UAD). The activity diagram is used to depict the dynamic aspects of a system; a UAD presents not only sequential and concurrent activities but also conditional and parallel activities. For this literature survey, different aspects such as test case generation, test automation, and test case prioritization and minimization using UAD have been explored. The analysis of the literature shows that extensive work exists on automating testing using various aspects of activity diagrams. Similarly, test case prioritization has also been explored using activity diagrams, incorporating manual, automated, and semi-automated techniques.
Full Paper

IJCST/74/1/A-0752
17

The Generalized Optimization on Scalable Constrained Spectral Clustering Methodology

K. Bindu Mounika, Dr. G. V. Satya Narayana

Abstract
An important type of prior information in clustering comes in the form of cannot-link and must-link constraints. We introduce a generalization of the popular spectral clustering technique which integrates such constraints. Motivated by the recently proposed constrained spectral clustering for the unconstrained problem, our method is based on a tight relaxation of the constrained normalized cut into a continuous optimization problem. In contrast to all other methods proposed for constrained spectral clustering, we can always guarantee that all constraints are satisfied. Moreover, our soft formulation allows optimizing a trade-off between the normalized cut and the number of violated constraints. An efficient implementation is given which scales to large datasets. We consistently outperform all other proposed methods in the experiments. The concept of clustering is widely used in various areas such as bioinformatics, medical data, imaging, marketing research, and crime investigation. The well-known types of clustering methods are spectral, hierarchical, density-based, mixture modelling, and so on. Spectral clustering is a widely used technique in most applications since it is computationally inexpensive. A study of the various research works available on spectral clustering gives an insight into recent issues in the spectral clustering area.
Full Paper

IJCST/74/1/A-0753
18

The Assessment of a Text Document Clustering and Classification using Fuzzy C-Means Clustering Algorithm

Shaik Sharmila, Bodapati Prajna

Abstract
Measuring the similarity between documents is a creative new idea in data mining and information retrieval, with applications that mainly include search support, query reformulation, and image retrieval. Standard text similarity measures perform poorly due to data sparseness and the absence of context. Document processing plays a vital role in data mining and web search; in text processing, the bag-of-words model is used. Measuring the similarity between documents is a fundamental task in document processing and text classification. Here, a new similarity measure is proposed. To quantify the similarity between documents with respect to a feature, the proposed method considers the following cases: (a) the feature appears in both documents, (b) the feature appears in only one document, and (c) the feature appears in neither document. In the first case, similarity increases as the difference between the documents' feature values decreases. In the second, a fixed value determines the similarity. In the last case, the feature contributes nothing to similarity. The effectiveness of the measure is evaluated on several real data sets for document classification and clustering.
Full Paper

IJCST/74/1/A-0754
19

Protected Assessing and Deduplication of Data in Cloud

Danaboina Dhanalakshmi, A. Sudarsan Reddy

Abstract
Cloud computing, often referred to simply as "the cloud", is the delivery of pooled resources to assorted customers over the web under various models. Cloud storage has become ever more crucial to people's everyday functioning, owing to the obvious advantages of data storage. Nevertheless, since cloud storage is not fully trustworthy, it raises security concerns about how to perform secure data deduplication while protecting outsourced data in the cloud. A further issue is that ever-growing volumes of data consume more storage. Data deduplication is a novel technique which can remove duplicate data. However, earlier deduplication techniques cannot support differential-authorization duplicate checks. We present a twin cloud, a combination of public and private clouds, to provide stronger security by encrypting the record with differential privilege keys. Thus, users without the corresponding privileges cannot perform the duplicate check. Moreover, such unauthorized users cannot decrypt the ciphertext even in collusion with the S-CSP. Finally, we show that the proposed model is secure.
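The differential-authorization scheme above involves privilege keys and a private cloud; the underlying duplicate check, however, is usually content-addressed hashing, which can be sketched as follows. This is a simplified model of the general idea, not the paper's protocol; the store and data are invented.

```python
import hashlib

def dedup_store(store: dict, data: bytes) -> bool:
    """Content-addressed duplicate check: hash the data block and
    store it only if the digest is new. Returns True on a duplicate."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in store:
        return True   # identical content already stored; skip it
    store[digest] = data
    return False

store = {}
first = dedup_store(store, b"report.pdf contents")   # stored
second = dedup_store(store, b"report.pdf contents")  # duplicate, skipped
```

Because the second upload hashes to the same digest, only one copy of the block is kept; a differential-authorization scheme would additionally check the uploader's privilege keys before confirming the duplicate.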
Full Paper

IJCST/74/1/A-0755
20

Web Based Reality With Structure Sensor

Georgi Krastev, Valentina Voinohovska, Svetlozar Tsankov

Abstract
The article describes a 3D sensor (scanner) for close-range (up to 3.5 m) virtual and augmented reality, the Structure Sensor. Web-based technologies for visualization of three-dimensional objects, especially WebGL and the Three.js JavaScript library, are the main point of discussion. The Model-View-Controller (MVC) architectural design pattern, based on separating business logic from the GUI and data in an application, is applied in the paper. Attention is paid to photorealism in three-dimensional visualization, and the results are shown.
Full Paper

IJCST/74/1/A-0756
21

Stealthy Denial of Service Scheme in Cloud

G. Aparna, B. Rajesh

Abstract
Cloud computing is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources. The success of the cloud computing paradigm is due to its on-demand, self-service, and pay-by-use nature. Cloud computing is not fully trustworthy, which raises security concerns about resources. Under this paradigm, the effects of Denial of Service (DoS) attacks involve not only the quality of the delivered service, but also the service maintenance costs in terms of resource consumption. In particular, the longer the detection delay, the higher the costs incurred. Accordingly, particular attention must be paid to stealthy DoS attacks: they aim at minimizing their visibility while being as harmful as brute-force attacks. They are sophisticated attacks tailored to degrade the worst-case performance of the target system through specific periodic, pulsing, low-rate traffic patterns. In this paper, we propose a strategy to orchestrate stealthy attack patterns, which exhibit a slowly increasing intensity trend designed to inflict the maximum financial cost on the cloud customer, while respecting the job size and the service arrival rate imposed by the detection mechanisms. We describe both how to apply the proposed strategy and its effects on the target system deployed in the cloud.
Full Paper

IJCST/74/1/A-0757
22

The Advanced Local Binary Pattern (LBP) for the Tampering Fragile Watermarking Scheme

P. Jyothirmai, Dr. N. Supriya

Abstract
In this paper we describe a novel digital image watermarking technique using Local Binary Patterns (LBP). Local binary patterns are known for their robust texture-description capabilities, and digital watermarking is used to establish the ownership of multimedia content. In this work we propose an LBP synthesis, or inverse LBP matching, procedure and demonstrate its applicability to digital image watermarking. The LBP synthesis process changes neighbourhood pixel values so that the LBP computed from those pixels equals the value we want to synthesize. The procedure takes into account the requirements of digital image watermarking, such as imperceptibility and robustness to watermark-removal attacks. Owing to the nature of LBP synthesis, only a few pixels of a given block need to be modified to embed the watermark. Simulation results show that the technique is robust to JPEG compression, rotation, and scaling attacks. The LBP synthesis process could also be used to watermark sensor data to establish ownership. We are confident that this work will open a new research direction in the authentication of digital content.
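The scheme hinges on the LBP code of a pixel neighbourhood; synthesis then perturbs neighbours so the block yields a desired code. A minimal sketch of the forward LBP computation for one 3x3 block is below; the clockwise bit ordering from the top-left is an assumption, since LBP conventions vary and the paper does not fix one here.

```python
def lbp_code(block):
    """Classic 8-neighbour LBP code for the centre pixel of a 3x3
    block: each neighbour >= centre contributes one bit, taken
    clockwise starting from the top-left neighbour."""
    c = block[1][1]
    neighbours = [block[0][0], block[0][1], block[0][2],
                  block[1][2], block[2][2], block[2][1],
                  block[2][0], block[1][0]]
    code = 0
    for bit, p in enumerate(neighbours):
        if p >= c:          # threshold against the centre pixel
            code |= 1 << bit
    return code
```

Inverse matching would adjust a few of the eight neighbour values until `lbp_code` returns the watermark bits to embed, which is why only a handful of pixels per block change.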
Full Paper

IJCST/74/1/A-0758
23

Multimedia Augmented Reality With Picture Exchange
Communication System for Autism Spectrum Disorder

Taryadi, Ichwan Kurniawan

Abstract
Autism, or Autism Spectrum Disorder, is a pervasive developmental disorder causing disruptions in thinking, feeling, hearing, speech, and social interaction. For this reason, children with autism need special training to improve their ability to learn new skills and knowledge. This work proposes a new training system using augmented reality for the training techniques of the Picture Exchange Communication System (PECS). The system helps teach children new pictures or objects together with related keywords or phrases through deep, fast interaction. Basically, the system is responsible for teaching, monitoring, and reinforcing or requesting actions that help children learn and repeat correct behaviour. The hardware setup consists of a projector, which can turn any planar surface into a display device, and a camera to monitor the children's actions by tracking their hands. This arrangement provides feedback on the virtual desk matching the actual movement of the child's hand, using visual hand detection and tracking algorithms.
Full Paper

IJCST/74/1/A-0759
24

The Self-Assured Data Sharing Among Group Members and Group Users

P Meethu Priya, M Purnachandra Rao

Abstract
Sharing group resources among cloud users is a noteworthy issue, and cloud computing provides a practical and productive solution. Due to frequent changes of membership, sharing data in a multi-owner manner on an untrusted cloud is still a challenging problem. This work proposes a secure multi-owner data sharing scheme for dynamic groups in the cloud. By applying AES encryption while uploading the data, any cloud user can securely share data with others. Meanwhile, the storage overhead and encryption computation cost of the scheme scale with the number of revoked users. I also analyse the security of the scheme with rigorous proofs. The One-Time Password is one of the simplest and most popular forms of authentication for securing access to accounts; One-Time Passwords are often regarded as a safer and stronger form of authentication, and they can be deployed across numerous machines. The scheme provides multiple levels of security for sharing data in a multi-owner manner. Cloud computing adoption has been increasing over the last few years due to attractive features such as scalability, flexibility, low cost, and easy start-up for beginners. The scheme provides effective security for data in cloud storage. Data distribution with many users accessing dynamic groups preserves data, identity, and privacy from an untrusted cloud, and permits access despite frequent changes of membership. The group manager can revoke any number of users from the dynamic group. However, collusion is possible when a revoked user tries to access cloud data without the knowledge of the group manager. In order to prevent such collusion, this paper proposes a mapping arrangement to make it impossible. Fundamentally, keys are distributed over a secure communication channel, and users obtain their private keys from the group manager.
Full Paper

IJCST/74/1/A-0760
25

Automatic Mapping of Graduates’ Skills to Industry Roles using Machine Learning Techniques: A Case Study of Software Engineering

Fullgence Mwachoo Mwakondo, Lawrence Muchemi, Elijah Isanda Omwenga

Abstract
The main focus is to determine a machine learning model for mapping graduates' skills to industry roles using the skills profiles of employed graduates. A hierarchical classification strategy using a bottom-up approach was designed, based on a bottom-up-friendly taxonomy, and applied to construct the model. Two machine learning techniques, naive Bayes and support vector machines, and a software engineering employees' profile dataset with 113 instances and 18 attributes were adopted in the investigation using an experimental design. Experiments to evaluate the model were designed using a pretest-posttest setup with a control group. While the aim was to assess performance of the model under the effect of various machine learning techniques and taxonomic structures, a carefully selected benchmark on a bottom-up multi-classification method was adopted for validation. Findings indicate model performance is not only considerably fair under both naive Bayes (57.85%) and SVM (67.15%), but also slightly above the reported benchmark score of 61%. However, the difference between the two model designs is significant (t=2.602, p=.029; t=-2.939, p=.017). In conclusion, automatic mapping of graduates' skills to industry roles, with the aim of improving employability and productivity prediction for new graduates, must involve both a suitable machine learning technique and a bottom-up-friendly taxonomic structure.
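As an illustration of one of the two techniques compared (naive Bayes over categorical skill features), here is a minimal sketch; the skill and role names and the toy data below are invented for illustration, not the study's 113-instance profile dataset, and the real model used a hierarchical bottom-up strategy rather than this flat classifier.

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Fit a categorical naive Bayes model from (skills, role) pairs."""
    role_counts = Counter()
    skill_counts = defaultdict(Counter)
    for skills, role in examples:
        role_counts[role] += 1
        for s in skills:
            skill_counts[role][s] += 1
    return role_counts, skill_counts

def predict_role(model, skills):
    """Pick the role maximizing log prior + log likelihoods,
    with add-one (Laplace) smoothing over the skill vocabulary."""
    role_counts, skill_counts = model
    total = sum(role_counts.values())
    vocab = {s for c in skill_counts.values() for s in c}
    best_role, best_score = None, float("-inf")
    for role, rc in role_counts.items():
        score = math.log(rc / total)
        denom = sum(skill_counts[role].values()) + len(vocab)
        for s in skills:
            score += math.log((skill_counts[role][s] + 1) / denom)
        if score > best_score:
            best_role, best_score = role, score
    return best_role

# Invented toy profiles: graduate skills mapped to an industry role.
data = [(["java", "uml"], "developer"),
        (["java", "testing"], "developer"),
        (["testing", "selenium"], "tester"),
        (["selenium", "uml"], "tester")]
model = train_nb(data)
```

With this data, a graduate listing "java" scores higher under the developer role and one listing "selenium" under the tester role, mirroring how a trained classifier maps a skills profile to its most probable role.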
Full Paper

IJCST/74/1/A-0761
26

Predictive Assessment of Learner’s Performance Using Decision Trees with Genetic Algorithms

S Neelima, Bodapati Prajna

Abstract
We suggest that the design and implementation of effective Social Learning Analytics (SLA) present significant challenges and opportunities for both research and enterprise, in three important respects. The first is that the learning landscape is extremely turbulent at present, in no small part due to technological drivers. Online social learning is emerging as a significant phenomenon for a variety of reasons, which we review in order to motivate the concept of social learning. We conclude by revisiting the drivers and trends, and consider future scenarios that may unfold as SLA tools and services mature. Data mining is a well-known knowledge-discovery technique, and decision trees are among the simplest and most capable decision-making models. In this project, an algorithm is proposed for predicting a learner's performance using decision trees and a genetic algorithm: the GDADT algorithm. The ID3 algorithm is used to create multiple decision trees, each of which predicts a student's performance based on a different feature set. Since each decision tree provides insight into the probable performance of each student, and different trees give different results, we are able not only to predict performance but also to identify the areas or factors responsible for the predicted result. For higher accuracy of the obtained results, a genetic algorithm is additionally incorporated. The genetic algorithm operates on the n-ary trees by computing the fitness of each tree and applying crossover operations to obtain multiple generations, each contributing trees with better fitness as the generations increase, finally yielding the decision tree with the best accuracy. The results so obtained are quite encouraging.
Full Paper

IJCST/74/1/A-0762