Information Security

SPSU Center for Information
Security Education
Building J
Suite 387
1100 South Marietta Pkwy
Marietta, GA 30060

Svetlana Peltsverger, Ph.D.
Director
speltsve@spsu.edu
678-915-4285

Technical Reports

Technical Report Series

The CISE-CSE Technical Reports are papers published by researchers associated with the Center for Information Security Education (CISE) and the School of Computing and Software Engineering (CSE).

Reports are available in Adobe Acrobat PDF (.pdf) format; in some cases only abstracts are provided. Acrobat documents can be viewed and printed with Adobe's free Acrobat Reader.


[2009] -- [2008] -- [2007] -- [2006] -- [2005] -- [2004]

CISE-CSE Technical Reports: 2009

  1. CISE-CSE-09-01 Software Security Vulnerability vs Software Coupling: A Study with Empirical Evidence
  2. CISE-CSE-09-02
  3. CISE-CSE-09-03
  4. CISE-CSE-09-04
  5. CISE-CSE-09-05
  6. CISE-CSE-09-06
  7. CISE-CSE-09-07

CISE-CSE-09-01

Software Security Vulnerability vs Software Coupling: A Study with Empirical Evidence

Varadachari Sudan Ayanam

Supervisor: Dr. Frank Tsui

Abstract

Software Security has gained popularity and has become one of the buzzwords in the field of Information Technology in recent years. Software Coupling is another attribute of interest, primarily to software developers and managers. Software Vulnerability refers to a weakness or flaw in software that could be exploited to violate the system's security.

In this thesis the basic concepts and definitions of Software Security, Vulnerability, and Coupling are first discussed to lay out the theoretical foundation. The thesis then puts forth the hypothesis that Software Coupling is a factor influencing Software Vulnerability, and an empirical data analysis is performed to support this hypothesis. The empirical study uses Mozilla's source code, since it is freely available for download and is one of the most popular Internet suites. As part of the empirical analysis, we put forth a couple of security metrics that may be used to predict vulnerabilities in software.
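
For readers who want a concrete picture of the coupling side of this hypothesis, the short Java sketch below computes a simple fan-out style coupling count from a class dependency map. The metric, class names, and data are invented for illustration; they are not the security metrics proposed in the thesis.

    import java.util.List;
    import java.util.Map;

    // Illustrative fan-out coupling count: for each class, the number of other
    // classes it depends on. This is a generic coupling measure used only to
    // make the notion concrete; the class names below are hypothetical.
    public class CouplingExample {
        public static void main(String[] args) {
            Map<String, List<String>> dependsOn = Map.of(
                "Parser",   List.of("Lexer", "ErrorReporter", "SymbolTable"),
                "Lexer",    List.of("ErrorReporter"),
                "Renderer", List.of("Parser", "SymbolTable"));
            // The hypothesis above suggests that classes with higher coupling
            // counts may also tend to have more reported vulnerabilities.
            dependsOn.forEach((cls, deps) ->
                System.out.println(cls + " fan-out = " + deps.size()));
        }
    }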


CISE-CSE Technical Reports: 2008
  1. CISE-CSE-08-01 A Test Complexity Metric Based on Dataflow Testing Technique
  2. CISE-CSE-08-02 An Empirical Study on Software Test-Case Development Complexity and Software Code Cohesion
  3. CISE-CSE-08-03 Maintaining State in JAX-WS Web Services
  4. CISE-CSE-08-04 Race Condition in Ajax-based Web Applications
  5. CISE-CSE-08-05 Validating Tools for Cell Phone Forensics
  6. CISE-CSE-08-06 Security Typing and Metrics for SOA
  7. CISE-CSE-08-07 Applying Semantic Technologies to Information Security

CISE-CSE-08-01

A Test Complexity Metric Based on Dataflow Testing Technique

Frank Tsui
Orlando Karam
Stanley Iriele

Abstract

This research report covers only part of a broader set of research activities in software design, implementation, and testing complexity conducted by software engineering faculty and students at Southern Polytechnic State University.

In this internal report we first discuss the general notion of sub-activities within software testing. We then focus on one of these sub-activities, test-case development. More specifically, we consider the development of test cases using the data definition and usage, or D-U path, testing technique [8]. A complexity metric for test-case development based on D-U path testing, called TCD-Complexity, is then proposed. Finally, an example of how to measure test-case development complexity using the newly proposed TCD-Complexity metric is shown.
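
As background for readers unfamiliar with dataflow testing, the following Java sketch illustrates definition-use (D-U) pairs, the building blocks of D-U path testing; the method, its variables, and the listed pairs are invented for illustration and are not the example or the metric definition from the report.

    // A D-U pair links a statement that defines a variable to a statement that
    // later uses it along a definition-clear path; a dataflow test suite tries
    // to exercise every such pair.
    public class DuPairExample {

        // 'max' is defined at d1 and d2 and used at u1 and u2.
        static int max(int a, int b) {
            int max = a;                 // d1: definition of max
            if (b > a) {
                max = b;                 // d2: redefinition of max (kills d1 on this path)
            }
            System.out.println(max);     // u1: use of max
            return max;                  // u2: use of max
        }

        public static void main(String[] args) {
            // D-U pairs for 'max': (d1,u1), (d1,u2), (d2,u1), (d2,u2).
            // Covering all four requires at least two test cases:
            max(5, 3);   // exercises (d1,u1) and (d1,u2)
            max(3, 5);   // exercises (d2,u1) and (d2,u2)
        }
    }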


CISE-CSE-08-02

An Empirical Study on Software Test-Case Development Complexity and Software Code Cohesion

Stanley Iriele
Frank Tsui
Orlando Karam

Abstract

This is a report on intermediate results obtained in our research studies in OO software cohesion and software test-case development complexity. There are numerous studies on OO software cohesion [2,3,4,5,6,7,11]. It is believed that more cohesive software results in better quality [4]. It is also believed that good testing results in better quality software [1,8]. In this research, we look for relationships between the attribute of cohesion and the attribute of test complexity. While it is hoped that the more cohesive the code, the easier it would be to test, it is not clear that there is a close relationship between these two attributes. We studied a small ATM-processing application made up of nine OO classes; the nine classes contained forty-one methods. Our research focuses on relating two specific OO cohesion metrics, LCOM5 [6] and ITRA-C [11], to a test-case development complexity metric, TCD-Complexity [12]. The preliminary results show a promising relationship that may lead us to cautiously use ITRA-C as a potential indicator of testing complexity and testing effort in terms of the TCD-Complexity metric.
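
For orientation, the sketch below computes LCOM5 from a method-attribute access matrix, assuming the commonly cited Henderson-Sellers formulation LCOM5 = (m - (1/a) * sum(mu(Aj))) / (m - 1), where m is the number of methods, a the number of attributes, and mu(Aj) the number of methods referencing attribute Aj. The access matrix is invented; it is not taken from the ATM application studied here, and reference [6] may define the metric slightly differently.

    // Minimal LCOM5 computation over a boolean access matrix:
    // accesses[i][j] == true when method i references attribute j.
    public class Lcom5Example {

        static double lcom5(boolean[][] accesses) {
            int m = accesses.length;         // number of methods
            int a = accesses[0].length;      // number of attributes
            double sum = 0;                  // sum of mu(Aj) over all attributes
            for (int j = 0; j < a; j++) {
                for (int i = 0; i < m; i++) {
                    if (accesses[i][j]) sum++;
                }
            }
            return (m - sum / a) / (m - 1);  // 0 = fully cohesive, 1 = no shared attributes
        }

        public static void main(String[] args) {
            boolean[][] accesses = {
                { true,  true,  false },     // method 0
                { true,  false, false },     // method 1
                { false, true,  true  },     // method 2
            };
            System.out.printf("LCOM5 = %.2f%n", lcom5(accesses));   // prints 0.67
        }
    }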

CISE-CSE-08-03

Maintaining state in JAX-WS Web Services

Kai Qian

Abstract

The purpose of this paper is to examine various options for maintaining state in web services. It looks at JAX-WS web services in particular to demonstrate these methods and to compare and contrast their strengths and weaknesses.
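
One option the paper presumably covers is keeping per-client state in the underlying HTTP session via the injected WebServiceContext. The sketch below shows that approach under that assumption; the endpoint name and operation are invented, and this is not necessarily the method the paper recommends.

    import javax.annotation.Resource;
    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;
    import javax.xml.ws.WebServiceContext;
    import javax.xml.ws.handler.MessageContext;

    // A JAX-WS endpoint that stores a per-client counter in the HTTP session.
    @WebService
    public class CounterService {

        @Resource
        private WebServiceContext context;   // injected by the JAX-WS runtime

        @WebMethod
        public int increment() {
            // Retrieve the servlet request from the message context and use its session.
            HttpServletRequest request = (HttpServletRequest)
                    context.getMessageContext().get(MessageContext.SERVLET_REQUEST);
            HttpSession session = request.getSession(true);

            Integer count = (Integer) session.getAttribute("count");
            count = (count == null) ? 1 : count + 1;
            session.setAttribute("count", count);
            return count;   // grows across calls from the same client session
        }
    }

For this to work, the client must opt into session maintenance, typically by setting BindingProvider.SESSION_MAINTAIN_PROPERTY to true on the port's request context.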

CISE-CSE-08-04

Race Condition in Ajax-based Web Application

Kai Qian

Abstract

Atomicity is an important issue in asynchronous data communication. In modern web applications, asynchrony can introduce hazardous effects that cause unexpected results. This paper discusses the race condition that occurs between the user request and the server response due to the asynchronous nature of Ajax-based web applications. A race condition occurs when multiple threads in a process try to modify critical-section data at the same time; the resulting data depend on which thread arrived last. Concurrent requests run asynchronously, and it is impossible to predict which will return first. A locking mechanism may avoid the race condition but is not a very effective approach. Our future project will develop a more effective way to detect race conditions during parsing.
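
The thread-based definition quoted above can be reduced to a few lines of Java. The sketch below is only a conceptual illustration of that definition using threads and a shared counter; the report itself concerns the asynchronous request/response race in Ajax-based applications, not Java threads.

    // Two threads update shared data without coordination, so the final value
    // depends on how their read-modify-write operations interleave.
    public class RaceExample {
        static int counter = 0;   // shared, unsynchronized critical-section data

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++;    // not atomic: read, increment, write
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join();  t2.join();
            // 200000 is expected, but lost updates usually make the result
            // smaller, and it varies from run to run.
            System.out.println("counter = " + counter);
        }
    }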

CISE-CSE-08-05

Validating Tools for Cell Phone Forensics

Neil Bhadsavle
Andy Wang

Abstract

As mobile devices grow in popularity and ubiquity in everyday life, they are increasingly involved in digital crimes and digital investigations. Cell phones, for instance, are becoming a medium or tool in criminal cases and corporate investigations. Cell phone forensics is therefore important for law enforcement and private investigators. Cell phone forensics aims at acquiring and analyzing data on the cellular phone, which is similar to computer forensics. However, the forensic tools for cell phones are quite different from those for personal computers. One of the challenges in this area is the lack of a validation procedure for forensic tools to determine their effectiveness. This paper presents our preliminary research in creating a baseline for testing forensic tools. The research was accomplished by populating test data onto a cell phone (either manually or with an identity module programmer); the effectiveness of various tools is then determined by the percentage of that test data retrieved. This study should shed light on and inspire further research in this field.

The research could be expanded in several ways. First, because we used a locked T-Mobile standard SIM card, the amount of change that could be made was limited; an unlocked test SIM card or smart card would provide a greater range of areas to which data can be written. Second, a SIM card writer or an identity module programmer capable of writing directly onto a SIM card would allow a greater range of elementary files to be populated. Third, open-source SIM card writers or identity module programmers and SIM card readers would be more suitable for reading and writing data, since researchers would be able to examine as well as modify the code.

CISE-CSE-08-06

Security Typing and Metrics for SOA

Frank Tsui, Andy Wang, and Kai Qian

Abstract

With the popularity of Service Oriented Architecture, SOA [Glass], the expectation of application software growth is increasing. SOA is less a new technical architecture than a new business paradigm for building application software from a collection of available services. Many facilities and resources, such as the Web Service Definition Language and the Business Process Execution Language, are available under the banner of SOA. Some of the traditional characteristics of software are still relevant, if not more so, under this new paradigm. These major attributes include quality, security, maintainability, configurability, and a long list of other “ities.”

Any one of these service attributes or characteristics may be viewed as a property of the service. The attribute may be represented by a type and a corresponding set of values that defines that type. For example, the security attribute may be viewed through a security type, which is defined by the set of values that the security type may be assigned. The service performance characteristic may also be defined through a performance type, which is in turn made explicit by listing the set of values the performance type may be assigned. In software engineering we have learned that there is no single universal measurement that can characterize a piece of software. Similarly, we would most likely need multiple metrics to characterize a computing service in SOA. In this paper, we focus our effort on the security attribute of a software service. Security has been studied extensively using different models; one popular avenue is access models such as RBAC [Sandhu; Finn] and RT [Li & Mitchell]. Here, we explore security as an attribute through typing. In a sense, typing is an implementation mechanism for the abstract attribute of software or service. While there have been considerations of attribute-based access control and policy management [Bandhakavi et al.; Swamy et al.], that work has been limited to the design and explication of specific languages for RT.

We first define the general concept of attribute-based security typing. An important component of this definition is the set of security values, V. The definition of the set V is shown to be the key to determining the measurement scale [Fenton & Pfleeger]. Various operations under this concept are then explored. The paper is written with a specific security type, SST, as the example to illustrate the various perspectives of a security type, which may in turn be used as a basis for designing security-typed languages or security-criteria based systems.
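
As a rough illustration of what a security type with a value set V might look like, the Java sketch below models an ordered set of security values and a dominance check. The enum, its values, and the dominates() operation are hypothetical; they are not the SST type or the operations defined in this report.

    // A toy security type: V = {PUBLIC < INTERNAL < CONFIDENTIAL < SECRET}.
    // The values form an ordinal scale, so comparisons are meaningful but
    // arithmetic on them is not.
    public class SecurityTypeExample {

        enum SecurityLevel { PUBLIC, INTERNAL, CONFIDENTIAL, SECRET }

        // A service labeled 'provided' can satisfy a request labeled 'required'
        // only if its level is at least as high as the requirement.
        static boolean dominates(SecurityLevel provided, SecurityLevel required) {
            return provided.ordinal() >= required.ordinal();
        }

        public static void main(String[] args) {
            System.out.println(dominates(SecurityLevel.CONFIDENTIAL, SecurityLevel.INTERNAL)); // true
            System.out.println(dominates(SecurityLevel.PUBLIC, SecurityLevel.SECRET));         // false
        }
    }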


CISE-CSE-08-07

Applying Semantic Technologies to Information Security

Andy Wang

Abstract

Stimulated by research and development in web services, smart agents, and the semantic web, semantic technologies have recently enjoyed massive R&D expenditure in both academia and industry. This report discusses the rationale for applying semantic technology to information security. A research agenda is proposed, along with several potential opportunities and research topics in this area.

CISE-CSE Technical Reports: 2007
CISE-CSE Technical Reports: 2006
CISE-CSE Technical Reports: 2005
CISE-CSE Technical Reports: 2004

Knowledge Management and Organizational Learning in Higher Education

By Donna R. Hutcheson
December, 2004

Business organizations have embraced Knowledge Management (KM) for the last several years as a way to remain competitive and viable in an ever-changing global marketplace. Higher Education has begun to move toward a more formal knowledge management structure.

The author examined the relationship between KM and organizational culture and how organizational learning might influence the KM endeavor. This thesis considers Kennesaw State University (KSU) and how the concepts of KM are being addressed there. The Chief Information Officer at KSU uses KM and organizational learning as the basis for organizational change and decision-making within his unit. This methodology has proven to provide a solid foundation on which to make decisions and document organizational memory. Assessment results and recommendations for change should be made widely and easily accessible to constituents as evidence of continuous improvement for accreditation procedures, national association activities, governance structures, quality improvement initiatives, and program review. It is the author's hope that readers of this thesis will have a better understanding of KM and organizational learning in general, but especially as these concepts apply to higher education.

Modeling and Simulation of Performance of IEEE 802.11 Wireless-LAN and Bluetooth Piconet

By Abdul-Lateef Yussiff
Dr. Patrick Bobbie, Advisor
August, 2004

The widespread use of the wide-range Wireless-based Local Area Network (WLAN) and its counterpart, the short-range Bluetooth piconet, has put tremendous pressure on designers of wireless protocols to assure fidelity and reliability within the freely available 2.4 GHz Industrial, Scientific, and Medical (ISM) band. The proximity of wireless devices to one another often results in interference in the unlicensed 2.4 GHz ISM band. Interference has been recognized as a major problem in estimating signal power at a particular distance from the transmitter for wireless network performance and improvement. Due to the stochastic behavior of the wireless medium, interference has also become a major problem in formulating a general mathematical model for radio channel estimation. Researchers often combine empirical and analytical techniques to model radio channels.

The thesis focuses on developing an Interference Range Model (IRM), a set of equations for determining the acceptable range of interference in a given environment. Interference is directly related to the transmitted signal power: a high signal power increases the probability of interference among signals using the same frequency band. Signal power is the signal strength at a particular point in time. Related to interference is path loss, the rate at which the transmitted signal power decreases with distance. The estimation of path loss is therefore critical, since it embraces both the signal power and the acceptable distance of separation, or range of interference. Path loss is also a function of the environment, and its estimation depends on the environment the signal passes through. To capture this effect, different environmental conditions were included in the IRM for the analysis of path loss. A simulation tool set was developed in Java for modeling, simulating, and analyzing path loss and acceptable/cutoff ranges of interference based on the IRM.
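
For readers who want a feel for the kind of calculation involved, the Java sketch below evaluates the standard log-distance path loss model, PL(d) = PL(d0) + 10 * n * log10(d / d0), where the exponent n depends on the environment. This is a textbook model shown only for orientation; it is not the IRM equations developed in the thesis, and the parameter values are common reference figures rather than the thesis's results.

    // Log-distance path loss: loss in dB at distance d (meters), given the loss
    // at a reference distance d0 and an environment-dependent exponent n.
    public class PathLossExample {

        static double pathLossDb(double plAtD0, double d0, double d, double n) {
            return plAtD0 + 10.0 * n * Math.log10(d / d0);
        }

        public static void main(String[] args) {
            double plAtD0 = 40.0;   // dB at d0 = 1 m (illustrative value)
            // The exponent is about 2 in free space and larger on obstructed indoor paths.
            System.out.printf("free space, 10 m:        %.1f dB%n", pathLossDb(plAtD0, 1, 10, 2.0));
            System.out.printf("obstructed indoor, 10 m: %.1f dB%n", pathLossDb(plAtD0, 1, 10, 3.5));
        }
    }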

Data Warehouse: Business Intelligence Architecture

By Vaisheshi Jalajam
August, 2003

Businesses both large and small have been accumulating data about customers, products, services, sales, discounts, supply-chain facilitation, and so on, in order to generate meaningful information and reuse it profitably. Information systems have dealt with various issues concerning data, such as efficient data storage, data consistency, reducing redundancy, and handling large volumes of data. However, analyzing data has always been challenged by factors such as the dispersed location of data and the diversity of data formats. The data warehouse methodology has provided a solution to the challenges of making data available for analytical purposes. As an environment, a data warehouse has the ability to extract data, transform it, and provide a framework for analytical reasoning.

I decided to explore this environment and trace its best prospects and its weaknesses. In the process, I realized that the success of data warehousing depends on its unique data modeling structure, known as dimensional modeling. The dimensional modeling methodology visualizes and models data in terms of business dimensions, and it addresses constraints that the Entity Relationship model imposes. This thesis therefore attempts to distinguish and highlight the different areas that these two most popular data modeling techniques have to offer. It is with this twin objective that I began exploring this field:

  1. To gain an integrated view of the data warehouse model
  2. To distinguish Entity Relationship modeling from dimensional modeling

This work presents a complete overview of the data warehouse environment. It discusses the differences between traditional transactional processing and the newer, dynamic multi-dimensional analytical processing, with a comparison of the entity relationship approach and the dimensional approach. It also discusses two popular reporting techniques, Online Analytical Processing and data mining.

The work can be divided into three distinct parts:

  1. History and evolution of information systems leading to data warehouses
  2. Life cycle of data warehouse
  3. A case for dimensional modeling methodology for analytical processing of data.

This work began with the hypothesis that the data warehouse environment is critical for today's analytical processing needs and, secondly, that the dimensional model is more suitable for a data warehouse than an entity relationship model.

The study revealed that the data warehouse is in fact growing in popularity across all sectors of industry and that most data warehouses are built on the dimensional model.

Comments and suggestions: Please contact 678-915-4292 or jwang@spsu.edu.