Virginia Space Grant Consortium
Student Research Conference – April 11, 2006
Omni Hotel, Newport News, Virginia

Aerospace Graduate Research Fellows
Oral Presentations – Amphitheater


Vehicle Systems Technology
Dr. Robert Ash, Old Dominion University - Session Chair

2-D Computational Studies of Subsonic Axial Rotors Incorporating Dual Airfoils
Jonathan McGlumphy, Virginia Tech

A dual airfoil compressor rotor is a configuration in which two separate blades are mounted on a common rotating wheel. The premise of this study is that a dual airfoil rotor can do more work at the same loss level as a single airfoil, thus reducing the number of required stages in a compressor. While dual airfoils are commonly used in centrifugal impellers, they have yet to be applied to a commercial axial-flow rotor. Presented here are some results of a 2-D, viscous computational study of the axial dual configuration in a fully subsonic flow field. It was found that the dual configuration offers benefits over a conventional airfoil when highly loaded (i.e., Lieblein D-Factor greater than 0.55). Future work will include completion of the 2-D computations as well as a rigorous wind tunnel experimental program for validation.
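As a concrete reference point for the loading criterion above, the Lieblein diffusion factor is computed from the cascade velocity triangle and solidity. The sketch below is illustrative; the velocities and solidity are hypothetical numbers, not values from the study:

```python
def lieblein_d_factor(v1, v2, dv_theta, solidity):
    """Lieblein diffusion factor for a compressor blade row:
    D = 1 - V2/V1 + |dV_theta| / (2 * sigma * V1),
    where V1/V2 are the inlet/exit relative velocities, dV_theta is the
    change in tangential velocity, and sigma is solidity (chord/pitch)."""
    return 1.0 - v2 / v1 + abs(dv_theta) / (2.0 * solidity * v1)

# Hypothetical cascade: 200 m/s in, 140 m/s out, 120 m/s of tangential
# turning, solidity 1.2 -> D = 0.55, the "highly loaded" threshold
# cited in the abstract.
d = lieblein_d_factor(200.0, 140.0, 120.0, 1.2)
```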

Robust Design of a Low-Boom Supersonic Aircraft
Daniel Le, University of Virginia

The deployment of supersonic aircraft over land has been limited by the intensity of the sonic boom generated during flight. Shock propagation has been studied extensively by Whitham, Walkden, and many others. The audible sonic boom is a result of the shock and expansion waves generated from various parts of the aircraft, e.g., the nose and wing leading edge. The waves coalesce such that a distinct pressure signature is generated. The sudden increase in pressure results in the sonic boom. Investigations have found that sonic boom reduction can be achieved through specific aircraft design features, such as swept wings and a sharp leading-edge nose. However, aircraft design must not only account for sonic boom generation but must also provide good aerodynamic performance. The focus of this investigation is to increase the design capability for supersonic aircraft. Thus, this research has several objectives. The first objective will be to use ModelCenter to provide a multidisciplinary analysis of supersonic concepts. Second, importance will be given to the development of the design methodology. Finally, optimization and probabilistic analysis techniques will be applied to assess aerodynamic concepts.

Synthesis and Optical Spectroscopy of Cadmium Chalcogenide Semiconductor Nanocrystals
Kwang Lee, Hampton University

High optical quality Cd chalcogenide (Te, Se, and S) quantum dots with average sizes near the Bohr radius were synthesized using a colloidal chemical reaction. The Cd chalcogenide (Te, Se, and S) nanocrystals exhibited a strong blue shift and discrete energy states which were significantly modified from bulk crystals. The discrete structure of energy states leads to a discrete absorption spectrum of quantum dots, in contrast to the continuous absorption spectrum of a bulk semiconductor. Atomic-like discrete energy states of the exciton indicate a quantum confinement effect. When fitted with the weak-confinement equation, the energy spacing between the first and second exciton absorption peaks is less than 50 meV for quantum dots larger than ~4 nm, and more than several hundred meV for ~2 nm CdTe quantum dots. Because of the quantum-size effect, this ability to tune the nanocrystal size translates into a means of controlling various nanocrystal properties, such as emission and absorption wavelengths. For example, the emission of CdSe nanocrystals can be tuned from near-infrared to dark blue by a reduction in the dot radius from 8 to 0.8 nanometers. The quantum confinement modifies the optical absorption, the photoluminescence, the radiative decay time, and the nonlinear optical properties of nanocrystals. The optical properties of quantum dots are also sensitive to the dielectric effects of the surrounding environment, which can be utilized for the detection of nanoscale cracks on spacecraft.
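The size-dependent blue shift described above is commonly estimated with the Brus effective-mass model, in which a kinetic confinement term competes with an exciton Coulomb term. The sketch below uses illustrative CdTe-like parameters; the bulk gap, effective masses, and dielectric constant are assumptions, not values measured in this work:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31   # free-electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def brus_gap_ev(bulk_gap_ev, radius_m, me_rel, mh_rel, eps_rel):
    """First exciton transition energy (eV) from the Brus model:
    E = Eg + (hbar^2 pi^2 / 2R^2)(1/me + 1/mh) - 1.8 e^2/(4 pi eps R)."""
    confinement = (HBAR**2 * math.pi**2 / (2.0 * radius_m**2)
                   * (1.0 / (me_rel * M_E) + 1.0 / (mh_rel * M_E)))
    coulomb = 1.8 * E_CHARGE**2 / (4.0 * math.pi * EPS0 * eps_rel * radius_m)
    return bulk_gap_ev + (confinement - coulomb) / E_CHARGE

# Illustrative CdTe-like parameters: Eg ~ 1.5 eV, me ~ 0.11, mh ~ 0.35,
# eps ~ 10.2. Shrinking the radius raises the transition energy,
# i.e. the blue shift reported in the abstract.
gap_2nm = brus_gap_ev(1.5, 2e-9, 0.11, 0.35, 10.2)
gap_4nm = brus_gap_ev(1.5, 4e-9, 0.11, 0.35, 10.2)
```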

Developing an Autonomous On-Orbit Impedance-Based SHM System for Thermal Protection Systems
Benjamin Grisso, Virginia Tech

Thermal protection systems on spacecraft are crucial for the survival of the vehicle during Earth reentry. The complex nature of thermal protection systems and extreme reentry temperatures do not allow easy access to monitor the condition of the external surface of the spacecraft. An active sensing system is proposed to interrogate the exterior surface and provide automated damage detection, diagnostics, and prognosis. Impedance-based health monitoring techniques utilize small piezoceramic (PZT) patches attached to a structure as self-sensing actuators to both excite the structure with high-frequency excitations and monitor any changes in structural mechanical impedance. By monitoring the electrical impedance of the PZT, assessments can be made about the integrity of the mechanical structure. Deployment of structural health monitoring systems for permanent damage detection is limited by the availability of sensor technology. The development of a DSP-based prototype is the focus of initial efforts to realize a fully self-contained active sensor system utilizing impedance-based SHM. The active sensing system interrogates a structure utilizing a self-sensing actuator and the low-cost impedance method, and all data processing, storage, and analysis is performed at the sensor location. A wireless transmitter is used to communicate the current status of the structure. Piezoelectric-based power harvesting allows the sensing system to be completely self-contained and autonomous. With this new low-cost, field-deployable impedance analyzer, reliance on traditional expensive, bulky, and power-consuming impedance analyzers is no longer necessary. Experimental validation of the prototype is performed on a representative structure and compared to traditional methods of damage detection. The benefits of this new system are discussed, along with current research and the path forward to a complete stand-alone SHM system.
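Impedance-based SHM typically reduces the comparison of a measured impedance signature against a healthy baseline to a scalar damage metric. One widely used form, shown here as a sketch (not necessarily the metric used in this particular prototype), is the root-mean-square deviation:

```python
import math

def rmsd_damage_metric(baseline, measured):
    """Root-mean-square deviation between a baseline and a current
    impedance signature sampled at the same frequencies; values near
    zero indicate no change, larger values suggest structural damage."""
    if len(baseline) != len(measured):
        raise ValueError("signatures must be sampled at the same points")
    num = sum((m - b) ** 2 for b, m in zip(baseline, measured))
    den = sum(b ** 2 for b in baseline)
    return math.sqrt(num / den)

# An unchanged structure scores 0; a shifted signature scores > 0.
healthy = [10.0, 12.0, 9.5, 11.0]
damaged = [10.4, 11.2, 10.1, 11.9]
metric = rmsd_damage_metric(healthy, damaged)
```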

Thermal and Flow Control for Airfoil-Endwall Junctures in Gas Turbine Engines
Stephen Lynch, Virginia Tech

In modern gas turbine engine designs, the combustion products entering the turbine are typically much hotter than the melting temperature of the metal components. Complex internal and external cooling schemes, usually involving the use of cooler compressor bleed air, are necessary to preserve the integrity of the turbine components. At the junction of the turbine airfoil and its casing (endwall), complex three-dimensional flows develop. These complex vortex flow patterns further increase heat transfer into the metal parts, and sweep coolant away from the endwall surface. They also contribute to reduced aerodynamic efficiency, which leads to penalties in available thrust or power production. Past studies have shown that the use of a large leading edge fillet at the airfoil-endwall junction can reduce or eliminate the endwall vortex flow pattern. Measurements of wall shear stress and heat transfer coefficients can provide information about the effects of a fillet on the endwall surface conditions. A shear stress measurement technique, known as oil film interferometry, was developed and implemented in a low-speed linear cascade with a scaled-up airfoil. High-resolution measurements of both magnitude and direction of shear stress provided quantitative information about the endwall flow features. Heat transfer coefficients were also measured with and without a leading edge fillet design. Results indicated that the leading edge fillet changed the distribution of heat transfer on the vane endwall, and reduced heat transfer in some locations. Comparisons of heat transfer coefficients and shear stress showed that the two were not related for the highly skewed, three-dimensional boundary layer in the vane passage.


Aerospace Systems Concepts and Analysis
Dr. Jim McDaniel, University of Virginia - Session Chair

A Software-Defined Ultra Wideband Communication System Testbed
Christopher Anderson, Virginia Tech

Software Defined Radios (SDRs) have the potential to change the fundamental usage model of wireless communications devices, but the capabilities of these transceivers are often limited by the speed of the underlying ADCs, DSPs, and FPGAs. An SDR receiver provides tremendous flexibility and rapid prototyping capabilities over a fixed hardware implementation. Such a receiver has the capability of supporting multiple data rates, modulation, or multiple access schemes, can adapt to the propagation environment, and is capable of operating with a variety of waveforms and communication standards. Currently, state-of-the-art Impulse Ultra Wideband (UWB) communication systems are composed of custom-developed hardware, and do not use SDR architectures. Several major challenges are involved in developing such a communication system—such systems require extremely high sampling rates, generate huge amounts of sample data, and require a tremendous amount of digital processing power. These challenges are particularly daunting when Commercial Off-The-Shelf (COTS) components are used in the development of such a system. In this paper, we investigate the development of a UWB SDR Transceiver Testbed based around an 8 GHz, 8-ADC time-interleaved sampling array. The overall objective is to demonstrate a testbed that will allow researchers to evaluate different UWB modulation, multiple access, and coding schemes, and will support raw data rates of up to 100 MB/s.
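A time-interleaved sampling array reaches a high aggregate rate by clocking N slower ADCs with sample-clock phases staggered by Ts/N and merging their outputs in round-robin order. The toy sketch below shows only the merge step; the channel count and rates are illustrative, and real arrays must also correct gain, offset, and timing mismatch between channels, which is omitted here:

```python
def interleave_samples(channel_streams):
    """Merge the outputs of N phase-staggered ADCs into one stream at
    N times the per-ADC rate. channel_streams[k] holds the samples
    taken by ADC k, whose clock lags ADC 0 by k * Ts / N."""
    n = len(channel_streams)
    length = min(len(s) for s in channel_streams)
    merged = []
    for i in range(length):          # i-th conversion of every ADC
        for ch in range(n):          # round-robin across the N channels
            merged.append(channel_streams[ch][i])
    return merged

# Eight 1 GS/s ADCs -> one 8 GS/s stream (indices stand in for samples).
streams = [[ch, ch + 8] for ch in range(8)]
full_rate = interleave_samples(streams)
```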

Planar Iodine LIF Velocity Measurements of Rarefied Hypersonic Flow Over A Tether
D. Eric Cecil, University of Virginia

An investigation is presented of Mach 12 flow over a wind tunnel model of a tether connecting an aerobraking spacecraft to an inflatable drag device. The study was undertaken to support numerical simulations employed in the design of a “ballute” (balloon-parachute) aerobraking device currently being developed by Ball Aerospace and NASA. The attachment point of the tether to the ballute is subject to extreme heating conditions due to flow impingement, and wind tunnel measurements were desired to evaluate the numerical predictions. The wind tunnel model of the tether-ballute junction consisted of a flat plate with a sharp leading edge from which a circular rod extended. Measurements were conducted in a “cold” free jet wind tunnel facility, using nitrogen gas expanded from ambient temperature. Detailed velocity measurements were made from the Doppler shift of the iodine B-X absorption spectra produced by laser-induced fluorescence (LIF) of a trace amount of iodine seeded in the gas. A planar grid of local velocity with a resolution on the order of one mean-free-path in the flow free stream was obtained using a wide sheet-beam from a frequency-tunable multiwatt argon laser to excite LIF and a low-noise liquid-nitrogen-cooled CCD camera to record multiple images.
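The velocity extraction rests on the first-order Doppler relation between the measured shift of an absorption line and the flow component along the beam. The line-center frequency below, for the iodine B-X system near the 514.5 nm argon-ion line, is approximate and used only for scale:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_velocity(delta_nu_hz, nu0_hz):
    """Flow-velocity component along the laser propagation direction
    from a measured Doppler shift: v = c * delta_nu / nu0
    (non-relativistic, first order)."""
    return C * delta_nu_hz / nu0_hz

# A 1 GHz shift of an iodine line near 514.5 nm (~5.83e14 Hz)
# corresponds to roughly 514 m/s along the beam direction.
v = doppler_velocity(1.0e9, 5.83e14)
```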

An Application of Quaternion Algebra: Inertial Navigation and Guidance
George Davailus, Old Dominion University

Currently, many six degree of freedom (6-DOF) trajectory simulations and simulations of gyroscopic motion use quaternions to define a vehicle’s orientation. Of those that do, however, none take full advantage of the properties of quaternion algebra. Quaternions, also known as hypercomplex numbers, can be treated as individual quantities for which all the standard algebraic operations are defined. Consequently, they have advantages that Euler angles and transformation matrices do not. This paper will describe the use of quaternion algebra and elliptic functions to obtain a closed-form solution for torque-free gyroscopic motion in terms of the rotational quaternion and its derivative. It will also define an alternative 6-DOF formulation that, when combined with quaternion algebra, is potentially much more powerful than current simulations.
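Treating quaternions as full algebraic objects means orientation updates and vector rotations reduce to Hamilton products. A minimal sketch of the basic operations (using the scalar-first (w, x, y, z) convention, and not the closed-form elliptic-function solution the paper develops):

```python
import math

def quat_mult(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate_vector(q, v):
    """Rotate 3-vector v by unit quaternion q via q * (0, v) * q^-1;
    for a unit quaternion the inverse is the conjugate."""
    qconj = (q[0], -q[1], -q[2], -q[3])
    w = quat_mult(quat_mult(q, (0.0, *v)), qconj)
    return w[1:]

# 90-degree rotation about z maps the x-axis onto the y-axis.
q90z = (math.sqrt(0.5), 0.0, 0.0, math.sqrt(0.5))
rotated = rotate_vector(q90z, (1.0, 0.0, 0.0))
```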

Innovations in Peer-to-Peer Communications
David Bryan, College of William and Mary

In the last 2 years, Peer-to-Peer (P2P) communications has gone from being a completely new topic to a popular mechanism for personal communications. We proposed the first standards-based P2P communications system, SOSIMPLE, and have been focused on addressing a number of issues for such systems. In particular, we have been focused on issues that have limited implementation and deployment of these systems, including security, distributed storage of offline messages, protocol standardization, NAT traversal, and defining P2P routing mechanisms optimized for communications, rather than file sharing. In this paper, we outline some of these issues and discuss possible solutions to address them.

Water Vapor Concentration in a Model Scramjet Combustor by Infrared Laser Absorption Tomography
Elliott Bryner, University of Virginia

Infrared absorption spectroscopy has become an important tool for the determination of gas dynamic properties such as temperature and species concentration in high-speed and high-enthalpy flows. The wide availability of room-temperature infrared diode lasers makes infrared laser spectroscopy particularly attractive, and the technique has been used in many research and industrial applications. As such, considerable research has been performed on the determination of spectroscopic parameters, such as the strength and position of absorption features, for many species including water vapor. However, some spectroscopic parameters, such as collisional broadening coefficients and their temperature dependence, are not well characterized; this is particularly true at high temperatures and low pressures. One application of infrared laser absorption spectroscopy is measuring species concentration in combustion flows in aeroengines such as gas turbines, ramjets, and scramjets. I am developing an instrument to measure water vapor concentration in a scramjet combustor, to be used in the determination of combustion efficiency. Conditions in the scramjet combustor are such that, to determine the properties of interest (temperature and concentration), accurate line shape parameters must be determined. To determine these parameters and quantify their temperature dependence, measurements were made of known concentrations of water vapor at several temperatures and pressures. Measurements were made for water vapor in a pair of absorption cells at low pressure to quantify the self-broadening and air-broadening coefficients and their temperature dependence.
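At its core, the concentration measurement inverts the Beer-Lambert law. The sketch below assumes a single isolated line with a known line-strength/lineshape product; the numbers in the example are illustrative, not parameters from the instrument:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mole_fraction(i0, it, s_phi_cm2, path_cm, p_pa, t_k):
    """Absorber mole fraction from Beer-Lambert absorption,
    -ln(It/I0) = (S*phi) * n_abs * L, where S*phi is the
    line-strength/lineshape product (cm^2 per molecule), L the
    path length (cm), and n_abs the absorber density (cm^-3)."""
    absorbance = -math.log(it / i0)
    n_abs = absorbance / (s_phi_cm2 * path_cm)        # absorber, cm^-3
    n_total = p_pa / (K_B * t_k) * 1e-6               # ideal gas, cm^-3
    return n_abs / n_total

# Illustrative cell: 10 cm path, 1 atm, 300 K, S*phi = 1e-21 cm^2.
# A transmission of ~97.6% here corresponds to a ~10% mole fraction.
```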


Remote Sensing
Dr. Chris Goyne, University of Virginia - Session Chair

Spatial Modeling of Mosquito Habitat in a Fragmented Landscape on the Southern Coastal Plain of Virginia
A. Scott Bellows, Old Dominion University

My primary objective was to create a set of predictive, spatially explicit classification models capable of ranking suitable mosquito habitat in heterogeneous landscapes, with an emphasis on known vectors of West Nile virus and eastern equine encephalitis. Life-history processes dictate organismal distributions as a function of spatial land cover patterns (i.e., land cover composition and configuration). Thus, I hypothesized that local mosquito abundance could be predicted from a parsimonious set of measurable environmental factors represented as thematic layers in a GIS. Layers represented soil (e.g., runoff potential, water capacity) and vegetation (i.e., Tasseled Cap transformations [TC]: wetness, greenness, brightness) characteristics. Soil moisture is closely associated with mosquito life histories, and the spatial composition and configuration of vegetation are closely linked with species diversity and abundance. TC has been useful for inferring photosynthetic activity and vegetative structure. All thematic layers were derived via direct interpretation of Landsat ETM+ imagery. Analysis will include a comparison of the ability of linear regression models (parametric) and artificial neural networks (ANN: non-parametric) to accurately predict habitat suitability and subsequent mosquito abundance. The resulting habitat suitability indices (HSIs) were derived from spatially explicit empirical data, making them useful within real landscapes. The use of data collected by remote-sensing techniques will enable the integration and analysis, within the framework of a GIS, of large and diverse data sets to an extent not feasible by field collection alone.

Carbon Monoxide and Sea Surface Retrieval Algorithm for MicroMAPS Missions
Patrick E. Hopkins, University of Virginia

The scientific goal of the Micro Measurement of Air Pollution from Satellites (µMAPS) project is to measure carbon monoxide (CO) mixing ratios in the middle troposphere from an airborne platform. Recent work has focused on the development of a data processing algorithm to determine precise total column amounts of carbon monoxide from µMAPS flights onboard Proteus in 2004. In this paper, the development of the current data reduction procedure for CO retrieval is presented. The µMAPS calibration data and the various radiative transfer models, both vital steps in CO retrieval, are discussed. The retrieval algorithm is applied to the July 22, 2004 flight (which occurred during the INTEX-NA campaign) over the Atlantic Ocean off the coast of North America. Sea surface temperature and preliminary CO total column and mixing ratio calculations from this flight are presented and compared with retrievals from NAST-I and AIRS, other CO-measuring instruments.

An Overview of Hampton University’s 48-inch LIDAR System
Sydney Paul, Hampton University

In 2004 Hampton University became the beneficiary, via governmental surplus, of a world-class lidar system built around a 48-inch diameter receiving telescope. Lidar is an acronym for Light Detection And Ranging, the optical analog of microwave radar. The telescope for the lidar system has been positioned in HU’s Observatory so that data can be taken at the zenith, viewing the sky through the movable dome roof. The proposed research will develop a new, capable lidar at HU for investigating novel laser remote sensing techniques and devices to strengthen our remote sensing program. This paper outlines HU’s 48-inch lidar system and the measurements it is expected to make. The 48-inch lidar, with its enormous collecting area, will provide HU the capability to investigate distant targets as well as phenomena with very small backscattering cross-sections. The 48-inch lidar system will be used not only to make important atmospheric measurements, but also as a test bed for developing new measurement capabilities and techniques. The latter will allow the development of small, dedicated lidars for various DOD applications, such as the measurement of toxic or lethal gases. Lidars have played important measurement roles in chemical and biological detection and identification due to their specificity, accurate ranging, and real-time remote sensing capabilities. Although the complete telescope, most of the steering optics, and the supporting structure are in excellent condition, no data acquisition system or detector system was included. In addition, the ruby and Nd:YAG lasers provided with the 48-inch lidar need extensive refurbishment and do not have the frequency stability, output stability, or narrow wavelength output required for current applications. Therefore, a new injection-seeded four-wavelength Nd:YAG laser, detectors, and a new data acquisition system were recently funded and purchased through a successful BAA proposal.
Presently, they are being installed and tested by HU students and faculty. Once the 48-inch lidar system is fully operational at HU, it will be part of the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Quid Pro Quo Validation program. The program will be important in validating the calibration and algorithms for the CALIPSO data. Data will be taken during CALIPSO overpasses, and comparisons made. CALIPSO is set to launch this year (2006). CALIPSO will be part of the “A-Train” constellation of Earth Observing satellites and will provide cloud and aerosol profiles, data important for climate research and the prediction of future climates. It will provide, from space, the first global survey of the vertical profiles of aerosols and clouds and their physical properties. CALIPSO is designed to determine the height of aerosols and clouds, the source of aerosols, and the presence of subvisible clouds. CALIPSO is a lidar and will therefore carry its own laser light source; lidar has become a powerful technique for measuring atmospheric constituents such as aerosols, clouds, and gases. CALIPSO will make measurements in three channels: elastic backscatter at 1064 nm and 532 nm, plus a depolarization channel at 532 nm. The depolarization measurements will help determine whether clouds contain liquid droplets or ice crystals. This paper will describe both the HU 48-inch lidar and the CALIPSO satellite experiment, and the expected validation using the HU 48-inch lidar.
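The ranging and signal-scaling ideas behind any elastic-backscatter lidar can be sketched as follows. This is a simplified single-scattering form; the geometric overlap function and system efficiency factors are omitted, and the numbers in the test of scale are illustrative:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(delay_s):
    """Target range from the round-trip time of a lidar pulse."""
    return C * delay_s / 2.0

def received_power(p0, beta, area_m2, range_m, bin_depth_m, transmittance):
    """Simplified elastic lidar equation: power returned from a range
    bin of depth dr at distance R, with volume backscatter coefficient
    beta (m^-1 sr^-1). The telescope collecting area enters linearly,
    which is why a 48-inch aperture reaches weakly scattering targets."""
    return p0 * beta * bin_depth_m * (area_m2 / range_m**2) * transmittance**2

# A return arriving ~66.7 microseconds after the pulse left the
# telescope corresponds to a scatterer about 10 km away.
r_10km = range_from_delay(2.0 * 10000.0 / C)
```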


Structures and Materials
Dr. Chris Goyne, University of Virginia - Session Chair

Control of a Space Rigidizable-Inflatable Boom Using Macro-Fiber Composite Actuators
Pablo A. Tarazaga, Virginia Tech

An experimental investigation consisting of vibration testing and control of a thin-film rigidizable inflatable boom was conducted. Inflatable space structures possess many advantages that make them of great interest to the space industry. These systems have advantages such as less mass, higher packaging efficiency, lower life cycle cost, simpler design with fewer parts, and higher deployment reliability. Testing and controlling these structures poses a problem due to their lightweight and flexible characteristics. In this study we show the advantages of fully embedded Macro-Fiber Composites (MFCs) in these structures. The embedded MFCs provide the ability to perform modal testing for parameter identification as well as vibration suppression. In addition, the concept of positive position feedback is extended to include acceleration and velocity feedback, and the stability of these controllers is investigated.
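Positive position feedback drives a second-order low-pass compensator with the measured position and feeds the filter state back to the actuator. A minimal time-stepping sketch of the compensator dynamics is below; the filter frequency, damping, and step size are illustrative, not tuned to the boom:

```python
def ppf_step(eta, eta_dot, y, wf, zf, dt):
    """One explicit-Euler step of a positive position feedback filter:
    eta'' + 2*zf*wf*eta' + wf^2*eta = wf^2 * y,
    where y is the measured position; the actuator command is then a
    gain times eta (gain not shown)."""
    eta_ddot = wf**2 * y - 2.0 * zf * wf * eta_dot - wf**2 * eta
    return eta + dt * eta_dot, eta_dot + dt * eta_ddot

# Driven by a constant position the filter settles to that value
# (unity DC gain), which is what makes PPF robust at low frequency.
eta, eta_dot = 0.0, 0.0
for _ in range(20000):
    eta, eta_dot = ppf_step(eta, eta_dot, 1.0, 10.0, 0.5, 0.001)
```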

3D Computer Simulations with Applications to Nondestructive Evaluation
Kevin Rudd, College of William and Mary

This presentation will briefly outline several new 3D parallel computer simulation methods. The first method simulates 3D acoustic wave interactions with material layers and objects. This simulation method is being employed to study how sound waves scatter from hidden weapons and explosives to assist in the development of an acoustic concealed weapons detector. The second method simulates elastic wave propagation in solids. It is currently being used to model ultrasound waves in complex pipe geometries. These new simulation methods take advantage of William and Mary's High Performance Computational Cluster (The SciClone).
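Solvers of this kind typically march the wave equation forward with an explicit finite-difference leapfrog update. A serial 1-D sketch of that update is below; the parallel 3-D codes decompose the grid across cluster nodes and exchange boundary planes each step, which is not shown, and the grid sizes here are illustrative:

```python
def fdtd_1d(nx, nt, c=1.0, dx=1.0, dt=0.5):
    """Leapfrog update for the 1-D wave equation u_tt = c^2 u_xx with
    fixed (reflecting) ends, starting from a unit pulse at the midpoint.
    Stable when the Courant number c*dt/dx is at most 1."""
    r2 = (c * dt / dx) ** 2
    prev = [0.0] * nx
    curr = [0.0] * nx
    curr[nx // 2] = 1.0  # initial pressure pulse at the midpoint
    for _ in range(nt):
        nxt = [0.0] * nx
        for i in range(1, nx - 1):
            nxt[i] = (2.0 * curr[i] - prev[i]
                      + r2 * (curr[i - 1] + curr[i + 1] - 2.0 * curr[i]))
        prev, curr = curr, nxt
    return curr

# The pulse splits into left- and right-going waves that reflect off
# the ends; the field stays bounded and symmetric about the midpoint.
field = fdtd_1d(101, 200)
```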

Wellposedness of Nonlinear Elastic Plate Vibrations with Restorative Force
Inger Daniels, University of Virginia

The project pertains to the mathematical analysis of solutions to nonlinear systems of partial differential equations modeling structural acoustic interactions. In this context, the presentation focuses on a partial differential equation modeling nonlinear vibrations of an elastic plate. Global existence and uniqueness of nonlinear solutions are demonstrated. The presentation highlights the effects of a particular restorative force acting on the plate, in the absence of which the solution may blow up.

A Biomimetic Micro-Cellular Material for Shape Morphing Applications
Scott Kasen, University of Virginia

Active Cellular Materials (ACMs), a new class of biomimetic composites, typically have a relative density comparable to other cellular solids, on the order of 30% or less, and incorporate one or more smart materials (i.e. shape memory alloy, PZT, electrostrictive polymer, etc.) or actuators into a cellular metallic, semi-metallic, polymeric, or ceramic structure. This paper presents a finite element model of a new ACM morphology with hexagonal repeating cells on a sub-millimeter size scale. The current design is directed towards shape morphing (for adaptive airframe applications or deployable aerospace structures) and self-healing (for impact energy dissipation) capabilities. The micro-ACM concept offers superior structural efficiency when compared to existing morphing structures by rendering the shape-changing ability intrinsic to the material. ACMs hold promise as an advanced aeronautic technology for improving the performance, reliability, and safety of today’s aerostructures.


Applied Physics
Dr. Bruce Lowekamp, College of William and Mary - Session Chair

Material Preparation and Infrared Spectroscopy of Cr2+ Doped II-VI Semiconductor Windows and Crystals for Mid-Infrared Laser Applications
Ivy Jones, Hampton University

The material preparation and infrared spectroscopy of Cr-diffusion-doped zinc and cadmium chalcogenides, including ZnSe, CdTe, Cd0.96Zn0.04Te, Cd0.90Zn0.10Te, Cd0.80Zn0.20Te, and ZnTe, are reported. The materials were prepared by a thermal diffusion process controlled by temperature (750-850°C) and time (0.25-6 days). Cr2+ doped II-VI semiconductors continue to be of significant interest as gain media in mid-infrared (2-3 µm) solid-state lasers. Commercial CrSe powder of 99.5% purity was used as the dopant source. Various samples of Cr:ZnSe and Cr:CdTe were prepared with Cr2+ peak absorption coefficients ranging from ~0.8 cm^-1 to 28.7 cm^-1. The Cr2+ room-temperature decay time varied between 5-6 µs for Cr:ZnSe and 2-3 µs for Cr:CdTe. Photoluminescence studies revealed the effect of dopant concentration quenching on the mid-infrared emission for doping concentrations above ~1×10^19 cm^-3. By increasing the Zn composition within the CdxZn1-xTe series, a shift of the Cr2+ absorption to shorter wavelengths was observed.

Passenger Effects on Electromagnetic Propagation Prediction Inside Boeing Aircraft
Mennatoallah Youssef, Old Dominion University

The focus of this effort is to analyze passenger effects on electromagnetic propagation prediction inside airplane fuselages. Previous models included internal component considerations. Passengers are expected to alter power propagation due to their material composition; in the models, passengers are assigned the material composition of water. Results from the passenger models, which are currently being examined, are expected by the end of the semester. More detailed models are expected to follow.

The G0 Experiment: Parity Violation in e-N Scattering
Stephanie Bailey, The College of William and Mary

The goal of the G0 experiment, currently underway at the Thomas Jefferson National Accelerator Facility, is to investigate the contributions of the strange quarks to the fundamental properties of the nucleon. The experiment uses a polarized electron beam and unpolarized hydrogen and deuterium liquid targets. The experiment will measure parity-violating asymmetries, over a momentum transfer range of 0.1 - 1.0 (GeV/c)^2, in elastic electron scattering of hydrogen at both forward and backward angles, and quasielastic electron scattering of deuterium at backward angles. From the measurements, one can extract the vector neutral weak form factors, G_E^Z and G_M^Z, and the effective axial current of the nucleon, G_A^e. These form factors, along with the electromagnetic form factors, will yield the contribution of the strange quark to the proton's charge and magnetization distributions. The forward angle phase was completed in 2004 and the backward angle phase begins in March 2006.

Enhancing Model Understanding Through Static Analysis
Kara A. Olson, Old Dominion University

Simulation is used throughout the sciences for many purposes. While in many cases the model output is of primary interest, often it is the insight gained into the behavior of the simulated system that is the primary benefit. This insight can come from both building the model and observing its behavior through animations, execution traces, or statistical analysis of simulation output. However, much that could be of interest to a modeler may not be easily discernible through these traditional approaches, particularly as models become more complex. Using static code analysis techniques can reveal aspects of models not readily apparent to the builders or users of the models, even when applied to relatively modest simulation models. Using a commercially available static code analysis tool, we were able to find documentation errors in the published paper, “Redundancy in Model Specifications,” by Nance, Overstreet and Page[1]. This additional information about model properties is unlikely to be detected in executing the models and contributes to the insights gained by modeling a complex system.

Monitoring Available Bandwidth of Underlying Grid Networks to Improve Application Performance
Marcia Zangrilli, College of William and Mary

Grids are becoming increasingly important to research and engineering institutions because they provide a secure way of coupling various geographically distributed resources into a single high-performance environment. Because the performance of distributed systems is intrinsically linked to the performance of the network, applications that have knowledge of the available bandwidth can adapt to changing network conditions and optimize their performance. While several algorithms have been created to actively measure the end-to-end available bandwidth of a network path, they require instrumentation at both ends of the path, and the traffic injected by these algorithms may affect the performance of other applications on the path. As part of the Wren monitoring system, we are developing techniques that use passive traces of existing traffic instead of actively probing the path to measure the available bandwidth. Our technique is designed to use either two-sided or one-sided packet traces, which gives the user flexibility in how our tool is deployed. We have completed a packet trace facility and designed new passive bandwidth algorithms. Our results evaluate the effectiveness of these new algorithms in diverse environments and demonstrate how applications can transparently integrate with the Wren monitoring system to optimize their performance.


Applied Physics
Dr. Leposava Vuskovic, Old Dominion University - Session Chair

Cognitive Radio Applications to Dynamic Spectrum Allocation
David Maldonado-Febus, Virginia Tech

The vision of cognitive radio has generated a lot of discussion and increasing attention from the wireless communications community in recent years. With the ability to learn from and adapt to both their surrounding environment and user needs, cognitive radios offer a great number of benefits in almost all markets of interest: military, government, public safety, and commercial. As a result of our work at Virginia Tech’s Center for Wireless Telecommunications, we have designed and are in the process of implementing a generalized cognitive engine/radio applicable to all these areas. In this paper, we will provide an overview of cognitive radio technology and describe the application areas where we see immediate benefits as well as our cognitive engine approach. We will then provide simulation results illustrating one of the most obvious and immediately beneficial cognitive radio applications: dynamic spectrum sharing. Although there are many techniques used to share spectrum and improve capacity in wireless channels, we will experimentally demonstrate that by sensing the environment and making real-time decisions on frequency, bandwidth, and waveform, we can achieve a 20 dB increase in signal-to-interference-and-noise ratio (SINR) in wireless LANs over the standard techniques. This improvement is achieved using a very early-stage cognitive radio and indicates how much better performance a fully-developed solution can obtain.
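The 20 dB figure above refers to the standard SINR metric, which a spectrum-sharing radio raises chiefly by moving away from interference. For reference, a sketch with illustrative powers (not values from the experiments):

```python
import math

def sinr_db(signal_w, interference_w, noise_w):
    """Signal-to-interference-plus-noise ratio in decibels:
    SINR = 10 * log10( S / (I + N) )."""
    return 10.0 * math.log10(signal_w / (interference_w + noise_w))

# Illustrative link: a 100 mW signal against 0.5 mW interference plus
# 0.5 mW noise gives 20 dB. Choosing a quieter band shrinks the
# denominator, which is the mechanism dynamic spectrum sharing exploits.
link_sinr = sinr_db(100e-3, 0.5e-3, 0.5e-3)
```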

Cognitive Radio Testbed
Lizdabel Morales-Tirado, Virginia Tech

The Federal Communications Commission (FCC) has had the task of allocating the scarce available spectrum among a plethora of new and expanding services. In order to provide room for continued growth and innovation, the FCC is looking for techniques that allow more efficient use of the spectrum. The principal technique under consideration is cognitive radio. A cognitive radio is a software radio that is aware of its environment and its capabilities; it can alter its physical-layer behavior, and it is capable of following complex adaptation strategies. The use of cognitive radio technology has the potential to vastly improve the efficiency of spectrum usage. Our team has developed a cognitive radio testbed using Tektronix off-the-shelf components, which will help us test, develop, and validate the use of cognitive radio as a proven technology for efficient spectrum utilization. The design builds on Virginia Tech’s open-source SCA implementation and test equipment software components. At this time, the cognition abilities of the radio focus on identifying particular frequency bands suitable for transmission; the cognitive radio then adjusts its center frequency, modulation, and transmitted power according to the spectrum policy delineated by the user and the performance goal (i.e., QoS). The cognitive engine is implemented using a neural network and adapts the waveform parameters according to the results obtained after demodulating the signal, in order to achieve a desired performance goal. The testbed developed in this research is the first of its kind. It provides a solution for cognitive radio testing and validation, and it shows how a cognitive radio etiquette can be developed and tested using off-the-shelf equipment and open-source software.

Boundary Integral Equations in Non-Smooth Domains
Katharine Ott, University of Virginia

We address the issue of the sharpness of the well-posedness results for Lp transmission boundary value problems. The approach relies on Mellin transform techniques for singular integrals naturally associated with the transmission problems and on a careful analysis of the Lp spectra of such singular integrals.

Imaging X-Ray Photoelectron Spectroscopy
Elizabeth Cantando, University of Virginia

X-Ray photoelectron spectroscopy (XPS) provides surface sensitive chemical analysis of materials in vacuum, enabling the determination of atomic concentration and chemical bonding within the first 50 nanometers of the sample surface. UHV techniques necessary for quantification and the theory of operation will be described. I will present progress on the design of an imaging XPS system with anticipated lateral resolution of 50 microns and 0.5 eV energy resolution. The case is also made for a low-power XPS instrument for in situ chemical analysis of extraterrestrial geologies.

Failure Pressure of Bilayer Lipid Membranes
David Hopkinson, Virginia Tech

The motion and growth of plants is the inspiration for a new biomimetic actuator that uses fluid transport across a bilayer lipid membrane (BLM) to create internal pressure and cause displacement in the actuator. In order for the actuator to be viable, the BLM must be able to withstand this internal pressure without failing. In this study, BLMs are formed over a porous polycarbonate substrate, and a hydrostatic pressure is applied to the BLM and gradually increased until it fails. This test is performed over different pore sizes to measure the failure pressure of the BLM as a function of pore radius. A similar test is used for polymer films to compare the failure pressure trends of a BLM to those of conventional engineering materials. The polymer films and BLMs are modeled as a simply supported circular plate under uniform load, first with the assumption of small deflections and then with the assumption of large deflections. It was found that the large deflection model better represents the trend of failure pressure versus pore radius than the small deflection model.
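For reference, the small-deflection model invoked above is classical thin-plate theory; the symbols below (failure stress, pore radius, membrane thickness, Poisson's ratio) are generic placeholders, not values from this study. For a simply supported circular plate of radius $a$ and thickness $t$ under uniform pressure $p$, the maximum bending stress occurs at the center:

```latex
% Small-deflection (Kirchhoff) plate theory, simply supported edge,
% uniform pressure p, plate radius a, thickness t, Poisson's ratio \nu:
\sigma_{\max} \;=\; \frac{3(3+\nu)}{8}\, p \left(\frac{a}{t}\right)^{2}
% Setting \sigma_{\max} equal to the failure stress \sigma_f predicts
\quad\Longrightarrow\quad
p_{\mathrm{fail}} \;=\; \frac{8\,\sigma_f\, t^{2}}{3(3+\nu)\, a^{2}}
\;\propto\; a^{-2}.
```

A large-deflection (membrane-dominated) analysis softens this dependence on pore radius, which is consistent with the finding that the large deflection model better matches the measured trend.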


Astrophysics/Planetary Science
Dr. Chris Hall, Virginia Tech- Session Chair

The Connection Between Low Mass X-ray Binaries and Globular Clusters
Gregory Sivakoff, University of Virginia

Massive galaxies contain globular clusters (GCs: dense, spherical concentrations of millions of stars), which can be studied in the nearby universe using NASA’s Hubble Space Telescope (HST). Increased stellar interactions, arising from the dense environs of GCs, are thought to efficiently form low-mass X-ray binaries (LMXBs), binary stars pairing a normal star of roughly solar mass or less with either a neutron star or black hole. At the distance of most nearby galaxies, the stars in an LMXB cannot be detected optically; however, LMXBs emit profuse amounts of X-rays, allowing their detection with NASA’s Chandra X-ray Observatory (CXO). We explore the optical properties of GCs containing LMXBs by combining data from CXO and HST for eleven massive early-type galaxies. Globular clusters that are more massive, smaller in extent, and contain more heavy elements are found to be more likely to contain LMXBs. These results are part of a larger effort probing the formation and evolution of LMXBs, GCs, and, ultimately, their host galaxies.

Microfossils Preserved in Highly Metamorphosed Archean Rocks: Testing the Plausibility of a New Taphonomic Window onto the Early Biosphere
James Schiffbauer, Virginia Tech

Archean siliciclastic rocks have typically not been a target of paleontological research because the general consensus regards the high metamorphism as detrimental to fossil preservation. Many Archean rocks have been metamorphosed beyond the greenschist grade, and their potential for microfossil preservation has not been extensively tested, although isotopic biomarkers in these rocks have been explored and heavily debated. Thus, it follows that no convincing body fossils have been recovered from Archean siliciclastic rocks. However, via standard palynological maceration techniques, we have recovered circular graphite discs, among abundant irregular graphite particles, from amphibolite grade metamorphic rocks of the Archean Wutai Complex (Jingangku Formation) in North China. The carbonaceous material of the Jingangku samples is mostly graphite particles, which have been verified as indigenous and graphitic via thin section petrographic observations, scanning electron microscopy, electron and Raman microprobe analyses, and elemental mapping. Application of the graphite Raman spectrum geothermometer to the consistent Raman spectra of extracted specimens suggests that these graphite discs experienced peak metamorphic temperatures broadly consistent with those of the host rock. As observed through scanning electron microscopy, these discs bear morphological features related to graphitization, but they are also characterized by features that cannot be accounted for by graphitization alone, such as circular morphology, distinct marginal concentric folds, surficial wrinkles, and complex nanostructures similar to those found on a Mesoproterozoic acritarch, Dictyosphaera delicata. Transmission electron microscopy additionally illustrates that some discs appear to have two distinct layers compressed against each other. We interpret these Archean graphitic discs as compressed and graphitized vesicles similar to acritarchs that are abundant in Proterozoic shales. If this interpretation is correct, then the ultimate carbon source for these graphitized discs should be biological as well. Ion microprobe analyses of δ13C values provide consistent, though not by themselves conclusive, evidence for a biological carbon source. This study may open a new window onto the Archean biosphere.

Probing Dark Matter in the Milky Way Using Halo Satellite Orbits
Jeffrey L. Carlin, University of Virginia

The determination of the absolute space motions of objects (dwarf galaxies and star clusters) orbiting in the Milky Way gravitational potential requires a dataset spanning a sufficient time baseline to see motions of individual stars in these distant satellites. Measurements of the orbits of these Galactic halo tracers can be used to model the mass distribution of the Milky Way out to large distances. To achieve the required time span, we must rely on photographic plate data taken prior to the advent of digital detectors. Extraction and analysis of these data have traditionally been painstaking work. We are developing an innovative technique that exploits the speed and stability of modern digital scanner technology to digitize such plates, providing a 20-30 times more efficient means of extracting photographic data than previously possible. By combining our unique set of archived photographic plates with Hubble Space Telescope data, we obtain a 50-year baseline for determination of proper motions (apparent transverse motions relative to the “fixed” background of stars and galaxies), which, when combined with the radial velocity component, yield the full orbital motions necessary to explore the large-scale mass distribution of our Galaxy.
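The proper motions described above translate into physical velocities through a standard astrometric conversion (the constant 4.74 is the usual arcsec/yr-at-1-pc factor; the example numbers below are hypothetical, not results from this work):

```python
# Standard conversion: transverse velocity in km/s from a proper motion
# mu in arcsec/yr and a distance d in parsecs (v_t = 4.74 * mu * d).
def transverse_velocity(mu_arcsec_per_yr, distance_pc):
    return 4.74 * mu_arcsec_per_yr * distance_pc

# Hypothetical example: a halo satellite at 100 kpc with a proper motion
# of 0.4 mas/yr (i.e. 0.02 arcsec accumulated over a 50-year baseline)
v = transverse_velocity(4e-4, 100_000)  # ~189.6 km/s
```

This illustrates why such long baselines are needed: at 100 kpc, even a couple of hundred km/s shifts a star by only hundredths of an arcsecond over decades.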

Tracing the Dynamics of the Galactic Disk
Peter Frinchaboy, University of Virginia

Establishing the rotation curve of the Milky Way on an absolute scale is one of the fundamental contributions needed to understand the Galaxy and its mass distribution. As preparatory work for the “Taking Measure of the Milky Way” SIM Key Project, we have undertaken a systematic spectroscopic survey of open star clusters which can serve as tracers of Galactic disk dynamics. We report progress on our initial sample of over 100 clusters for which the Hydra multifiber spectrographs on the WIYN and Blanco telescopes have delivered ~1 km/s radial velocities (RVs) of many dozens of stars per cluster. The RVs are used to derive cluster membership for individual stars in these crowded fields and to derive a bulk cluster RV. The clusters selected for study have a broad spatial distribution in order to be sensitive to the disk velocity field in all Galactic quadrants and across a Galactocentric radius range as much as 2.5 kpc from the solar circle. These clusters already have published ages, distances, and metallicity estimates, but these can be improved once chemical abundances on a uniform scale are measured from the homogeneous spectra, and once SIM parallaxes are obtained for member stars. The new RVs combined with Tycho proper motions (for bright members in each cluster) allow an initial investigation of the local disk dynamics, but this will be substantially improved once SIM proper motions are obtained for these and even more distant open clusters.

Radiation Effects on Saturn's Icy Moon Enceladus
Mark Loeffler, University of Virginia

We will present results from laboratory studies on the radiation effects on ammonia–water mixtures pertaining to the environment of Saturn’s icy moon Enceladus. We show that ion irradiation destroys ammonia efficiently and produces N2, the likely source of the N+ that has been detected in the exosphere by the Cassini INMS and CAPS instruments. On warming the irradiated mixtures, we observe outbursts of water and ice grains at temperatures much lower than those needed for sublimation of water ice. These radiation/warming-induced processes could explain the plume of water vapor and grains observed by Cassini in the south polar region of Enceladus.

Remote Pulsed Laser Raman Spectroscopy System for Detecting Water, Ice, and Hydrous Minerals on Planetary Surfaces
Christopher S. Garcia, Old Dominion University

For exploration of planetary surfaces, detection of water and ice is of great interest in the search for life on other planets. Therefore, a remote Raman spectroscopy system was demonstrated at NASA Langley Research Center for detecting ice-water and hydrous minerals on planetary surfaces. In this study, a 532 nm pulsed laser is utilized as an excitation source to allow detection in high background radiation conditions. The Raman-scattered signal is collected by a 4-inch telescope positioned in front of a spectrograph. The Raman spectrum is analyzed using a spectrograph equipped with a holographic super-notch filter to eliminate Rayleigh scattering, and a holographic transmission grating that simultaneously disperses two spectral tracks onto the detector for higher spectral resolution. To view the spectrum, the spectrograph is coupled to an intensified charge-coupled device (ICCD), which allows detection of very weak Stokes lines. The ICCD is operated in gated mode to further suppress effects from background radiation and long-lived fluorescence. The sample is placed 5.6 m from the telescope, and the laser is mounted on the telescope in a coaxial geometry to achieve maximum performance. The system was calibrated using the spectral lines of a neon lamp. To evaluate the system, Raman standard samples such as benzene, cyclohexene, and calcite were analyzed. The Raman evaluation technique is used to analyze water, ice, and other hydrous minerals, and results from these species will be presented.
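Detection geometry aside, the relation between a Raman shift and the wavelength at which a Stokes line appears is simple bookkeeping in wavenumbers (a generic relation; the ~3400 cm⁻¹ O-H stretch used below is a textbook value, not a measurement from this work):

```python
# The Stokes line for a Raman shift delta_nu (in cm^-1) under
# excitation wavelength lam0 (in nm) appears where, in wavenumbers,
# 1/lam_stokes = 1/lam0 - delta_nu.
def stokes_wavelength_nm(lam0_nm, shift_cm1):
    nu0 = 1e7 / lam0_nm              # excitation wavenumber in cm^-1
    return 1e7 / (nu0 - shift_cm1)   # back to nm

# Illustrative example: the ~3400 cm^-1 O-H stretch of water under
# 532 nm excitation lands near 650 nm
lam = stokes_wavelength_nm(532.0, 3400.0)
```

This is why a notch filter at the laser line plus a red-sensitive detector suffices: all the Stokes lines of interest fall at longer wavelengths than the 532 nm excitation.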

Tracking the Magellanic Stream(s)
David L. Nidever, University of Virginia

We explore the nature of the Magellanic Stream with the new Leiden-Argentine-Bonn (LAB) all-sky HI survey (Kalberla et al. 2005). We decompose the HI profiles into Gaussians using an automated Gaussian analysis program similar to that of Haud (2000). We find that the Magellanic Stream is composed of two filaments, as first pointed out by Putman et al. (2003), distinguishable along the entire length of the Stream visible in our dataset. One of the filaments originates in the 30 Dor region in the Large Magellanic Cloud (LMC). From the spatial and velocity variations of this filament, the drift rate of the Magellanic Stream gas is estimated to be 30 km/s, which gives an age of 3 Gyr for the Magellanic Stream. We also analyze the Leading Arm and find that many of its features are continuous and also connect to the 30 Dor region in the LMC. This is further evidence that tidal forces contributed to the creation of the Magellanic Stream.
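The Gaussian decomposition step can be illustrated with a toy fit (a minimal sketch; the automated program used in this work, following Haud 2000, is far more elaborate and must also choose the number of components):

```python
# Toy illustration of decomposing an HI velocity profile into Gaussian
# components, here with a known two-component synthetic profile.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(v, a1, v1, s1, a2, v2, s2):
    g = lambda a, v0, s: a * np.exp(-0.5 * ((v - v0) / s) ** 2)
    return g(a1, v1, s1) + g(a2, v2, s2)

v = np.linspace(-200, 200, 401)              # velocity axis, km/s
truth = (1.0, -60.0, 20.0, 0.6, 40.0, 30.0)  # two components
profile = two_gaussians(v, *truth)           # noiseless synthetic data

# Fit starting from rough initial guesses for the two components
popt, _ = curve_fit(two_gaussians, v, profile,
                    p0=(0.8, -50, 25, 0.5, 50, 25))
```

In practice the number of components per line of sight is not known in advance, which is the hard part such automated programs address.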



Aerospace Undergraduate Research Scholars
Poster Presentations – Ballroom

The Effect of C-Wings on the Aerodynamics and Longitudinal Stability of an Aircraft
Stephanie Bartley, Virginia Tech

The purpose of this research was to determine how effective a C-wing is in serving as a substitute for the horizontal stabilizer in creating pitch stability while maintaining or improving the aerodynamic effectiveness of a simple winglet. The C-wing has a similar effect to simple winglets in that it also reduces induced drag. By sweeping the wing and winglets, the need for a horizontal stabilizer may be reduced because the lift force generated on the secondary winglet, a surface located behind the center of gravity of the aircraft, can be used to counteract the nose-up pitching moment of the wing; this will, in turn, create a nose-down pitching moment about the center of gravity. The wing-tail configuration of the Cirrus SR-22 aircraft was used as a baseline comparison for the C-wing created for this research. Using vortex lattice methods, the main wing for the C-wing was designed such that its wing area and lift coefficient matched those of the SR-22 main wing. Wind tunnel tests were then used to vary the incidence angles of the vertical and secondary winglets until the maximum possible lift was generated by the C-wing. Finally, the pitching-moment characteristics and drag forces generated by this ‘optimum’ C-wing configuration were examined and analyzed in order to determine the feasibility of using a C-wing as a substitute for the horizontal stabilizer in creating pitch stability.

Implementation of a Three Axis Magnetometer onto an Attitude Spacecraft Simulator
Zarrin Chua, Virginia Tech

The spherical air-bearing simulators in Virginia Tech’s Space Systems Simulation Laboratory (SSSL) include a variety of sensors and actuators that are similar to the devices used on actual spacecraft. One commonly used spacecraft attitude determination sensor is the three-axis magnetometer. Its effective use requires an accurate model of the surrounding magnetic field. This paper presents a characterization of the magnetic field in the SSSL and describes the integration of the HMR2300 Smart Digital Magnetometer on the existing attitude simulators.
Attitude is determined by acquiring body-frame measurements of two non-parallel vectors that are known in an inertial frame. These body-frame measurements are obtained from sensors such as magnetometers or accelerometers. The attitude simulators in the SSSL currently use a three-axis accelerometer to measure the direction of the gravity vector. The magnetometer measures the strength and direction of the magnetic field at a specific location on one of the simulators. The consistent, well-characterized magnetic field in the SSSL, essentially the Earth’s known magnetic field, provides a nearly inertial reference within which the orientation of the magnetometer is measured. The magnetometer relays the magnetic field vector components to the flight computer, and attitude is determined using the Triad algorithm, which uses two vectors defined in both the body and inertial frames to determine the attitude of the body relative to the inertial frame.
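The Triad computation described above is compact enough to sketch (a generic illustration with made-up vectors, not SSSL flight code):

```python
# Minimal sketch of the Triad algorithm: two vectors measured in the
# body frame (b1, b2) and known in the inertial frame (r1, r2)
# determine the attitude matrix A such that b = A @ r.
import numpy as np

def triad(b1, b2, r1, r2):
    unit = lambda v: v / np.linalg.norm(v)
    # Build an orthonormal triad in each frame, anchored on the first
    # (more trusted) vector
    tb1 = unit(b1)
    tb2 = unit(np.cross(b1, b2))
    tb3 = np.cross(tb1, tb2)
    tr1 = unit(r1)
    tr2 = unit(np.cross(r1, r2))
    tr3 = np.cross(tr1, tr2)
    return np.column_stack((tb1, tb2, tb3)) @ np.column_stack((tr1, tr2, tr3)).T

# Made-up example: gravity (r1) and magnetic field (r2) directions,
# with the body frame rotated 90 degrees about z
A_true = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 1.]])
r1, r2 = np.array([0., 0., -1.]), np.array([0.6, 0., 0.8])
b1, b2 = A_true @ r1, A_true @ r2
A = triad(b1, b2, r1, r2)  # recovers A_true exactly for noiseless data
```

A design note on Triad: the first vector is satisfied exactly, while error in the second only perturbs the rotation about the first axis, so the more accurate sensor's vector is conventionally passed first.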

Adhesive Properties of Trisilanolphenyl-POSS
Sarah M. Huffer, Virginia

Polyhedral oligomeric silsesquioxanes (POSS) have been an innovative area of research for the past twenty years. Their unique properties allow for use in aerospace applications as space-survivable coatings and insulation. Recent studies showed that trisilanol-POSS derivatives form self-assembled monolayers at the air/water (A/W) interface. The purpose of this study was to improve adhesion between ceramics and metals (Configuration 1) and metals and polymers (Configuration 2) by preparing multilayer films at various pH values and metal ion concentrations using trisilanolphenyl-POSS (TPP). These multilayer systems were prepared using spin coating for the polystyrene polymer layer, the Langmuir-Blodgett (LB) technique for the TPP layer, and physical vapor deposition (PVD) for the aluminum layer. The resulting films were characterized for quality and stability using atomic force microscopy (AFM), optical microscopy (OM), X-ray photoelectron spectroscopy (XPS), and dewetting experiments. Initial experiments on Configuration 1 demonstrated that TPP-aluminum ion complexes created a smooth aluminum film on silica while TPP alone caused a blistered aluminum surface. Dewetting experiments on Configuration 2 showed that the polystyrene layer completely dewet on TPP, but the TPP-aluminum ion complexes suppressed dewetting.

Mechanical Properties of a Photocurable Polymer Formulation
Trevor Kemp, University of Virginia

Spacecraft structural members could potentially be fabricated in space using photocurable polymers, with the potential benefits of lower energy use, reduced risk to life from fewer manned missions, and lower launch costs. In this research project, experimental material testing has been carried out to evaluate the potential for a multifunctional thiol-ene polymer to be used as a structural material. The activities of the project are described and include polymer selection, mold making, sample creation, and tensile testing. The relative composition of photoinitiator to thiol-ene was varied, along with exposure time, in order to find a composition with ideal stiffness. Samples of the thiol-ene polymer conforming to ASTM standard D 638 were tested with a tensile tester and the results are presented. The results indicate there is potential for this type of material to be used structurally, but that further research should be conducted.

Development of a Freestream Particle Seeder for PIV Measurements of Supersonic Combustion
Joshua D. King, University of Virginia

Scramjets, specialized jet engines capable of operating at hypersonic speeds, show enormous potential [1]. A scramjet-powered civilian transport could take passengers anywhere in the world in under three hours [2]. By eliminating the need for onboard oxidizers, scramjet-powered transatmospheric vehicles would lower the costs of reaching space, possibly replacing traditional rockets in that role [5]. Though there have been two successful demonstrations of scramjet-powered aircraft with the X-43, much work remains to make scramjets practical [2, 3, 7]. The Aerospace Research Laboratory works toward this goal by collecting data from wind tunnel tests to improve modern computational fluid dynamics programs. Testing is carried out with the Supersonic Combustion Facility, a supersonic wind tunnel whose test section is a scramjet combustor [6, 7]. Velocity fields inside the Facility are measured using Particle Image Velocimetry (PIV), a process that involves introducing, or seeding, particles into flows and examining their motion [5, 7]. Data from PIV testing are shown in Figure 1. There were problems and limitations with the experimental technique, however. First, velocities could only be determined for particle-seeded regions, limiting the quantity of data that was gathered; previously, the Aerospace Research Laboratory had only seeded hydrogen fuel and studied combustion processes. Second, because the entire flow was not seeded, problems arose with the velocities generated by the PIV computer software [4]. This paper focuses on my efforts to correct these problems, which involve designing and building a freestream particle seeder to place particles throughout the entire wind tunnel. Running off compressed air, the seeder uses a particle fluidizer (Figure 2) and injects the particles into the wind tunnel upstream of the supersonic nozzle. Partial system testing has also been carried out, and the results are presented. Finally, the paper notes opportunities for future research related to the seeder system.
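The PIV principle referred to above can be sketched in a few lines (illustrative only, not the laboratory's software): corresponding interrogation windows from two frames are cross-correlated, and the correlation peak gives the particle displacement, hence velocity once divided by the inter-frame time.

```python
# Toy PIV step: recover the displacement between two interrogation
# windows from the peak of their FFT-based circular cross-correlation.
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a."""
    corr = np.fft.ifft2(np.fft.fft2(win_a).conj() * np.fft.fft2(win_b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n = corr.shape[0]
    wrap = lambda d: d - n if d > n // 2 else d   # map to signed shift
    return wrap(dy), wrap(dx)

rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))                    # synthetic particle image
frame2 = np.roll(frame1, (3, -2), axis=(0, 1))   # particles shifted 3 down, 2 left
dy, dx = piv_displacement(frame1, frame2)        # recovers (3, -2)
```

Real PIV codes add sub-pixel peak interpolation and window overlap; the freestream seeder matters because this correlation only works where particles are actually present.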

Protocols for Cognitive Radios: Connection Setup and Maintenance Techniques
Larissa Marple, Virginia Tech

Cognitive radios, effectively intelligent software-defined radios, are designed to search for an unspecified frequency over which to communicate. By utilizing tradeoffs normally controlled by the user, cognitive radios can operate more efficiently by balancing the various “meters and knobs” that measure and control signal performance. One of the goals of such “thinking” radios is to be able to use the licensed spectrum in the TV band in a way that does not interfere with the licensed users. The protocols concerned with connecting multiple radios on an unspecified frequency create obstacles as to how the radios find each other, and how to make tradeoffs between connection delays and the security of the transmission. Further complications arise if the licensed user interrupts the radios by broadcasting on the specified frequency without notice, forcing the radios to disconnect abruptly. The radios must then have a predetermined method of finding each other on one of the other available frequencies in the band. Several methods are being considered to solve the problem of connection and reconnection. In one case, each radio would narrow the list of potential frequencies down to a list of available frequencies, and then each radio would use an algorithm to jump around in frequency, searching for the other radio. A second option is to configure the radios in a master-slave relationship where, upon disconnection, the master radio would find an available frequency and broadcast a signal for the slave radio to find by checking the available frequencies. Each solution has its own advantages and disadvantages. Assuming the algorithms are sufficiently shielded, the frequency-hopping method would be relatively secure, since it could not be inhibited by a frequency jammer and the frequency of communication could not be easily predicted beforehand. Of course, the time delay between disconnection and reconnection might be noticeable or even disruptive to the user, which is extremely undesirable. The master-slave method should be very time efficient, since the radio only does a linear sweep of the available frequencies rather than having to use probabilities to align the radios. The potential problem occurs during the initial connection, when the radios have not had a chance to decide which radio is the slave and which is the master. Cognitive radio technology has the potential to change spectrum usage as it exists today by letting radio communication coexist with licensed users. There are many challenges to overcome with this new technology, but the overall goal is to ensure that the radios do not cause interference to the licensed users in the spectrum, while assuring that the security and reliability of the radios is not compromised.
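The reconnection-delay tradeoff between the two schemes can be made concrete with a toy model (hypothetical parameters, not the actual protocols under study): with n available channels and independent random hops each slot, the expected time to rendezvous is about n slots, whereas a linear sweep by a slave against a stationary master takes at most n slots.

```python
# Toy model of random-hopping rendezvous: each radio picks a random
# channel from the shared free list every slot; they reconnect when
# their picks coincide. Mean delay is ~n_channels slots.
import random

def slots_to_rendezvous(n_channels, rng):
    slots = 1
    while rng.randrange(n_channels) != rng.randrange(n_channels):
        slots += 1
    return slots

rng = random.Random(42)
trials = [slots_to_rendezvous(10, rng) for _ in range(2000)]
mean_delay = sum(trials) / len(trials)   # close to 10 for 10 channels
```

The random scheme's security comes from this very unpredictability; the price, as noted above, is the longer and more variable reconnection delay.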

Development of a Reusable Biological Sounding Rocket Payload
Nathanael Miller, Old Dominion University

The absence of significant gravitation during space travel has been observed to produce a variety of biological effects in astronauts. One possible effect is increased cell mobility related to diminished cell separation barriers. It has been proposed that exposure to microgravity can cause cancer cells to become more deformable, affecting their ability to spread (i.e., facilitating metastasis). Engineering students from Old Dominion University have been working with biological sciences faculty and students from Salisbury University in Maryland to develop a reusable payload system that can investigate how exposure to approximately 250 seconds of microgravity affects a type of mouse leukemia cell. As overall project manager, I will discuss the design, development, and testing of a reusable payload system that can transport up to eight experiment units through the launch and powered-flight phases of the sounding rocket flight and, after subjecting the samples to microgravity, process them so that the influence of microgravity can be studied when the payload is recovered.

Computer Simulation of Radiation Shielding in Space by Polymeric Materials Abstract
Chris O’Neill, College of William and Mary

This research has included extensive analysis of the radiation shielding capabilities of various polymeric materials being developed for radiation shielding in manned space flight. Code validation was performed using NASA Langley’s GRNTRN deterministic ion code to make comparisons against experimental data recorded at Brookhaven National Laboratory. GRNTRN uses Green’s functions to simulate the passage of particles through materials. The code currently can only model one type of particle at one specific energy, but is being expanded to simulate the actual space radiation environment. Additional work has been conducted using NASA Langley’s HZETRN, its current-generation space code. This code can simulate the actual space radiation environment, using both Galactic Cosmic Ray spectra and Solar Particle Event spectra. HZETRN was used to evaluate the shielding capabilities of various polymers. Further work has been conducted in developing a new code using GEANT4, a simulation toolkit developed at CERN that models the passage of particles through matter. This new code is being used to make comparisons against NASA Langley’s HZETRN, which will hopefully increase the level of confidence in HZETRN’s results. Lastly, research has been conducted in producing polymeric panels using prepreg. These panels consist of many layers of carbon fibers surrounded by polymers with good radiation shielding properties. Such panels could serve as multi-functional materials, combining good mechanical and radiation shielding properties.

Study of Feasibility of New Configuration of Large-Scale Heat Pipes
Jessica Sheehan, University of Virginia

Heat pipes are a well-understood and widely implemented technology for small- to moderate-scale applications, such as microelectronics cooling. Currently, large-scale heat pipes are not being utilized for the rapid removal of large amounts of concentrated heat. For this experimental investigation, two large-scale heat pipe configurations are investigated. The first configuration is a 2' x 2' heat spreader plate (a type of heat pipe) and the second is an innovative heat pipe system that combines traditional heat pipes and heat spreader plates. It was shown that the large-scale heat spreader plate quickly becomes isothermal and works as a traditional heat pipe, thus demonstrating the ability of this configuration to efficiently spread out large amounts of deposited heat. Experimentation on the innovative heat pipe system yielded the same positive outcome, demonstrating that this configuration also works as a traditional heat pipe.