An ASIP architecture framework to facilitate automated
design space exploration and synthesis for Iterative Repair
solvers
Aravind Dasu and Jonathan Phillips
Electrical and Computer Engineering
Utah State University
4120 Old Main Hill
Logan, UT 84321 USA
Abstract-Autonomous dynamic event scheduling, using
Iterative Repair techniques such as those employed by CASPER
and ASPEN, is an essential component of successful space
missions, as it enables spacecraft to adaptively schedule tasks in a
dynamic, real-time environment. Event rescheduling is a
compute-intensive process. Typical applications involve
scheduling hundreds of events that share tens or hundreds of
resources. We are developing a set of tools that automate the
derivation of application-specific processors (ASIPs) from ANSI
C source code, producing processors that perform this scheduling
efficiently. The tools will produce VHDL code targeted at a Xilinx
Virtex-4 FPGA (Field Programmable Gate Array). Features of
FPGAs, including large processing bandwidth and embedded
ASICs and block RAMs, are exploited to optimize the design.
Efficiency is measured by combining the factors of execution
speed, circuit size, power consumption, and fault tolerance.
Iterative Repair problems are generally solved using a
combinatorial search heuristic, such as Simulated Annealing
(which is used by CASPER and ASPEN), Genetic Algorithms, or
Stochastic Beam Search. All of these methods operate by
gradually improving an initial solution over hundreds or
thousands of iterations. We propose an FPGA-based
architectural framework derived from ANSI C function-level
blocks for accelerating these computations. At a function level,
99% of the work done by any Simulated Annealing algorithm is
the repeated execution of three high-level steps: (1) generating a
new solution, (2) evaluating the solution, and (3) determining
whether the new solution should be accepted. The specifics of
how each step operates vary with the application and are
implemented in VHDL through data- and control-flow analysis
of the source C code. In this paper, we discuss specifics of an
architecture template for automated processor design.
I. INTRODUCTION
Field Programmable Gate Arrays (FPGAs) are becoming
increasingly popular as a platform of choice for spacecraft
computer systems. FPGA-based designs are highly cost
effective compared to Application-Specific Integrated Circuits
(ASICs), and provide more computing power and efficiency
than standard microprocessors. Current and planned NASA
missions that utilize FPGA technology include MARTE (Mars
Astrobiology Research and Technology Experiment) [1] and
the Discovery and New Frontier programs [2]. However, the
complexity of designing even reasonably efficient micro-
architectures on commodity FPGA devices is daunting for
engineers outside the realm of VLSI design.
Therefore, a methodology for automatically deriving
FPGA-based application-specific processors for the mission
planning and event scheduling computations performed by
satellites and deep-space probes will lower this steep barrier
and open FPGA adoption to a wider audience that lacks VLSI
design skills. Through our methodology, custom ASIPs that
exploit features of the scheduling algorithms and maximize
system efficiency can be quickly designed on FPGAs.
II. RELATED WORK
Our methodology leverages concepts from several different
research areas, including hardware implementations of
heuristic search techniques, the design of application-specific
instruction processors (ASIPs), and methods for performing
design space exploration for FPGA-based processors. Recent
advances in each of these fields are discussed in this section.
Iterative repair utilizes a combinatorial search heuristic,
such as a genetic algorithm (GA), simulated annealing (SA),
or a stochastic beam search (SBS), to arrive at a solution. In
theory, implementing these combinatorial search algorithms in
hardware could significantly speed up the search process.
Large amounts of parallelism and pipelining can be extracted
from GAs and SBS, since each new generation is largely a
function of only the previous generation.
FPGA-based GAs and SBS have been implemented for the
purposes of blind signal separation [3], filter design [4],
function interpolation [5], and speech recognition [6]. This
technique, in which entire solutions are passed between
pipelined modules, works well as long as the solution length is
kept reasonably small. Iterative repair problems, however, are
complex enough that a single solution can be hundreds of bytes
long.
Design space exploration in the context of FPGA-based
architectures is a powerful tool. Exploring a design space is,
in essence, searching the combinatorial space of all possible
hardware architectures that can support a given function. The
goal is to identify the architecture that yields the best tradeoff