3D Computed Tomography
Czech Technical University in Prague
Faculty of Nuclear Sciences and Physical Engineering
Department of Physical Electronics
A RESEARCH PROJECT
Author: Bc. Jan Kratochvı́la
Supervisor: Ing. Alexandr Jančárek, CSc.
Consultants: Doc. Ing. Ladislav Pı́na, CSc.
Academic year: 2007/2008
Declaration
With this, I declare that I have written this paper on my own, that I have marked all citations, and that I have used no sources or aids other than those named.
In Prague,
signature
Acknowledgments
I would like to thank Ing. Alexandr Jančárek, CSc. for supervising my work and for every piece of advice. I also thank Ing. Josef Prokop for guiding me during the work and for many useful hints.
I would further like to thank my parents and sister for their support while I was working. I am also very thankful to my girlfriend Jana for all her patience, understanding, and interest in my work.
Abstract
The goal of the project is to build a computed tomographic scanner. It consists of the following components: a Shad-o-Box X-ray camera based on CMOS technology was used as the detector, and the source of X-ray radiation was an Isovolt 420/5 X-ray tube, which is part of the existing industrial tomographic scanner BT-400. During the project, we explored various aspects of realizing computed tomography: the acquisition of projection images, the reconstruction in a computer, and the influence of various effects on the final image quality. The necessary software was written in MATLAB and in C. As a result of the work, a working experimental instrument was built. Its resolution was about 2.5 times higher than that of the BT-400 tomograph.
Key words: CT, CAT, computerized tomography, X-Ray, reconstruction, filtered
backprojection, Radon transform, artefacts
Abstrakt
The aim of this work is to assemble a computed tomograph. The device uses a Shad-o-Box X-ray detector based on CMOS technology. The radiation source is an Isovolt 420/5 X-ray tube, which is part of the installed industrial tomograph BT-400. During the work we investigated the individual aspects of the practical realization of computed tomography: the acquisition of projections, the reconstruction in a computer, and the influence of various effects on the resulting image quality. The necessary computer programs were created in the MATLAB environment and in the C language. The result of the work is a functioning experimental instrument whose resolution was roughly 2.5 times higher than that of the BT-400 tomograph.
Klı́čová slova: computed tomography, reconstruction, filtered back projection, X-ray radiation, Radon transform, artefacts
insert original assignment
Contents

1 Preface

2 Theoretical Part
  2.1 X-ray radiation
    2.1.1 Attenuation in matter
    2.1.2 Radiation scattering and absorption
    2.1.3 X-ray generation
  2.2 Computed tomography
    2.2.1 Radon transformation
    2.2.2 Fourier Slice Theorem
    2.2.3 Filtered back-projection algorithm for parallel beam
    2.2.4 Fan beam reconstruction algorithm
    2.2.5 Applications of computerized tomography
      2.2.5.1 Medicine
      2.2.5.2 Other applications

3 Solution
  3.1 Block scheme
  3.2 Used devices
    3.2.1 Shad-o-Box X-ray camera
    3.2.2 PXD1000 frame grabber
    3.2.3 Rotation stage
    3.2.4 X-ray tube
  3.3 System geometry
  3.4 Sample rotation
    3.4.1 Accuracy of the movement
  3.5 Image capturing
    3.5.1 Pixels deinterlacing
    3.5.2 Correct exposure
    3.5.3 Image corrections
    3.5.4 Spot filtering
    3.5.5 Beam hardening
  3.6 Image reconstruction
    3.6.1 Fan beam geometry
    3.6.2 Center of rotation calculation
    3.6.3 Filtered back-projection
  3.7 Tomography measurements procedure
  3.8 Scripts description
    3.8.1 j_avg.m
    3.8.2 j_cor.m
    3.8.3 j_init.m
    3.8.4 j_load.m
    3.8.5 j_loadd.m
    3.8.6 j_median.m
    3.8.7 j_mksin.m
    3.8.8 j_mul.m
    3.8.9 j_norm.m
    3.8.10 j_rec.m
    3.8.11 j_show.m
    3.8.12 j_tomo.m
  3.9 Used software

4 Conclusion

Bibliography

A Code listings
List of Figures

2.1 A typical X-ray spectrum produced by a tube with a tungsten target
2.2 Projections of two cylinders from two different angles
2.3 Line integrals of an object
2.4 Fourier slice theorem
2.5 Back projection from different numbers of projections
2.6 Filter influence on the back projection result
2.7 Parallel beam and fan beam geometry
2.8 CT image of a human brain
2.9 CT scanner for medical purposes
3.1 Block scheme of the system
3.2 Shad-o-Box X-ray camera [1]
3.3 PXD1000 digital frame grabber
3.4 Rotation stage with the servo controller
3.5 Isovolt 420/5 X-ray tube
3.6 Simulated X-ray tube spectrum at 100 kV
3.7 Detail of the tomography system
3.8 Accuracy of the step size
3.9 j_capture flowchart
3.10 Shad-o-Box channel layout and pixels deinterlacing
3.11 Flat panel linearity
3.12 Histogram of a correctly exposed image
3.13 Flat-field image
3.14 Comparison of a corrected image to the original
3.15 Ring artefacts
3.16 Images: original image and spot-filtered image
3.17 Average attenuation of the radiation
3.18 Reconstructions using parallel and fan beam geometry
3.19 The influence of the center of rotation position
3.20 Overview of the whole tomography system
Chapter 1
Preface
Computed tomography has been a powerful tool for visualizing the inner structure of various bodies since its invention by Hounsfield in 1972. It uses penetrating radiation (such as X-rays) which is detected after it passes through an object. By acquiring multiple images (projections) of the object from multiple angles, we can reconstruct a cross-section of the object. The method does not damage the object (except for the harmful influence of high doses of X-rays on living organisms). Computed tomography has found its widest use in medicine; however, it is used in many other fields as well, among them science, archeology, materials science, biology, and non-destructive testing.
Tomographic techniques can be based on many different physical measurements, such as attenuation of X-rays, attenuation of light, electrical conductivity, electrical capacitance, and many more. In this project, X-rays were chosen because they are suitable for imaging a wide spectrum of objects.
The project explores various aspects of realizing computed tomography in practice. It starts with four separate components: an X-ray detector, an X-ray source, a rotation stage, and a computer. In the course of the project, all parts were linked together into a working system which is able to produce images of tomographic reconstructions of objects under investigation.
In this paper, the procedure of building the instrument is described in detail. We mention important findings and steps which helped to improve the system's performance. The appendix includes complete code listings of all programs used, so that a reader can fully understand the basis of the instrument.
Chapter 2
Theoretical Part
2.1 X-ray radiation
X-ray radiation is a form of electromagnetic radiation with wavelengths between 10^-8 m and 10^-11 m, which corresponds to the frequency range of 3×10^16 Hz to 3×10^19 Hz. X-rays are produced when fast-moving electrons hit a metal target, where they lose their speed and energy rapidly. Most of their kinetic energy is transformed to heat; however, a part of it gives rise to X-rays. The faster the electrons move before the impact with the metal target, the higher the energy (and the shorter the wavelength) of the resulting radiation.
X-rays are able to penetrate (pass through) matter that is opaque to light. The photons of the radiation interact with the particles in the matter. We can distinguish two different processes which occur: absorption and scattering.
2.1.1 Attenuation in matter
The denser the matter is, the more the X-rays are attenuated. According to the Beer-Lambert law, the attenuation is exponential:

I = I0 exp(−µx),    (2.1)

where I0 is the intensity of the incident radiation, I is the intensity of the radiation after passing through material of thickness x, and µ is the linear attenuation coefficient, which depends on the absorbing material and on the X-ray radiation. The equation is valid only for monoenergetic radiation and for narrow beams.
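As a quick numerical sketch of equation 2.1 (in Python for illustration; the value of µ is invented, not taken from any measurement in this work):

```python
import math

def transmitted_intensity(i0, mu, x):
    """Beer-Lambert law (eq. 2.1): intensity after passing through
    a thickness x of material with linear attenuation coefficient mu."""
    return i0 * math.exp(-mu * x)

# Illustrative numbers: mu = 0.5 cm^-1, thicknesses of 2 cm and 4 cm.
r2 = transmitted_intensity(1.0, 0.5, 2.0)
r4 = transmitted_intensity(1.0, 0.5, 4.0)
# The attenuation is exponential: doubling the thickness squares the
# transmitted fraction, so r4 equals r2 squared.
assert abs(r4 - r2 ** 2) < 1e-12
```

Taking the negative logarithm of the transmitted fraction, −ln(I/I0) = µx, recovers the line integral of the attenuation coefficient, which is exactly the quantity computed tomography reconstructs.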
2.1.2 Radiation scattering and absorption
The total attenuation expressed in the previous section is the sum of the following
effects [15]:
Photoelectric absorption An incident photon hits the atom, the photon disappears, and an electron leaves the atom. The energy of the incident photon is used to break the bond of the emitted electron and to give it kinetic energy. The energies of the emitted electrons are characteristic for each chemical element (as they correspond to the binding energies of particular shells in the atom).
Rayleigh scattering The process occurs mainly at lower photon energies. The incident photon is deflected by the electrons of the matter without any change in the
energy of the atom and without emitting any electrons.
Compton scattering The main contribution to the total attenuation comes from Compton scattering, which takes place mainly at middle energies. A photon is scattered inelastically and an atomic electron recoils out of the atom. A secondary photon of reduced energy emerges after the interaction. For a single scattering event, the process is described analytically; however, when the photon passes through matter, multiple scattering occurs, making it much more difficult to compute.
Pair production The effect occurs at high energies (larger than 1.02 MeV). An incident photon disappears and an electron-positron pair is created. The positrons have a very short life; their annihilation is accompanied by the emission of two photons with a characteristic energy of 0.511 MeV each.
2.1.3 X-ray generation
To generate X-rays, we need to establish the following conditions:
1. A source of a stream of electrons, which is usually a heated filament in a vacuum tube.
2. A way to accelerate the electrons. This is done by applying a high voltage between the source of the electrons and the target.
3. A target hit by the electrons; it must also be inside the vacuum tube.
Figure 2.1: A typical X-ray spectrum produced by a tube with a tungsten target (adopted
from [11])
2.2 Computed tomography
Computed tomography refers to an imaging method where a cross-section of an object
is visualized using projection data collected by illuminating the object from multiple
different directions.
Computed tomography has found its widest application in medicine, where it enables doctors to see anatomical sections of a patient's organs and inner structures without any damage to the patient. However, it is successfully used in many other fields of human activity, such as archeology, non-destructive testing, and biology.
The basis of the computed tomography is a reconstruction of an image of an object
under investigation using projections of the object at different angles. A projection at
a given angle is the integral of the image in the direction specified by the angle [14].
This is shown in figure 2.2.
The theoretical basis for the reconstruction was established in 1917 by Johann Radon; his theory was later named the Radon transform. However, the first tomographic scanner was invented by Godfrey Hounsfield in 1972, who shared a Nobel Prize for it with Allan Cormack. Hounsfield showed that it was possible to get a reconstructed image even when the theoretical requirements were not exactly met.

Figure 2.2: Projections of two cylinders from two different angles (adopted from [14])
2.2.1 Radon transformation
The description in this section is based on [14]. A line integral is the integral of a function along a line. To describe line projections, we will define a coordinate system according to figure 2.3. The object in the figure is represented by a two-dimensional function f(x, y) and each integral by the parameters (θ, r). The equation of line AB in figure 2.3 is

x cos θ + y sin θ = r    (2.2)

Then we define the line integral Pθ(r) as

Pθ(r) = ∫_{(θ,r) line} f(x, y) ds    (2.3)
Using a delta function, it can be rewritten as

Pθ(r) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x, y) δ(x cos θ + y sin θ − r) dx dy    (2.4)
The function Pθ(r) is known as the Radon transform of the function f(x, y). A projection at a given angle consists of the set of integrals obtained when θ is held constant and r ranges over the whole object.
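On a discrete pixel grid, such a projection can be sketched for the two axis-aligned angles, where the line integrals of equation 2.3 reduce to plain column and row sums (a minimal Python illustration; the phantom values are made up):

```python
# Discrete sketch of equation 2.3 for the two axis-aligned angles,
# where the line integrals reduce to column and row sums of the image.
phantom = [
    [0, 0, 0, 0],
    [0, 5, 1, 0],
    [0, 5, 1, 0],
    [0, 0, 0, 0],
]

def projection_theta0(img):
    """P_{theta=0}(r): integrate along y, i.e. sum each column."""
    return [sum(row[c] for row in img) for c in range(len(img[0]))]

def projection_theta90(img):
    """P_{theta=90}(r): integrate along x, i.e. sum each row."""
    return [sum(row) for row in img]

p0 = projection_theta0(phantom)    # [0, 10, 2, 0]
p90 = projection_theta90(phantom)  # [0, 6, 6, 0]
# Every projection integrates the same object, so the total "mass"
# is identical at every angle.
assert sum(p0) == sum(p90)
```

This conservation of the total sum across angles is a useful sanity check when assembling real projection data into a sinogram.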
Figure 2.3: Line integrals of an object (adopted from [12])
2.2.2 Fourier Slice Theorem
The Fourier slice theorem tells us that the Fourier transform of the projection of a
two-dimensional function f (x, y) onto a line is equal to a slice through the origin of the
two-dimensional Fourier transform of that function which is parallel to the projection
line.
The following description is inspired by [13]. Let us define the 2D Fourier transform of the function f(x, y) as

F(u, v) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x, y) e^{−j2π(ux+vy)} dx dy    (2.5)
Figure 2.4: Fourier slice theorem (adopted from [13])

The Fourier transform of the projection at angle θ is defined as

Sθ(w) = ∫_{−∞}^{+∞} Pθ(r) e^{−j2πwr} dr    (2.6)
The simplest example is for θ = 0. First, consider the Fourier transform of the object along the line in the frequency domain given by v = 0. This simplifies equation 2.5 to

F(u, 0) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(x, y) e^{−j2πux} dx dy    (2.7)
As the phase factor no longer depends on y, the integral can be split:

F(u, 0) = ∫_{−∞}^{+∞} [ ∫_{−∞}^{+∞} f(x, y) dy ] e^{−j2πux} dx    (2.8)
The expression in the brackets is the equation for a projection along lines with constant x:

Pθ=0(x) = ∫_{−∞}^{+∞} f(x, y) dy    (2.9)
Substituting expression 2.9 into 2.8, we get

F(u, 0) = ∫_{−∞}^{+∞} Pθ=0(x) e^{−j2πux} dx    (2.10)
The right-hand side is the 1D Fourier transform of the projection Pθ=0. This gives us the following relationship between the vertical projection and the 2D transform of the object function:

F(u, 0) = Sθ=0(u),    (2.11)
which is the simplest form of the Fourier slice theorem. If we had a sufficient number of projections, took their Fourier transforms, composed the 2D picture in the frequency domain, and took the inverse Fourier transform of that picture, we should obtain a reconstruction of the original object. In practice, however, this direct approach works poorly for numerical reasons: the projection transforms sample the frequency domain on a polar grid, which must be interpolated onto a Cartesian grid. Instead, the filtered back-projection algorithm is used in almost all straight-ray applications.
Figure 2.5: Back projection from different numbers of projections: from the top left corner 1, 2, 4, 10, 180. The last image is the original.
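The slice theorem can be checked numerically with a naive DFT, the discrete counterpart of equations 2.5-2.11 (a small Python sketch with a made-up 3×3 object; this is a demonstration of the theorem, not any of the project's own code):

```python
import cmath

def dft1(seq):
    """Naive 1D DFT of a sequence."""
    n = len(seq)
    return [sum(seq[x] * cmath.exp(-2j * cmath.pi * u * x / n)
                for x in range(n)) for u in range(n)]

def dft2_row_v0(f):
    """Row v = 0 of the 2D DFT of f[x][y], i.e. F(u, 0) in eq. 2.7."""
    nx, ny = len(f), len(f[0])
    return [sum(f[x][y] * cmath.exp(-2j * cmath.pi * u * x / nx)
                for x in range(nx) for y in range(ny)) for u in range(nx)]

f = [[1, 2, 0], [0, 3, 1], [4, 0, 0]]    # small test "object"
projection = [sum(row) for row in f]      # P_{theta=0}(x), eq. 2.9
slice_F = dft2_row_v0(f)                  # F(u, 0)
slice_S = dft1(projection)                # S_{theta=0}(u)
# Equation 2.11: the frequency slice equals the projection's transform.
assert all(abs(a - b) < 1e-9 for a, b in zip(slice_F, slice_S))
```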
2.2.3 Filtered back-projection algorithm for parallel beam

The filtered back-projection algorithm is derived by rewriting the Fourier slice theorem. A rigorous derivation can be found in [14]. We are going to present a more intuitive approach here.
Figure 2.6: Filter influence on the back projection result: the original picture (left), filtered
back projected reconstruction (middle), unfiltered back projection (right).
The reconstruction process is done in the following way. First, all the projections are filtered in the frequency domain; otherwise we would get rather blurry images (see figure 2.6). After filtering, each projection is smeared back (back projected) over the entire image plane. The next projection is rotated by the appropriate angle, back projected over the entire image plane, and added to the first one. After completing the procedure for all projections at all angles, the reconstruction emerges. This is illustrated in figure 2.5: in the first picture, there is just one projection back projected over the entire image plane; the second picture shows the sum of two projections (one at 0 degrees, the other at 90 degrees). The effect of adding more projections can be observed.
A set of projection data for a given slice through an object is called a sinogram.
It is visualized as an image where the number of lines corresponds to the number of
projections taken. Each line then represents a projection at a given angle. The number
of columns is equal to the width of the detector. The reconstruction of a slice is
performed from the data recorded in a sinogram.
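The smearing step itself can be sketched for just two axis-aligned projections (a toy Python illustration of the accumulation only; a real implementation filters each projection first and interpolates over many rotated angles):

```python
# Minimal sketch of the back-projection (smearing) step for the two
# axis-aligned angles only.
def backproject_two_views(p0, p90):
    """Smear the theta=0 projection along columns and the theta=90
    projection along rows, summing the two contributions per pixel."""
    n = len(p0)
    return [[p0[x] + p90[y] for x in range(n)] for y in range(n)]

p0 = [0, 10, 2, 0]    # projection at 0 degrees (column sums)
p90 = [0, 6, 6, 0]    # projection at 90 degrees (row sums)
image = backproject_two_views(p0, p90)
# Even after only two views, the largest accumulated value sits where
# both smears overlap, roughly locating the object.
```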
2.2.4 Fan beam reconstruction algorithm
There is an algorithm which converts fan beam projection data into parallel beam projection data, so that the previously derived procedure for reconstructing an image from parallel beam data can be used. The difference between parallel beam projections and fan beam projections is shown in figure 2.7. We consider the fan beam geometry with equally spaced collinear detectors. The algorithm, called re-sorting, is described in [14].
Figure 2.7: Parallel beam and fan beam geometry (adopted from [14])
2.2.5 Applications of computerized tomography

2.2.5.1 Medicine
Computed tomography has found its widest application in medicine. CT, together with other tomographic methods such as positron emission tomography (PET)¹ and magnetic resonance imaging (MRI)², is used to obtain detailed cross-sectional images of parts of the human body. Since its introduction in the 1970s, CT has been used in the diagnosis of a number of different diseases.
Figure 2.8: CT image of a human brain
¹ Based on detecting positrons emitted by a radioisotope which was injected into the body on a metabolically active molecule.
² Images are reconstructed based on signals from hydrogen nuclei; a powerful magnetic field and radio waves are used to make the nuclei precess.
CT images of the chest are a good tool to diagnose changes in lung tissue which can indicate pneumonia, cancer, or pulmonary embolism. Images of the heart are more difficult to acquire because of the contractions of the heart; however, together with information from the ECG, it is possible to acquire one, at the cost of a relatively high exposure dose. It can detect coronary artery disease. CT is also used in the area of the abdomen and pelvis. The images can show kidney stones, pancreatitis, appendicitis, bowel obstruction, and the stage of cancer.
Figure 2.9: CT scanner for medical purposes
2.2.5.2 Other applications
Metal castings tomography To assess the quality and possible defects in metal castings, computed tomography can be used to monitor the progression of solidification. Flaws such as porosity and segregation can be detected.
Two phase flow measurements For single phase flow, there is a good theoretical background: the flow regimes are laminar, turbulent, and the transition between them. However, two phase flow (gas and liquid) is not yet an area where flow parameters can be predicted theoretically. Tomographic techniques can be used to identify flow regimes [16].
Archeology Computed tomography has already been recognized by some archaeologists and museum curators as an efficient tool for the nondestructive study of archaeological artifacts.
Material science X-ray CT reveals differences in density and atomic composition and can therefore be used for the study of porosity, the relative distribution of contrasting solid phases, and the penetration of injected solutions. As a non-destructive technique, it is ideally suited for monitoring processes such as the movement of solutions and the behavior of materials under compression. Because large numbers of parallel two-dimensional cross-sections can be obtained, three-dimensional representations of selected features can be created. X-ray CT finds applications in a wide range of disciplines, including petrology, soil science, petroleum geology, geomechanics, and sedimentology [17].
Chapter 3
Solution
3.1 Block scheme
The system built was an instrument for performing computed tomography. It consists
of four main parts:
• X-ray source (tube)
• rotation stage
• X-ray detector
• computer
The X-rays emitted by the tube pass through the object under investigation, and the intensity of the beam is attenuated depending on the thickness of the object. The transmitted signal is detected by the detector and the data are sent to a computer, where they are stored in a file. Then the object is rotated by a single step and a new image is acquired. After a rotation of about 360 degrees, the measurement is complete.
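The acquisition loop just described can be sketched as follows (a Python sketch; capture_image and rotate_one_step are hypothetical stand-ins for the real capture program and rotation stage command, and the step count is illustrative):

```python
def acquire_projections(n_steps, rotate_one_step, capture_image):
    """Acquisition loop: capture a projection, rotate one step, and
    repeat until the sample has turned through a full revolution."""
    projections = []
    for _ in range(n_steps):
        projections.append(capture_image())
        rotate_one_step()
    return projections

# Toy stand-ins so the loop can be exercised without any hardware:
angle = [0.0]

def rotate_one_step():
    angle[0] += 1.0          # a one-degree step, as in our measurement

def capture_image():
    return ("frame", angle[0])   # placeholder for a detector frame

frames = acquire_projections(360, rotate_one_step, capture_image)
```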
The next step is the processing of the data. The goal is to get a cross-section of the object at a given height. After enhancing the quality of the acquired images (offset and open-beam corrections), the sinograms are calculated; one sinogram is necessary for each slice (cross-section) of the object. From the sinograms, the final images (slices) are computed using the MATLAB function iradon.
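The offset and open-beam corrections mentioned above follow, in their standard form, a per-pixel normalization; a minimal Python sketch (the pixel values are invented, and the exact variant used by the project's scripts may differ):

```python
def flat_field_correct(raw, dark, flat, eps=1e-9):
    """Standard per-pixel offset and open-beam correction:
    corrected = (raw - dark) / (flat - dark).
    Values near 1 mean no attenuation; near 0, strong attenuation."""
    return [[(r - d) / max(f - d, eps)
             for r, d, f in zip(rr, dr, fr)]
            for rr, dr, fr in zip(raw, dark, flat)]

raw = [[60, 110], [35, 10]]      # image with the object in the beam
dark = [[10, 10], [10, 10]]      # offset image (no X-rays)
flat = [[110, 110], [110, 110]]  # open-beam image (X-rays, no object)
corrected = flat_field_correct(raw, dark, flat)
# corrected[0][1] is 1.0: that pixel saw the unattenuated beam.
```

Taking −ln of the corrected value then yields the attenuation line integral needed for the sinogram (compare equation 2.1).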
Figure 3.1: Block scheme of the system
3.2 Used devices

3.2.1 Shad-o-Box X-ray camera
The Shad-o-Box X-ray camera was used to detect the X-ray radiation. It is based on a CMOS photodiode sensor which has a resolution of 10 line pairs per millimeter in a 2000 by 2048 pixel array. The sensor is produced for two energy ranges (10-50 kV and 10-160 kV); we used the extended version for up to 160 kV. The digital output is 12 bits. The area of the sensor covers a square with a side of 10 cm.
The CMOS sensor inside the Shad-o-Box camera contains a direct-contact Gd2O2S scintillator. The scintillator converts X-ray photons into visible light that is sensed by the CMOS photodiodes. A thin graphite cover protects the sensor from accidental damage as well as from ambient light. The Shad-o-Box camera also contains lead and steel shielding to protect the camera electronics from the X-ray radiation [1].
The image area of the sensor is scanned through eight parallel channels. The row scan starts at the center of the active area and proceeds simultaneously toward the top and the bottom. Each line is scanned in eight sections: the four bottom sections scan left to right, and the top sections scan right to left. All eight sections are scanned in parallel and then interleaved for transmission. The data must be deinterlaced in software to restore the image [1].

Figure 3.2: Shad-o-Box X-ray camera [1]
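The deinterlacing principle can be sketched as follows; note that this Python sketch assumes a simple round-robin interleaving for illustration, whereas the real Shad-o-Box channel layout (center-outward rows, mirrored section order) is given in [1] and figure 3.10:

```python
# Illustrative deinterlacer: assumes a plain round-robin interleaving
# of n channels, which is NOT the exact Shad-o-Box layout.
def deinterlace_round_robin(samples, n_channels):
    """Split a flat, channel-interleaved sample stream back into
    per-channel sequences: sample i belongs to channel i % n_channels."""
    return [samples[c::n_channels] for c in range(n_channels)]

stream = list(range(16))                    # 16 interleaved samples
channels = deinterlace_round_robin(stream, 8)
# channels[0] holds samples 0 and 8; channels[7] holds samples 7 and 15.
```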
Key parameters:
• 10 × 10 cm sensor
• 2000 by 2048 pixels
• 12-bit digitization
• 10 lp/mm resolution
• 10-160 kV energy range
• pixel spacing 0.048 mm
3.2.2 PXD1000 frame grabber
The X-ray camera was connected to a digital frame grabber, a device that acquires images from the sensor and provides the data to a computer through the computer's PCI bus. The PXD1000 frame grabber was used.
Here are some important features of the frame grabber [4]:
• 32 bit parallel interface
• synchronization to timing information provided by the camera or generation of
the timing signals for the camera
• exposure control of the camera
• configurable to work with a variety of different cameras
Figure 3.3: PXD1000 digital frame grabber (adopted from http://www.imagenation.com/)
Frame Grabber Libraries
It is possible to develop custom applications using the frame grabber. There are libraries for Windows-based systems as well as for DOS/4GW. They are intended for use with the C programming language. The libraries are supplied together with the frame grabber; there is no need to purchase them separately.
Library ilib32.lib is a small utility that manages the dynamic loading of the following two DLLs.
Frame grabber library pxd_32.dll provides functions for controlling the hardware settings of the frame grabber, such as timing, camera modes, strobes, and triggers. It also sends the signal to start capturing images.
Frame library frame_32.dll provides functions for manipulating images. It can copy, load, save, and examine data in the computer's memory.
File Pxd1000.dll is the firmware that controls the PXD1000 frame grabber.
Configuration file xxx.cfg contains the settings of a particular camera model [4].
All library functions are described in detail in [4].
3.2.3 Rotation stage
For rotating a sample, a rotation stage by Thorlabs was used. It is a worm-driven device controlled by a T-Cube DC Servo Controller (TDC001) from the same company. The controller is connected to a computer via a USB interface. It comes with its own software, called APT User, in which the motion can be easily controlled. It also provides ActiveX components with programmable interfaces that can be incorporated into client applications (e.g. in MATLAB).
Here are some basic technical data:
• Continuous 360◦ rotation
• Optical encoder with 12,288 pulses/revolution
• Vertical load 11 kg (25 lbs)
• Minimum incremental motion 2.19 arc sec
• Speed range: 22 arc sec/sec to 6◦/sec
3.2.4 X-ray tube
Figure 3.4: Rotation stage with the servo controller

Since there was no separate X-ray tube available that we could use solely for our device, we used the X-ray tube from the tomographic scanner BT-400, which is installed in our department. It is an Isovolt 420/5, a metal-ceramic tube with an oblique anode and a beryllium window. The tube has a double focus and an oil-cooled anode. Even though the maximum voltage of the tube is 420 kV, we could use at most 160 kV because the X-ray camera's range was limited to this energy.
Here are some basic technical data [6]:
• Maximum voltage: 420 kV
• Tube current at max. tube voltage: 5.3 mA (large focal spot), 2.3 mA (small
focal spot)
• Focal spot size (IEC 336): 1.5 mm (large), 0.8 mm (small)
• Emergent beam angle: 20◦ × 40◦
• Outer dimensions: 798 mm long, 362 mm in diameter
• Weight: 75 kg
As with all X-ray tubes, the radiation emitted from the Isovolt 420/5 X-ray tube is not monochromatic and consists of a broad spectrum of energies. Figure 3.6 shows a simulated spectrum of the tube at 100 kV, obtained by the Monte Carlo method (code GEANT4) at the Department of Dosimetry and Application of Ionizing Radiation of the Faculty of Nuclear Sciences and Physical Engineering in Prague. The simulation is based on the parameters of the tube.
Figure 3.5: Isovolt 420/5 X-ray tube [6]
3.3 System geometry
The instrument was built within the industrial tomographic scanner BT-400, since we had to use its X-ray tube. To mount the rotation stage and the detector, we built an aluminium frame. The distance between the X-ray tube focal spot and the center of rotation was 72.5 cm; the distance between the rotation stage and the detector was 5 cm.
3.4 Sample rotation
The principle of computed tomography is to take projections of the sample at many different angles. To achieve this, either the sample or the source and the detector are rotated. In our setup, the sample was rotated using the CR1/M-Z6 rotation stage together with the TDC001 servo controller by Thorlabs. The controller was connected to a computer using the USB interface. We could not use the supplied software for controlling the motion because we needed to acquire the images after each step. Therefore, we used the ActiveX components to control the motion of the rotation stage directly from MATLAB.
First, the script j_init is run to initialize the ActiveX component. Then it is possible to use a wide range of the available functions. The following table lists the functions we used.
Figure 3.6: Simulated X-ray tube spectrum at 100 kV
Function                   Description
SetJogStepSize             Sets the size of one jog
GetJogStepSize_StepSize    Returns the actual step size
MoveJog                    Makes one jog
MoveAbsoluteRot            Rotates to an absolute position
GetPosition_Position       Returns the actual position
The following MATLAB code will initialize the rotation stage control using script
j_init and rotate it by 5 degrees.
channel = 0;            % controller channel number
left = 1;               % jog direction
h = j_init();           % initialize the ActiveX component
h.SetJogStepSize(channel, 5); pause(0.1);   % set the jog step to 5 degrees
h.MoveJog(channel, left); pause(0.1);       % perform one jog
Figure 3.7: Detail of the tomography system
3.4.1 Accuracy of the movement
The accuracy of the rotation stage movement was examined using the function
GetPosition_Position, which returns the actual position of the rotation stage; the
measurement is based on an optical encoder. We needed to make 360 steps of one
degree each. We set the step size to 1, made 360 steps, and measured the position
change after each step. The mean step size was 0.9942 degree with a standard deviation
of 0.0013. When we set the step size to 1.0055, the mean was 0.9999 degree with a
standard deviation of 0.0013, which yields an accuracy of 0.01 %. Thus, we used the
step size of 1.0055 for all measurements.
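As a cross-check, the step statistics can be recomputed from the logged positions. The sketch below is illustrative Python (the project itself logged positions in MATLAB via GetPosition_Position); the sample log is synthetic, not our measurement data.

```python
# Estimate step-size statistics from a log of rotation-stage positions.
def step_statistics(positions):
    """Return (mean, sample standard deviation) of consecutive steps."""
    steps = [b - a for a, b in zip(positions, positions[1:])]
    n = len(steps)
    mean = sum(steps) / n
    var = sum((s - mean) ** 2 for s in steps) / (n - 1)
    return mean, var ** 0.5

# Hypothetical log of a stage that under-rotates: each commanded
# 1-degree jog moves only 0.9942 degree. Scaling the commanded size
# by roughly 1/0.9942 compensates; empirically 1.0055 gave 0.9999.
positions = [i * 0.9942 for i in range(6)]
mean, std = step_statistics(positions)
```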
3.5 Image capturing
Figure 3.8: Accuracy of the step size (step size [degree] vs. step count; commanded step 1.0055 degree, mean 0.9999, standard deviation 0.0013)

The Shad-o-Box X-ray camera was supplied with Windows software called ShadoCam.
It provides a simple user interface to capture, display, and save images from the
camera. There are several parameters which can be set: model of the camera, length of
the exposure (integration time), averaging over multiple images, offset and flat field corrections, pixel map correction, and so on. The software communicates with the frame
grabber and outputs processed data which correspond to an X-ray image (a radiograph) of
the examined object.
However, the program requires user interaction: there is neither a command line
mode nor a batch mode. For our purposes, it was necessary to acquire a large number
of images without any user interaction, so the ShadoCam software was not suitable for
this task. Because no other supplied utility could do this, we had to write a simple
program ourselves. We called it j_capture.
Requirements on j_capture:
• Run in command line mode
• Capture data from the frame grabber
• Deinterlace the acquired data
• Accept a parameter from the command line which represents the name of the file to save the image to
• Save an acquired image on the hard drive
Figure 3.9: j_capture flowchart
The program was written in the C language using the Dev-C++ development
environment, released under the GNU license (see www.bloodshed.net). To control the
frame grabber, we used the libraries supplied with it (described in section 3.2.2 on
page 16). After the data is loaded from the frame grabber, it is necessary to deinterlace
the pixels. A function to do this is provided by the Rad-Icon company, which produces
the Shad-o-Box X-ray camera; it is part of the ShadoCam Imaging Library, which must
be purchased separately for around 475 USD. Using the information in the Shad-o-Box
documentation, we coded the function ourselves. We also needed a configuration file which was
necessary for the proper function of the frame grabber together with the X-ray camera.
Mr. Schmid-Fabian from Rad-Icon Germany kindly sent us the file.
The X-ray tube was equipped with a collimator for the purposes of the BT-400
tomographic scanner, which restricted the radiation to a narrow beam. Thus, the flat
panel detector was only partly illuminated: of its 2048 × 2000 pixels, only an area
of 276 × 2000 was hit by the X-rays. To save hard drive space, we did not save the
whole area of the detector but only this narrow strip.
Program j_capture accepts two command line parameters. The first one is required
and is the file name. The second one is optional and is either bin or empty. If empty,
the acquired image is saved in PNG format; if bin is given, the image is saved as
raw data. The latter option is much faster.
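For downstream tools it helps to state the assumed raw layout: a headerless dump of the 276 × 2000 strip as unsigned 16-bit values; the little-endian byte order below is our assumption, not documented here. An illustrative Python reader:

```python
import struct

ROWS, COLS = 276, 2000  # the illuminated strip of the 2048 x 2000 panel

def load_raw(path, rows=ROWS, cols=COLS):
    """Read a headerless little-endian uint16 image into a list of rows."""
    with open(path, "rb") as f:
        data = f.read()
    expected = rows * cols * 2
    if len(data) != expected:
        raise ValueError("expected %d bytes, got %d" % (expected, len(data)))
    flat = struct.unpack("<%dH" % (rows * cols), data)
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]
```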
3.5.1 Pixel deinterlacing
In the Shad-o-Box 4K camera, the individual channels are scanned in parallel and
multiplexed for transmission to the PC. This maximizes the camera’s frame rate without
requiring additional bulky interface cables for the extra channels. It also means that
the information that arrives in the image buffer is scrambled. The pixel deinterlacing
function must unscramble the image buffer so that all the pixels are in the correct place
in the image [2].
The Shad-o-Box 4K camera contains eight 512-column by 1024-row sensors arranged
in a 2x4 matrix. The imagers are separated by one horizontal and three vertical 100 µm
gaps. The data from the eight camera channels are interlaced together as shown in the
figure below. Note that the upper row of sensors (ch. 4-7) reads out in the opposite
direction from the bottom row [2].
Following the information above, we coded a simple loop in j_capture which deinterlaced the image.
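As an illustration of the unscrambling, here is a toy Python model with tiny 4 × 3 tiles. The round-robin channel multiplexing and the reversed top-row readout below are assumptions made for the sketch, not the documented Shad-o-Box pixel order:

```python
# Toy model of the multiplexed readout: eight tiles in a 2 x 4 matrix,
# samples from all eight channels interleaved round-robin in the buffer.
# Tile sizes are tiny stand-ins for the real 512 x 1024 sensors.
TILE_COLS, TILE_ROWS = 4, 3

def deinterlace(buf, tc=TILE_COLS, tr=TILE_ROWS):
    """Unscramble a channel-interleaved buffer into a 2*tr x 4*tc image."""
    h, w = 2 * tr, 4 * tc
    out = [[0] * w for _ in range(h)]
    for k in range(tr * tc):
        r, p = divmod(k, tc)
        for c in range(8):
            v = buf[8 * k + c]
            if c < 4:                          # bottom row of tiles
                out[tr + r][c * tc + p] = v
            else:                              # top row, reversed readout
                out[tr - 1 - r][(c - 4) * tc + (tc - 1 - p)] = v
    return out

def interlace(img, tc=TILE_COLS, tr=TILE_ROWS):
    """Inverse mapping; used here only to generate test buffers."""
    buf = [0] * (8 * tr * tc)
    for k in range(tr * tc):
        r, p = divmod(k, tc)
        for c in range(8):
            if c < 4:
                buf[8 * k + c] = img[tr + r][c * tc + p]
            else:
                buf[8 * k + c] = img[tr - 1 - r][(c - 4) * tc + (tc - 1 - p)]
    return buf
```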
3.5.2 Correct exposure
Figure 3.10: Shad-o-Box channel layout and pixel deinterlacing (adopted from [2])

In order to get well exposed images which are neither saturated nor all black, the correct
exposure has to be determined so that the image captures all details of the sample. The
exposure consists of three adjustable parameters:
• integration time (set on the flat panel)
• voltage (set on the X-ray tube)
• current (set on the X-ray tube)
The integration time is the number of seconds that the photo diodes in the sensor
array collect charge. The period can vary from 0.367 sec to 6.7 sec. The shortest period
is given by the limitation of the Shad-o-Box X-ray camera, the longest is the limitation
of the PXD1000 frame grabber. The time is set manually in the configuration file of
the camera SB4KPX.cam. There is a line
1.2,    //frame_period
where the number in the beginning of the line is the integration time in seconds.
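Such a line can also be read back programmatically; a small Python sketch, assuming only the quoted line format (the rest of the .cam file layout is not used here):

```python
def frame_period(cam_text):
    """Extract the integration time in seconds from a line of the form
    '1.2,    //frame_period' in SB4KPX.cam-style configuration text."""
    for line in cam_text.splitlines():
        if "frame_period" in line:
            return float(line.split(",")[0].strip())
    raise ValueError("no frame_period line found")

t = frame_period("1.2,    //frame_period")  # 1.2
```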
Both voltage and current are set on the X-ray tube control panel and influence
the characteristics of the emitted X-ray radiation. The current controls the number of
emitted photons, which (together with the integration time) determines the brightness
of the image. The voltage controls the penetration of the X-rays, as it determines the
energy range of the radiation; it can be related to the contrast of the image.
As the detector uses 12-bit digitization, the output values range from 0 to
2¹² − 1 = 4095, where 0 means black and 4095 white. To check the exposure, histograms¹
of the acquired images were examined before each measurement. We usually took
one projection of the sample in the direction with the most material and set the
parameters so that the exposure was correct.
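The visual histogram check can be automated; an illustrative Python sketch, where the range limits and tolerance are example values, not fixed project parameters:

```python
# Flag images whose pixel values clip at the 12-bit limits or fall
# mostly outside a chosen "safe" intensity range.
def exposure_ok(pixels, lo=700, hi=3500, max_clipped=0.001):
    """Return True if almost no pixels clip and >95 % lie in [lo, hi]."""
    n = len(pixels)
    dark = sum(1 for v in pixels if v == 0)
    sat = sum(1 for v in pixels if v >= 4095)
    if (dark + sat) / n > max_clipped:
        return False
    inside = sum(1 for v in pixels if lo <= v <= hi)
    return inside / n > 0.95

ok = exposure_ok([2000] * 100)       # True: well inside the range
bad = exposure_ok([4095] * 100)      # False: saturated
```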
Figure 3.11: Flat panel linearity (average pixel intensity of an open-beam image for different exposure times at 120 kV, with a fitted line)
We were interested in the linearity of the detector, i.e. whether the pixels respond
linearly over the whole range of the detector, from zero intensity to saturation.
Figure 3.11 shows the average pixel intensity at 120 kV as a function of the integration
time. It can be observed that the detector is linear from about 700 to about 3500 on
the intensity axis. In all measurements, we tried to stay inside this range to get the
best possible results.
¹ A histogram is a graphical display of the frequencies of particular values in the image: on the x axis there are intensities, and on the y axis the numbers of pixels with a given intensity.
Figure 3.12: Histogram of a correctly exposed image (a thumbnail of the image is included)
3.5.3 Image corrections
The Shad-o-Box X-ray detector which we used is based on CMOS technology. As
opposed to CCDs, CMOS active pixel sensors are less linear in their response. As the
detector measures a change in voltage corresponding to the signal change directly in
the pixel (by means of a source-follower FET), any non-linearity in the transfer curve
of the amplifier is carried into the overall gain response of the detector [9].
To decrease the influence of these non-linearities on the resulting picture, it is
necessary to employ pixel correction methods. The standard gain correction consists
of two steps – a dark offset calibration and a gain correction (also referred to as
flat-field or open-beam correction). The first compensates for readout noise, ADC
offset, and dark current. The dark offset image is taken before or after the tomography
measurements with no radiation applied and with the same integration time as the one
used during the measurements. The dark offset image is simply subtracted
from each acquired image.
The flat-field images are obtained at some arbitrary input signal level (usually
around 50 % of the saturation signal, but for best results it should be similar to the
signal levels expected in the final images). Each final image is divided by the flat-field
image and multiplied by the average value of the flat-field image. The procedure to
derive the technique is outlined in [9].

Figure 3.13: Flat-field image
Since every final image is processed using the dark offset and flat-field image, every
imperfection or error in those images will show in all the images (resulting in ring
artefacts, for example). Thus, it is very advisable to take many calibration images (10
to 20) and then calculate their average to reduce noise and statistical errors.
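Per pixel, the whole correction amounts to: subtract the averaged dark offset, divide by the dark-subtracted flat-field, and rescale by the flat-field mean so the overall brightness is preserved. A Python sketch of that arithmetic (the project's actual implementation is the MATLAB script j_norm described later):

```python
def flat_field_correct(img, dark, flat):
    """Dark-offset and flat-field (open-beam) correction for one image.
    img, dark, flat are flat lists of pixel values of equal length;
    dark and flat should already be averages of 10-20 frames."""
    gain = [f - d for f, d in zip(flat, dark)]
    scale = sum(gain) / len(gain)          # keep the original brightness
    return [(v - d) / g * scale if g != 0 else 0.0
            for v, d, g in zip(img, dark, gain)]
```

With a dark level of 10 and flat-field gains differing by a factor of two, two pixels seeing the same signal fraction come out equal after correction.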
Figure 3.14: Comparison of a corrected image to the original
3.5.4 Spot filtering
There are some pixels in the detector which are defective – their output level is either
saturated, dark or very different from all the surrounding pixels. Such pixels cause ring
artefacts in the final reconstruction.
We have implemented a spot filter to filter out these pixels. Since it consumes a lot
of computation time, we used it to filter only the flat-field images which leads to good
results in removing the ring artefacts.
Figure 3.15: Ring artefacts: reconstruction without and with spot-filtered flat-field correction
The spot filter is a modified median filter. Whereas a median filter acts on all pixels
and replaces each with the median value of the pixels in some surroundings (typically
3 × 3), we implemented a thresholded median filter. It replaces a pixel value only if it
differs from the mean of the surrounding pixels by more than a constant times the
square root of that mean. This performs better than the plain median filter. Based on
experiments, the constant was set to 0.01.
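In outline, the filter works as follows (a Python transcription of the j_median logic from the appendix; unlike j_median, this sketch writes into a copy of the image rather than filtering in place):

```python
import math

def spot_filter(img, k=0.01):
    """Thresholded median filter: replace a pixel by the median of its
    3x3 neighbourhood only if it deviates from the neighbourhood mean
    by more than k * sqrt(mean). Border pixels are left untouched."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = [img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            m = sum(win) / 9.0
            if abs(img[y][x] - m) > k * math.sqrt(m):
                out[y][x] = sorted(win)[4]   # median of the 9 values
    return out
```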
Figure 3.16: Original image and spot-filtered image
3.5.5 Beam hardening
The fact that the radiation is not monochromatic can lead to an image degradation
called beam hardening. It stems from the fact that the attenuation of the radiation in
the material depends on the wavelength of the radiation: longer wavelengths (softer
radiation) are attenuated more easily than shorter ones. The effect leads to
beam-hardening artefacts and a cupped appearance.

In our results, we did not find any significant beam hardening artefacts, so we did not
implement any correction. We carried out a few measurements with an aluminium wedge
to show how the attenuation depends on the thickness of the material. We found
that it is almost a perfect exponential (see figure 3.17).
Figure 3.17: Average attenuation of the radiation depending on the thickness of the material
However, the figure shows the average values of all pixels. Further work and
measurements are necessary to implement a filter which would correct beam hardening
effects for individual pixels. We did not implement it because it is a complex problem
and we did not have enough time for it.
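The exponential behaviour is what the Beer-Lambert law I(t) = I₀ exp(−µt) predicts, and an effective attenuation coefficient can be read off wedge data by a log-linear fit. A Python sketch on synthetic data (the I₀ and µ values below are illustrative, not our measured ones):

```python
import math

def fit_attenuation(thicknesses, intensities):
    """Least-squares fit of ln(I) = ln(I0) - mu * t; returns (I0, mu)."""
    xs, ys = thicknesses, [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return math.exp(my - slope * mx), -slope

# Synthetic wedge measurement: I0 = 3000 counts, mu = 0.5 per cm
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
Is = [3000.0 * math.exp(-0.5 * t) for t in ts]
I0, mu = fit_attenuation(ts, Is)
```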
3.6 Image reconstruction
After images over the whole cycle (360 degrees) have been acquired, the image
reconstruction can be carried out. The reconstruction is based on the Radon transform
and filtered back-projection as described in section 2.2.3. It uses the MATLAB function
iradon, which is part of the Image Processing Toolbox.
3.6.1 Fan beam geometry
Before iradon can be applied, the measured data must be converted from the fan
beam geometry to the parallel beam geometry. The reason is the input requirements
of the iradon function: it calculates reconstructions from parallel beam projections,
whereas we acquire the data using a fan beam because the source of the radiation is
a small spot of the X-ray tube. The difference is clarified in section 2.2.4.
To convert the sinogram which we acquired using the fan beam, script fan2para was
used. It is part of the Image Processing Toolbox of MATLAB. The most important
parameter of the function is the distance between the center of rotation (the sample)
and the source of the radiation, which determines the geometry of the system. The
parameter is given in pixels, so to calculate it, we need to know the pixel size of the
detector. As the detector is 100 mm wide and contains 2000 pixels, 1 pixel corresponds
to 0.05 mm.
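With the numbers above, the distance parameter for fan2para works out as follows (a small Python check, using the 72.5 cm source-to-rotation-center distance from the setup description):

```python
# Source-to-center distance expressed in detector pixels, as required
# by MATLAB's fan2para. Numbers are taken from the text.
detector_width_mm = 100.0
detector_pixels = 2000
pixel_size_mm = detector_width_mm / detector_pixels   # 0.05 mm

source_to_center_mm = 725.0                           # 72.5 cm
d_pixels = source_to_center_mm * detector_pixels / detector_width_mm
# d_pixels == 14500.0
```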
Figure 3.18: Reconstruction from unconverted data (left) and from fan beam data
transformed to parallel beam data (right)
3.6.2 Center of rotation calculation
The next step is to determine the position of the center of rotation in the images. It
is essential that the center of rotation of the sample is perfectly aligned with the center
of the detector (image). Best effort is made to align the physical geometry of the system
so that the rotation stage axis is in the middle of the detector; however, this is hard to
achieve with pixel accuracy. It is better to use an algorithm which calculates the center
of rotation from the measured data.
Figure 3.19: The influence of the center of rotation position. On the left: center of rotation
wrongly placed by 12 pixels. On the right: Correct position of the center of
rotation.
Script j_cor finds the center of rotation from a sinogram. Lines 1 and 181 of the
sinogram correspond to the projections at angles of 0 and 180 degrees. These
projections are identical but mirrored about the axis of rotation. The first is flipped
back and then shifted along the second, and the differences between the points are
calculated, which gives a set of values with a significant minimum. This is where the
centers of rotation of both images coincide [7].
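The idea can be sketched in Python as follows; the search window and the shift-to-position conversion here are simplified relative to the actual j_cor listing in the appendix:

```python
def find_center(row0, row180, max_shift=20):
    """Estimate the center of rotation (pixel position) from the
    0-degree and 180-degree projections, which are mirror images of
    each other about the center of rotation."""
    n = len(row0)
    flipped = row180[::-1]
    best_s, best_d = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # compare row0[i] with flipped[i - s] on the overlapping part
        lo, hi = max(0, s), min(n, n + s)
        d = sum(abs(row0[i] - flipped[i - s])
                for i in range(lo, hi)) / (hi - lo)
        if d < best_d:
            best_s, best_d = s, d
    # flipped[j] = row0[2c - (n-1) + j] for center c, so s = 2c - (n-1)
    return (n - 1 + best_s) / 2.0
```

For a toy 11-pixel detector with the object mirrored about pixel 6, the best shift recovers exactly that center.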
3.6.3 Filtered back-projection
After all three previous preparatory steps have been performed, the sinogram is passed
to the iradon function which calculates the final reconstruction. The algorithm is as
follows:
1. Projections are filtered. This is an essential step, as reconstructions without
filtering would appear blurred. There is a wide range of filters to choose
from: Ram-Lak, Shepp-Logan, Hamming, Hann, and none. They differ in speed
and efficiency. The filter is designed directly in the frequency domain and then
multiplied by the Fourier transform of the projections. The projections are
zero-padded to a power of 2 before filtering to prevent spatial domain aliasing and
to speed up the Fourier transform. The ideal filter is a ramp filter, which is not
physically realizable, so its cropped version (Ram-Lak) is used. As the Ram-Lak filter
is sensitive to noise in the projections, we used the Hamming filter, which multiplies
the Ram-Lak filter by a Hamming window [10].
2. Backprojection. The filtered projections are backprojected to obtain the final
reconstruction. This is done in a loop over the rotation angles. Interpolation is
necessary during backprojection; we used linear interpolation because of its
speed.
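Both steps can be condensed into a short sketch. The Python below uses a plain spatial-domain Ram-Lak kernel and nearest-neighbour backprojection, which is simpler than the actual iradon path (frequency-domain design, Hamming window, linear interpolation) and is meant only to show the structure of the algorithm:

```python
import math

def ramp_kernel(n):
    """Spatial-domain Ram-Lak kernel: 1/4 at the center,
    -1/(pi*i)^2 at odd offsets, 0 at even offsets."""
    h = [0.0] * (2 * n + 1)
    for i in range(-n, n + 1):
        if i == 0:
            h[i + n] = 0.25
        elif i % 2:
            h[i + n] = -1.0 / (math.pi * i) ** 2
    return h

def fbp(sinogram, angles_deg):
    """Filtered back-projection of a parallel-beam sinogram
    (rows = projections, cols = detector bins)."""
    nbins = len(sinogram[0])
    half = (nbins - 1) / 2.0
    h = ramp_kernel(nbins)
    # 1. filter each projection by convolution with the ramp kernel
    filtered = [[sum(p[j] * h[i - j + nbins] for j in range(nbins))
                 for i in range(nbins)] for p in sinogram]
    # 2. backproject the filtered projections over all angles
    img = [[0.0] * nbins for _ in range(nbins)]
    for p, a in zip(filtered, angles_deg):
        c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
        for y in range(nbins):
            for x in range(nbins):
                t = (x - half) * c + (y - half) * s + half
                k = int(round(t))             # nearest-neighbour
                if 0 <= k < nbins:
                    img[y][x] += p[k]
    return img
```

Reconstructing the sinogram of a single central point (the same value in the central bin at every angle) peaks at the image center, as expected.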
The time to compute the reconstruction was quite long, depending on the machine
it was run on – typically several minutes, with the backprojection being the most
time-consuming step. A solution is to use a different reconstruction algorithm or to
code the backprojection more efficiently (there is a function iradon_speedy available
on the internet, a small modification of the iradon function based on C code compiled
as a MATLAB MEX file).
3.7 Tomography measurements procedure
In this section, we are going to describe the overall procedure to get a final image (slice,
cross-section) of the object.
1. Put the object on the rotation stage. Make sure that it will not move outside
the field of view of the detector during the rotational motion.
2. Initialize the rotation stage control: Start MATLAB and type h = j_init().
3. Set the correct exposure: Turn on the X-ray radiation (depending on the
material, we start at 120 kV, 2 mA), set the integration time (start with 1 second),
and acquire an image of the object (dos('j_capture test.raw bin')).
Load the test image (test = j_load('test.raw')) and display its histogram
(hist(test(:), 2^12)). Adjust the X-ray voltage and/or current and the
integration time to get the correct exposure (see section 3.5.2). For best results,
rotate the object so that its thickness in the direction of the X-rays is the highest
and set the exposure for this angle.
4. Acquire the dark offset images for calibration: While X-rays are turned
off, acquire at least 10 dark offset images. Remember that the integration time
should be the same as for all the other measurements (as determined in the above
step). Code: j_mul(’off’, 10).
5. Start the image acquisition: Turn on the X-rays and start the acquisition using
[dev, ang] = j_tomo(h). This step will take the most time, depending on
the integration time (typically around 30 minutes). The system will rotate the
sample through 360 degrees in 360 steps and after each step it will take an X-ray
image, which is saved on the hard drive in raw format, numbered 001.raw
to 360.raw.
6. Acquire the flat-field image: Turn off the X-rays, remove the sample from the
rotation stage, turn on the X-rays, and acquire the flat-field image. The input
signal level should be similar to the one used in previous measurement but the
detector must not be in saturation. Again, it is important to take more images
because of the noise and statistical errors. Code: j_mul(’flat’, 10).
7. Average the correction images: Calculate the average of the dark offset
and flat-field images to improve the quality. Code: j_avg(10, 'off') and
j_avg(10, 'flat'). This creates files off.raw and flat.raw on the hard drive
with the images to be used for pixel corrections.
8. Normalize the images: Use code j_norm(360) to perform pixel corrections on
the acquired images. It creates a new set of normalized pictures in raw format
starting with x000.raw and ending with x360.raw.
9. Make the sinogram: Use code sin = j_mksin(360, 50) to build a sinogram.
In this case, it will be the 50th line (slice).
10. Reconstruct the cross-section (slice): Use code rsin = j_rec(sin). In the
matrix rsin, there is the final image which can be displayed by j_show(rsin).
3.8 Scripts description
In this section, we will describe all MATLAB scripts which were coded in the course of
the project. The codes are listed in the appendix. The order is alphabetical.
3.8.1 j_avg.m
Syntax   j_avg(count, type)
Example  j_avg(10, 'off')
The script loads files in raw format, calculates their average, and saves the averaged file
on the hard drive. The filename scheme is the following: the filename starts with the
type prefix, then a three-digit number follows and the extension is .raw. For example,
if the script is called with j_avg(10, ’off’), it loads files off001.raw to off010.raw.
The sequence of the files is usually created with script j_mul. The purpose of averaging
is to decrease noise and statistical errors in the images.
3.8.2 j_cor.m
Syntax   cor = j_cor(sinogram)
Example  cor = j_cor(sin)
The script returns the position of the axis of rotation. The algorithm is described
in 3.6.2. The script is called from j_rec when reconstructing the final image from its
sinogram.
3.8.3 j_init.m
Syntax   h = j_init()
Example  h = j_init()
Initializes the control of the rotation stage. Returns a handle to the ActiveX control
component which is used for calls to the available functions.
3.8.4 j_load.m
Syntax   im = j_load(filename)
Example  im = j_load('001.raw')
Loads an image acquired by the j_capture program. The image must be rotated,
since the detector is physically rotated for easier installation on the supporting
construction. The dimensions of the image are 276 × 2000 pixels.
3.8.5 j_loadd.m
Syntax   im = j_loadd(filename)
Example  sin = j_loadd('sin001.raw')
Loads a sinogram saved by script j_mksin. The dimensions of the sinogram are
360 × 2000 pixels. The differences between j_load and j_loadd are the dimensions
and the number type (uint16 versus double).
3.8.6 j_median.m
Syntax   img = j_median(img)
Example  flat_field = j_median(flat_field)
The script is used as a spot filter to filter out defective pixels from the image. It is
applied in script j_norm on the flat-field image. The algorithm is described in detail
in 3.5.4.
3.8.7 j_mksin.m
Syntax   sin = j_mksin(proj_count, line)
Example  sin = j_mksin(360, 50)
The script loads a particular line from each of the 360 projections and puts them into
a single file, resulting in a sinogram of the given slice. The file is also saved on the
hard drive (e.g. sin050.raw for line 50). To load a sinogram, script j_loadd is used.
3.8.8 j_mul.m
Syntax   j_mul(type, count)
Example  j_mul('off', 10)
The script acquires multiple images from the detector at the same settings and
saves them as files. It is used to get dark offset and flat-field images. After the images
are acquired, script j_avg is used to average them.
3.8.9 j_norm.m
Syntax   j_norm(proj_count)
Example  j_norm(360)
Script j_norm performs the image corrections described in section 3.5.3 and applies
the spot filter. First, it loads the dark offset and flat-field images. Then it subtracts
the dark offset image from the flat-field image and spot-filters the flat-field image. Then,
in a loop, it reads all the acquired images in raw format, subtracts the dark offset
image from each, divides them by the flat-field image, and saves the resulting images
on the hard drive in the raw format again. The files start with x000.raw and end with
x360.raw.
3.8.10 j_rec.m
Syntax   im = j_rec(sin)
Example  im = j_rec(sin)
The script computes the reconstruction from a sinogram based on filtered backprojection. The algorithm is described in section 3.6.
3.8.11 j_show.m
Syntax   j_show(im)
Example  j_show(rsin)
The script displays an image. It improves brightness and contrast by shifting the
minimum value of the image to 0 (subtracting the minimum from all pixels) and
scaling the maximum to 255 (multiplying the image by 255/(maximum value)).
3.8.12 j_tomo.m
Syntax   [deviations, angles] = j_tomo(h)
Example  [dev, ang] = j_tomo(h)
The script rotates the sample through a full cycle (360 degrees) in 360 steps. Each
step is 1 degree (in reality, 1.0055 is sent to the controller, see 3.4.1). After each step,
an image is acquired using program j_capture. Angles and deviations from the ideal
step size are also recorded so that it can later be checked whether any error occurred
during the motion. Functions GetPosition_Position and MoveAbsoluteRot are used
to rotate the rotation stage. The script accepts a single parameter, a handle to the
ActiveX control component of the rotation stage, obtained as the output of script
j_init.
3.9 Used software
In this section, we describe the different software tools which we used while working
on this project. It may serve others as a source of inspiration on what kind of software
to use. The software descriptions are adapted mainly from Wikipedia.
TeXnicCenter is a free open source IDE² for the LaTeX typesetting language. It was
used together with the MiKTeX distribution. It allows the user to type documents
in LaTeX and to compile them to PDF, DVI, or PS.
Dev-C++ is a free IDE for programming in C/C++.
Matlab is a numerical computing environment and programming language. Created
by The MathWorks, MATLAB allows easy matrix manipulation, plotting of functions and data, implementation of algorithms, creation of user interfaces, and
interfacing with programs in other languages.
Diagram Designer is a free flowchart creating software designed by Michael Vinther.
It can also create diagrams and slideshows.
² An Integrated Development Environment is a software tool which facilitates coding in different programming languages.
GNU Source-highlight is a command line tool for syntax highlighting for various
languages and output formats. It is open-source software.
Figure 3.20: Overview of the whole tomography system
Chapter 4
Conclusion
During the course of the work, we managed to build a system which is now able to
visualize the inner structure of bodies under investigation. We started from scratch
having just the components of the resulting instrument – an X-ray detector and a
source, a rotation stage, and a computer. The components were physically put together
and software to control each of them was written. We used C language for controlling
the detector and MATLAB for all other work. MATLAB was also the environment
from which the system was controlled.
We explored various effects on the quality of the reconstructed images. We found
several artefacts present in the reconstructions and found ways to limit them. Among
others, let us mention the ring artefact and the influence of a correctly determined
center of rotation.
The tomographic scanner was built within the existing tomographic scanner
BT-400 because we used its X-ray tube. We compared the results from the newly built
instrument and the BT-400.
In the theoretical part of the project, we described some basic properties of X-ray
radiation, its generation, and attenuation effects. There is also a brief explanation of
theoretical background of tomographic reconstruction methods – the Radon transform,
Fourier Slice Theorem, and the filtered back projection.
There are a number of areas where the work can be extended. Procedures for better
image quality can be implemented, such as a beam hardening filter. The speed of the
system should be improved, probably by rewriting the software framework in the C
language instead of MATLAB. A graphical user interface can be added. Also, we were concerned
only with reconstructions of single slices. A next step would be 3D volume rendering
and also reconstruction from cone beam data rather than fan beam data.
Bibliography
Electronic sources
[1] Rad-Icon Imaging Corp.: Shad-o-Box 4K [online]. 2004. [cited April 1, 2008].
Available on the Internet: http://www.rad-icon.com/pdf/ShadoBox4K.pdf
[2] Rad-Icon Imaging Corp.: ShadoCam Imaging Library [online]. 2004. [cited
April 30, 2008]. Available on the Internet:
http://www.rad-icon.com/pdf/ScManual.pdf
[3] Rad-Icon Imaging Corp.: AN03: Guide to Image Quality and Pixel
Correction Methods [online]. 2002. [cited May 2, 2008]. Available on the Internet:
http://www.rad-icon.com/pdf/Radicon_AN03.pdf
[4] Imagenation Corporation: PXD1000 Digital Frame Grabber User’s Guide
[online]. 1999. Version 1. [cited April 1, 2008]. Available on the Internet:
http://www.imagenation.com/imagena/dnfiles/pxd/1000-02.pdf
[5] Thorlabs Ltd.: T-Cube DC Servo Motor Driver [online]. 2007. [cited April 1,
2008]. Available on the Internet:
http://www.thorlabs.com/Thorcat/15700/15797-D01.pdf
[6] Agfa NDT Pantak Seifert GmbH & CO. KG: Isovolt 420/5 Product
Information [online]. 2002. [cited May 1, 2008]. Available on the Internet:
http://www.oniko.com.ua/rus/eresco/is/4205.pdf
[7] Manuel Dierick: Tomographic Imaging Techniques using Cold and Thermal
Neutron Beams [online]. 2005. [cited June 10, 2008]. Available on the Internet:
https://archive.ugent.be/retrieve/3048/doctoraat.pdf
[8] Rad-Icon Imaging Corp.: AN03: Guide to Image Quality and Pixel
Correction Methods [online]. 2001 - 2002. [cited May 11, 2008]. Available on the
Internet: http://www.rad-icon.com/pdf/Radicon_AN03.pdf
[9] Rad-Icon Imaging Corp.: AN08: Polynomial Gain Correction for RadEye
Sensors [online]. 2003. [cited May 11, 2008]. Available on the Internet:
http://www.rad-icon.com/pdf/Radicon_AN08.pdf
[10] The MathWorks, Inc.: MATLAB Documentation: iradon [online]. 1984-2008.
[cited June 11, 2008]. Available in MATLAB in help.
[11] The Open University: Imaging in medicine : X-ray imaging : Components of
an X-ray unit [online]. 2005. [cited June 24, 2008]. Available on the internet:
http://openlearn.open.ac.uk/mod/resource/view.php?id=254685
[12] Wikipedia contributors: Tomographic reconstruction [online]. 19 May 2008.
[cited June 24, 2008]. Available on the internet:
http://en.wikipedia.org/wiki/Tomographic_reconstruction
[13] Optical Engineering Laboratory, University of Warwick: Fourier
Slice Theorem [online]. November 26, 2002. [cited June 24, 2008]. Available on
the internet:
www.eng.warwick.ac.uk/oel/courses/undergrad/lec13/fourier_slice.htm
Print publications
[14] A. C. Kak and Malcolm Slaney: Principles of Computerized Tomographic
Imaging. Society of Industrial and Applied Mathematics, 2001, 327 pages. ISBN
978-0-898714-94-4.
[15] R. Halmshaw: Industrial Radiology. Theory and practice. 2nd edition.
Chapman and Hall, 1995, 303 pages. ISBN 0-412-02780-9.
[16] Prabhat Munshi: Computerized Tomography for Scientists and Engineers.
Anamaya Publishers, New Delhi, India, 2007, 229 pages. ISBN 1-4200-4793-0.
[17] F. Mees, R. Swennen, M. Van Geet and P. Jacobs: Applications of X-ray
Computed Tomography in the Geosciences. GSL Special Publications, 12 August
2003, 243 pages. ISBN 1-86239-139-4.
Appendix A
Code listings
The following are the code listings of the MATLAB scripts and C program j_capture
coded and used during the project.
j_avg.m

function j_avg(count, type)
%j_avg(count, type)
%Averages the projections

avg = zeros(276, 2000);

for i = 1 : count
    filename = [type sprintf('%2.3i', i) '.raw'];
    tmp = j_load(filename);
    avg = avg + tmp;
end

avg = avg ./ count;
filename = [type '.raw'];
fid = fopen(filename, 'w');              %open file
s = ( fwrite(fid, avg, 'uint16') )';
fclose(fid);
j_cor.m

function cor = j_cor(sinogram)
%cor = j_cor(sinogram)
%Returns the center of rotation of the current set of projections
%represented by sinogram
%Computes from projections at angle 0 and 180 degrees which should be the
%same images but mirrored about the center of rotation

%Projection at angle 0
vec1 = sinogram(1, :);

%Projection at angle 180
vec2 = sinogram(181, :);

%Flip the projection at angle 180
vec2 = vec2(end:-1:1);

%Window of 1000 pixels
vec2 = vec2(500:1500);

dif = zeros(1, 1000);

%Calculates differences when moving the window vec2 along vec1
for i = 1 : 1000
    dif(i) = sum(abs(vec1(i : i+1000) - vec2));
end

%Center of rotation is where the minimum of the differences is
cor = floor(1000 - (500 - find(dif == min(dif)))/2);
j_init.m

function h = j_init()
%h = j_init()
%Initializes the rotation stage control
%returns handle to the ActiveX control

h = actxcontrol('MGMOTOR.MGMotorCtrl.1'); pause(0.1);
h.set('HWSerialNum', 83812330); pause(0.1);

h.StartCtrl; pause(0.1);
j_load.m

function im = j_load(filename)
%im = j_load(filename)
%Loads an image acquired by j_capture program

fid = fopen(filename, 'rb');             %open file
im = fread(fid, [276, 2000], 'uint16');
im = im(end:-1:1, :);
fclose(fid);
j_loadd.m

function im = j_loadd(filename)
%im = j_loadd(filename)
%Loads a sinogram

fid = fopen(filename, 'rb');             %open file
im = fread(fid, [360, 2000], 'double');
fclose(fid);
j_median.m

function img = j_median(img)
%img = j_median(img)
%Modified median filter.
%For each pixel, calculates the mean of its 3x3 neighborhood; if the
%center pixel differs from that mean by more than 0.01 times the square
%root of the mean, it is replaced by the median of the neighborhood.
%Repeated for all pixels.

[width, height] = size(img);
wwidth = 3;
wheight = 3;

sur = zeros(wwidth, wheight);

edgex = ceil(wwidth / 2);
edgey = ceil(wheight / 2);

for x = edgex : (width - edgex)
    for y = edgey : (height - edgey)
        for fx = 1 : wwidth
            for fy = 1 : wheight
                sur(fx,fy) = img(x + fx - edgex, y + fy - edgey);
            end
        end
        s_mean = mean(sur(:));
        if abs(img(x,y) - s_mean) > 0.01*sqrt(s_mean)
            img(x,y) = median(sur(:));
        end
    end
end
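As a quick sanity check of the spot filter (a hypothetical usage sketch, not part of the original listings): an isolated hot pixel in an otherwise flat test image should be replaced by the neighborhood median, while the flat background is left unchanged because there it differs from the local mean by exactly zero.

```
img = 100 * ones(20, 20);   %flat test image
img(10, 10) = 4000;         %simulated detector spot
out = j_median(img);        %out(10,10) is now 100, the local median
```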
j_mksin.m

function sin = j_mksin(proj_count, line)
%sin = j_mksin(proj_count, line)
%Returns a sinogram assembled from the projections
%Input:  proj_count ... number of projections
%        line ... slice number

%Size of the projections
proj_height = 276;
proj_width = 2000;

%Empty matrix for the sinogram data
sin = zeros(proj_count, proj_width);

for i = 1 : proj_count
    filename = ['x' sprintf('%2.3i', i - 1) '.raw'];
    fid = fopen(filename,'rb'); %open file
    proj = fread(fid, [proj_height, proj_width], 'double');
    fclose(fid);
    sin(i, :) = proj(line, :);
end

%Save the sinogram to the hard drive
filename = ['sin' sprintf('%2.3i', line) '.raw'];
fd = fopen(filename, 'w'); %open file
s = ( fwrite(fd, sin, 'double') )';
fclose(fd);
j_mul.m

function j_mul(type, count)
%j_mul(type, count)
%Takes multiple projections at the same settings
%and saves them as files

for i = 1 : count
    filename = [type sprintf('%2.3i', i) '.raw'];
    command = ['j_capture ' filename ' bin'];
    dos(command);
end
j_norm.m

function j_norm(proj_count)
%j_norm(proj_count)
%Normalizes the projection images:
%subtracts the offset and divides by the open-beam image.
%Performs a spot filter on the open-beam image.
%Saves the normalized projections to the hard drive.

%Load the offset file
offset = j_load('off.raw');

%Load the open-beam file
flat_field = j_load('flat.raw');

%Rotate the offset and open-beam images
offset = offset(end:-1:1, :);
flat_field = flat_field(end:-1:1, :);

%Shift the values to be positive
of = abs(max(offset(:)));
flat_field = flat_field - offset + of;
flat_field = flat_field + 1;

%Spot filter the open-beam image
flat_field = j_median(flat_field);

%Calculate the average of the flat-field image
flt_avg = mean(flat_field(:));

for i = 1 : proj_count + 1
    %load the projection
    filename = [sprintf('%2.3i', i-1) '.raw'];
    tmp = j_load(filename);

    %normalize
    tmp = tmp - offset + of;
    tmp = tmp ./ flat_field * flt_avg;

    %save the projection
    filename = ['x' sprintf('%2.3i', i-1) '.raw'];
    fd = fopen(filename, 'w'); %open file
    s = ( fwrite(fd, tmp, 'double') )';
    fclose(fd);
end
j_rec.m

function im = j_rec(sin)
%im = j_rec(sin)
%Returns an image which is the tomographic reconstruction of the
%sinogram data.
%Automatically determines the center of rotation.
%Accepts fan-beam projection data.

%Resolution of the detector
width = 2000;

%Distance from the center of rotation to the X-ray tube in pixels,
%important for the FAN2PARA transformation
distance = 14848; %72.5 cm

%The sinogram has an exponential dependency, but the iradon algorithm
%expects a linear one, so we take the logarithm of the sinogram
sin = sin + abs(min(sin(:))) + 1/2^12;
sin = sin./max(sin(:));
sin = log(sin);

%Detect the center of the rotation axis
cor = j_cor(sin);

%Determine the crop of the sinogram so that the center of rotation is
%in the center of the image
cr = min(width - cor, cor);
y = cor - cr + 1 : cor + cr;
sin = sin(:, y);

%Transform the fan-beam data into parallel-beam data
[par, p_loc, p_angles] = fan2para(sin', distance, 'FanSensorSpacing', 1, ...
    'ParallelSensorSpacing', 1, 'FanRotationIncrement', 1, ...
    'FanSensorGeometry', 'line', 'ParallelCoverage', 'cycle', ...
    'FanCoverage', 'cycle');

%Filtered back-projection algorithm
im = iradon(par, p_angles, 'linear', 'Hamming');

%Scale to the range 0 to 1
im = im + abs(min(im(:)));
im = im./max(im(:));

%Save the reconstruction to the hard drive
filename = ['re_' datestr(now, 'yyyy_mm_dd_HH_MM') '.tif'];
imwrite(im, filename, 'tif');
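The reconstruction routine can be driven from a sinogram saved earlier by j_mksin; a hypothetical session (the slice number 150 and the corresponding file name are illustrative only):

```
sin = j_loadd('sin150.raw');   %360 x 2000 sinogram saved by j_mksin
im = j_rec(sin);               %also writes re_<timestamp>.tif to the hard drive
j_show(im);                    %display the reconstructed slice
```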
j_show.m

function h = j_show(I)
%h = j_show(I)
%Displays an image

%Brightness adjustment - shifts black to 0
minx = min(I(:));
I = I - minx;

%Contrast adjustment - white should be 255
maxx = max(I(:));
fact = 255/maxx;
I = I * fact;

h = figure;
image(I);

colormap(gray(256));

axis image;
j_tomo.m

function [deviations, angles] = j_tomo(h)
%[deviations, angles] = j_tomo(h)
%Makes a step, acquires an image and saves it as a file.
%This is repeated 360 times.
%Input is the handle to the ActiveX rotation stage control.
%Outputs vectors of deviations and angles of the rotation stage motion.

step_count = 360;
step_size = 1.0055;
channel = 0;
deviations = zeros(1,360);
angles = zeros(1,360);

for i = 1 : step_count + 1
    angles(i) = h.GetPosition_Position(channel);
    filename = [sprintf('%2.3i', i-1) '.raw'];
    command = ['j_capture ' filename ' bin'];

    %Acquire the image
    dos(command);

    position = angles(i) + step_size;

    %Rotate the rotation stage
    h.MoveAbsoluteRot(channel, position, 0, 1, true);
    deviations(i) = angles(i) - h.GetPosition_Position(channel);
end
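Taken together, the routines above form the whole acquisition chain. A hedged end-to-end sketch, not part of the original listings: it assumes the offset image off.raw and the open-beam image flat.raw have already been captured, and the slice number 150 is illustrative.

```
h = j_init();              %connect to the rotation stage
[dev, ang] = j_tomo(h);    %acquire 361 projections: 000.raw ... 360.raw
j_norm(360);               %normalize them into x000.raw ... x360.raw
sin = j_mksin(360, 150);   %assemble the sinogram of slice 150
im = j_rec(sin);           %reconstruct by filtered back-projection
```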
j_capture C code listing

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "pxd.h"
#include "frame.h"

int main(int argc, char *argv[])
{
    //Global variables
    PXD pxd;
    FRAMELIB frameLib;
    long hfg = 0;                 //handle of the frame grabber
    FRAME* pFrame = 0;
    FRAME* deint = 0;
    FRAME* crop = 0;
    CAMERA_TYPE* camType;
    short* lpFrameBase;
    unsigned short int buftmp[1000];
    unsigned short int buf[2000];
    unsigned short int bufc[552000];
    int l;
    int bin = 0;
    char filename[256];
    char camFile[] = "SB4KPX.cam";
    int i;
    const long len = 65536;
    unsigned short nLUT[len];

    *argv++;
    //Determine the filename
    if (argc < 2) {
        printf("Enter the filename.");
        return 0;
    } else {
        strncpy(filename, *argv, strlen(*argv));
    }
    if (argc == 3) {
        *argv++;
        //Output should be raw or png
        if (!strcmp(*argv, "bin")) bin = 1;
    } else {
        strcat(filename, ".png");
    }

    if (!imagenation_OpenLibrary("pxd_32.dll", &pxd, sizeof(pxd))) {
        printf("Frame grabber library not loaded");
        return 0;
    }
    if (!imagenation_OpenLibrary("frame_32.dll", &frameLib, sizeof(FRAMELIB))) {
        printf("Frame library not loaded");
        return 0;
    }

    //Request access to the frame grabber
    if (!(hfg = pxd.AllocateFG(-1))) {
        printf("pxd frame grabber not found");
        imagenation_CloseLibrary(&frameLib);
        imagenation_CloseLibrary(&pxd);
        return 0;
    }

    //Initialize the camera configuration
    if (!(camType = pxd.LoadConfig(camFile))) {
        printf("Camera configuration not loaded.");
        pxd.FreeFG(hfg);
        imagenation_CloseLibrary(&frameLib);
        imagenation_CloseLibrary(&pxd);
        return 0;
    }
    pxd.SetCameraConfig(hfg, camType);
    pxd.ContinuousStrobes(hfg, 1);   //turn on camera frame sync

    //Initialize the input LUT to shift the image data down by two bits
    for (i = 0; i < len; i++) nLUT[i] = i >> 2;
    pxd.SetInputLUT(hfg, 16, 0, 0, len, nLUT);

    //Set up the image destination buffer
    if (!(pFrame = pxd.AllocateBuffer(pxd.GetWidth(hfg), pxd.GetHeight(hfg), pxd.GetPixelType(hfg)))) {
        printf("Unable to create image buffer");
        pxd.FreeFG(hfg);
        imagenation_CloseLibrary(&frameLib);
        imagenation_CloseLibrary(&pxd);
        return 0;
    }

    //Create a pointer to the image buffer
    //Note: the configuration file sets up the image buffer to contain
    //an extra column to the left and right of the actual image
    lpFrameBase = (short *) frameLib.FrameBuffer(pFrame);
    lpFrameBase++;                   //points to the first pixel in the image

    //Start the acquisition
    printf("Image acquisition started...\n");
    pxd.Grab(hfg, pFrame, 0);
    printf("Image acquisition finished.\n");
    printf(filename);

    //**************************************************************************
    //Deinterlace
    //Set up the image destination buffer
    if (!(deint = pxd.AllocateBuffer(2048, 2000, pxd.GetPixelType(hfg)))) {
        printf("Unable to create temporary buffer");
        return 0;
    }
    int no;
    int m;
    i = 0;
    for (no = 0; no <= 3; no++) {
        for (m = 0; m <= 511; m++) {
            int k;
            k = 4095 - no - m*8;
            frameLib.GetColumn(pFrame, buftmp, k+1);
            //Reverse order
            for (l = 0; l < 1000; l++) buf[l] = buftmp[1000-l-1];
            k = no + m*8;
            frameLib.GetColumn(pFrame, buftmp, k+1);
            for (l = 1000; l < 1999; l++) buf[l] = buftmp[l-1000];
            frameLib.PutColumn(buf, deint, i+1);
            i++;
        } // for m
    } // for no

    //==========================================
    //Crop area
    //Set up the image destination buffer
    if (!(crop = pxd.AllocateBuffer(276, 2000, pxd.GetPixelType(hfg)))) {
        printf("Unable to create temporary buffer");
        return 0;
    }
    frameLib.GetRectangle(deint, bufc, 874, 0, 276, 2000);
    frameLib.PutRectangle(bufc, crop, 0, 0, 476, 2000);

    //========================================
    //Save the data to disk
    if (bin == 1) {
        frameLib.WriteBin(crop, filename, 1);
    } else {
        frameLib.WritePNG(crop, filename, 1);
    }

    //Release the frame grabber resources
    frameLib.FreeFrame(pFrame);
    frameLib.FreeFrame(deint);
    pxd.FreeConfig(camType);
    pxd.FreeFG(hfg);
    imagenation_CloseLibrary(&frameLib);
    imagenation_CloseLibrary(&pxd);
    return 0;
}
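j_capture is invoked from the Windows command line, as done by j_mul and j_tomo above; a usage sketch (the file names are illustrative):

```
j_capture x000.raw bin     saves the cropped frame as raw binary data
j_capture x000             without "bin", ".png" is appended and a PNG is written
```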