This document describes the internals of the GNU debugger, gdb. It covers gdb's key algorithms and operations, as well as the mechanisms that adapt gdb to specific hosts and targets.
Before diving into the internals, you should understand the formal requirements and other expectations for gdb. Although some of these may seem obvious, there have been proposals for gdb that have run counter to these requirements.
First of all, gdb is a debugger. It's not designed to be a front panel for embedded systems. It's not a text editor. It's not a shell. It's not a programming environment.
gdb is an interactive tool. Although a batch mode is available, gdb's primary role is to interact with a human programmer.
gdb should be responsive to the user. A programmer hot on the trail of a nasty bug, and operating under a looming deadline, is going to be very impatient with everything, including the response time to debugger commands.
gdb should be relatively permissive, such as for expressions. While the compiler should be picky (or have the option to be made picky), since source code usually lives a long time, the programmer doing debugging shouldn't have to spend time figuring out how to mollify the debugger.
gdb will be called upon to deal with really large programs. Executable sizes of 50 to 100 megabytes occur regularly, and we've heard reports of programs approaching 1 gigabyte in size.
gdb should be able to run everywhere. No other debugger is available for even half as many configurations as gdb supports.
gdb consists of three major subsystems: user interface, symbol handling (the symbol side), and target system handling (the target side).
The user interface consists of several actual interfaces, plus supporting code.
The symbol side consists of object file readers, debugging info interpreters, symbol table management, source language expression parsing, type and value printing.
The target side consists of execution control, stack frame analysis, and physical target manipulation.
The target side/symbol side division is not formal, and there are a number of exceptions. For instance, core file support involves symbolic elements (the basic core file reader is in BFD) and target elements (it supplies the contents of memory and the values of registers). Nonetheless, this division is useful for understanding how the minor subsystems should fit together.
The symbolic side of gdb can be thought of as “everything you can do in gdb without having a live program running”. For instance, you can look at the types of variables, and evaluate many kinds of expressions.
The target side of gdb is the “bits and bytes manipulator”. Although it may make reference to symbolic info here and there, most of the target side will run with only a stripped executable available—or even no executable at all, in remote debugging cases.
Operations such as disassembly, stack frame crawls, and register display are able to work with no symbolic info at all. In some cases, such as disassembly, gdb will use symbolic info to present addresses relative to symbols rather than as raw numbers, but it will work either way.
Host refers to attributes of the system where gdb runs. Target refers to the system where the program being debugged executes. In most cases they are the same machine, in which case a third type of attribute, the Native attributes, comes into play.
Defines and include files needed to build on the host are host support. Examples are tty support, system defined types, host byte order, host float format.
Defines and information needed to handle the target format are target dependent. Examples are the stack frame format, instruction set, breakpoint instruction, registers, and how to set up and tear down the stack to call a function.
Information that is only needed when the host and target are the same is native dependent. One example is Unix child process support; if the host and target are not the same, doing a fork to start the target process is a bad idea. The various macros needed for finding the registers in the upage, running ptrace, and such are all in the native-dependent files.
Another example of native-dependent code is support for features that are really part of the target environment, but which require #include files that are only available on the host system. Core file handling and setjmp handling are two common cases.
When you want to make gdb work “native” on a particular machine, you have to include all three kinds of information.
gdb uses a number of debugging-specific algorithms. They are often not very complicated, but get lost in the thicket of special cases and real-world issues. This chapter describes the basic algorithms and mentions some of the specific target definitions that they use.
A frame is a construct that gdb uses to keep track of calling and called functions.
FRAME_FP in the machine description has no meaning to the machine-independent part of gdb, except that it is used when setting up a new frame from scratch, as follows:

create_new_frame (read_register (DEPRECATED_FP_REGNUM), read_pc ());
Other than that, all the meaning imparted to DEPRECATED_FP_REGNUM is imparted by the machine-dependent code. So, DEPRECATED_FP_REGNUM can have any value that is convenient for the code that creates new frames. (create_new_frame calls DEPRECATED_INIT_EXTRA_FRAME_INFO if it is defined; that is where you should use the DEPRECATED_FP_REGNUM value, if your frames are nonstandard.)
Given a gdb frame, define DEPRECATED_FRAME_CHAIN to determine the address of the calling function's frame. This will be used to create a new gdb frame struct, and then DEPRECATED_INIT_EXTRA_FRAME_INFO and DEPRECATED_INIT_FRAME_PC will be called for the new frame.
In general, a breakpoint is a user-designated location in the program where the user wants to regain control if program execution ever reaches that location.
There are two main ways to implement breakpoints; either as “hardware” breakpoints or as “software” breakpoints.
Hardware breakpoints are sometimes available as a built-in debugging feature with some chips. Typically these work by having dedicated registers into which breakpoint addresses may be stored. If the PC (shorthand for program counter) ever matches a value in a breakpoint register, the CPU raises an exception and reports it to gdb.
Another possibility is when an emulator is in use; many emulators include circuitry that watches the address lines coming out from the processor, and forces it to stop if the address matches a breakpoint's address.
A third possibility is that the target already has the ability to do breakpoints somehow; for instance, a ROM monitor may do its own software breakpoints. So although these are not literally “hardware breakpoints”, from gdb's point of view they work the same; gdb need not do anything more than set the breakpoint and wait for something to happen.
Since they depend on hardware resources, hardware breakpoints may be limited in number; when the user asks for more, gdb will start trying to set software breakpoints. (On some architectures, notably the 32-bit x86 platforms, gdb cannot always know whether there are enough hardware resources to insert all the hardware breakpoints and watchpoints. On those platforms, gdb prints an error message only when the program being debugged is continued.)
Software breakpoints require gdb to do somewhat more work. The basic theory is that gdb will replace a program instruction with a trap, illegal divide, or some other instruction that will cause an exception, and then when it's encountered, gdb will take the exception and stop the program. When the user says to continue, gdb will restore the original instruction, single-step, re-insert the trap, and continue on.
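The bookkeeping behind this is small but easy to get subtly wrong. The following is a minimal sketch of the idea only; the breakpoint opcode, shadow buffer, and memory-access helpers are illustrative stand-ins, not gdb's actual interfaces (the real code lives in mem-break.c and the target vector).

#include <string.h>

#define BP_LEN 1                        /* size of the trap instruction */
static const unsigned char BREAK_OPCODE[BP_LEN] = { 0xcc };  /* e.g. x86 int3 */

struct sw_breakpoint
{
  unsigned long addr;                   /* where the trap is planted */
  unsigned char shadow[BP_LEN];         /* saved original instruction bytes */
};

/* Stand-ins for gdb's target memory accessors.  */
extern int read_mem (unsigned long addr, unsigned char *buf, int len);
extern int write_mem (unsigned long addr, const unsigned char *buf, int len);

/* Save the original bytes, then overwrite them with the trap.  */
static int
insert_sw_breakpoint (struct sw_breakpoint *bp)
{
  if (read_mem (bp->addr, bp->shadow, BP_LEN) != 0)
    return -1;
  return write_mem (bp->addr, BREAK_OPCODE, BP_LEN);
}

/* Put the saved bytes back so the real instruction can execute.  */
static int
remove_sw_breakpoint (struct sw_breakpoint *bp)
{
  return write_mem (bp->addr, bp->shadow, BP_LEN);
}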
Since it literally overwrites the program being tested, the program area must be writable, so this technique won't work on programs in ROM. It can also distort the behavior of programs that examine themselves, although such a situation would be highly unusual.
Also, the software breakpoint instruction should be the smallest size of instruction, so it doesn't overwrite an instruction that might be a jump target, and cause disaster when the program jumps into the middle of the breakpoint instruction. (Strictly speaking, the breakpoint must be no larger than the smallest interval between instructions that may be jump targets; perhaps there is an architecture where only even-numbered instructions may be jumped to.) Note that it's possible for an instruction set not to have any instructions usable for a software breakpoint, although in practice only the ARC has failed to define such an instruction.
The basic definition of the software breakpoint is the macro BREAKPOINT.
Basic breakpoint object handling is in breakpoint.c. However, much of the interesting breakpoint action is in infrun.c.
gdb has support for figuring out that the target is doing a longjmp and for stopping at the target of the jump, if we are stepping. This is done with a few specialized internal breakpoints, which are visible in the output of the `maint info breakpoint' command.
To make this work, you need to define a macro called GET_LONGJMP_TARGET, which will examine the jmp_buf structure and extract the longjmp target address. Since jmp_buf is target specific, you will need to define it in the appropriate tm-target.h file. Look in tm-sun4os4.h and sparc-tdep.c for examples of how to do this.
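For illustration, a sketch of what such a macro might delegate to is shown below. It assumes the usual tdep conventions of this era (read_register, read_memory_integer, TARGET_PTR_BIT); JB_PC_OFFSET and ARG0_REGNUM are invented constants standing in for the real jmp_buf layout and calling convention of a hypothetical target.

#include "defs.h"          /* the usual gdb tdep includes are assumed */

#define JB_PC_OFFSET   0   /* assumed offset of the saved PC inside jmp_buf */
#define ARG0_REGNUM    8   /* assumed register holding the jmp_buf pointer  */

static int
xyz_get_longjmp_target (CORE_ADDR *pc)
{
  CORE_ADDR jb_addr = read_register (ARG0_REGNUM);

  /* The jmp_buf pointer is longjmp's first argument; the saved PC
     lives at a fixed offset inside the buffer.  */
  *pc = read_memory_integer (jb_addr + JB_PC_OFFSET, TARGET_PTR_BIT / 8);
  return 1;                /* non-zero means the target address was found */
}

/* In tm-xyz.h:  */
#define GET_LONGJMP_TARGET(pc)  xyz_get_longjmp_target (pc)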
Watchpoints are a special kind of breakpoint (see breakpoints) which break when data is accessed rather than when some instruction is executed. When you have data which changes without your knowing what code does that, watchpoints are the silver bullet to hunt down and kill such bugs.
Watchpoints can be either hardware-assisted or not; the latter type is known as “software watchpoints.” gdb always uses hardware-assisted watchpoints if they are available, and falls back on software watchpoints otherwise, for example when the watched expression cannot be covered by the available hardware resources.
Software watchpoints are very slow, since gdb needs to single-step the program being debugged and test the value of the watched expression(s) after each instruction. The rest of this section is mostly irrelevant for software watchpoints.
When the inferior stops, gdb tries to establish, among other
possible reasons, whether it stopped due to a watchpoint being hit.
For a data-write watchpoint, it does so by evaluating, for each
watchpoint, the expression whose value is being watched, and testing
whether the watched value has changed. For data-read and data-access
watchpoints, gdb needs the target to supply a primitive that
returns the address of the data that was accessed or read (see the
description of target_stopped_data_address
below): if this
primitive returns a valid address, gdb infers that a
watchpoint triggered if it watches an expression whose evaluation uses
that address.
gdb uses several macros and primitives to support hardware watchpoints:
TARGET_HAS_HARDWARE_WATCHPOINTS

TARGET_CAN_USE_HARDWARE_WATCHPOINT (type, count, other)

TARGET_REGION_OK_FOR_HW_WATCHPOINT (addr, len)

TARGET_REGION_SIZE_OK_FOR_HW_WATCHPOINT (size)
This macro is used only as a fall-back, in case TARGET_REGION_OK_FOR_HW_WATCHPOINT is not defined.

target_insert_watchpoint (addr, len, type)
target_remove_watchpoint (addr, len, type)
Here type is the kind of watchpoint, a value of the enumerated type target_hw_bp_type, defined by breakpoint.h as follows:

enum target_hw_bp_type
  {
    hw_write   = 0, /* Common (write) HW watchpoint */
    hw_read    = 1, /* Read HW watchpoint */
    hw_access  = 2, /* Access (read or write) HW watchpoint */
    hw_execute = 3  /* Execute HW breakpoint */
  };

These two macros should return 0 for success, non-zero for failure.

target_remove_hw_breakpoint (addr, shadow)
target_insert_hw_breakpoint (addr, shadow)

target_stopped_data_address (addr_p)

HAVE_STEPPABLE_WATCHPOINT

HAVE_NONSTEPPABLE_WATCHPOINT

HAVE_CONTINUABLE_WATCHPOINT

CANNOT_STEP_HW_WATCHPOINTS

STOPPED_BY_WATCHPOINT (wait_status)
The wait_status argument is of the type struct target_waitstatus, defined by target.h. Normally, this macro is defined to invoke the function pointed to by the to_stopped_by_watchpoint member of the structure (of the type target_ops, defined in target.h) that describes the target-specific operations; to_stopped_by_watchpoint ignores the wait_status argument.

gdb does not require the non-zero value returned by STOPPED_BY_WATCHPOINT to be 100% correct, so if a target cannot determine for sure whether the inferior stopped due to a watchpoint, it could return non-zero “just in case”.
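For instance, a target configuration might define the macro simply by delegating to the target vector, along the lines of the (assumed, minimal) sketch below.

/* Sketch only: delegate to the target vector, as described above.
   The wait status argument W is ignored; current_target comes from
   target.h.  */
#define STOPPED_BY_WATCHPOINT(W) \
  (*current_target.to_stopped_by_watchpoint) ()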
The 32-bit Intel x86 (a.k.a. ia32) processors feature special debug registers designed to facilitate debugging. gdb provides a generic library of functions that x86-based ports can use to implement support for watchpoints and hardware-assisted breakpoints. This subsection documents the x86 watchpoint facilities in gdb.
To use the generic x86 watchpoint support, a port should do the following:
Define the macro I386_USE_GENERIC_WATCHPOINTS somewhere in the target-dependent headers.

Include the config/i386/nm-i386.h header file after defining I386_USE_GENERIC_WATCHPOINTS.

Add i386-nat.o to the value of the Make variable NATDEPFILES (see NATDEPFILES) or TDEPFILES (see TDEPFILES).

Provide implementations for the I386_DR_LOW_* macros described below. Typically, each macro should call a target-specific function which does the real work.
The x86 watchpoint support works by maintaining mirror images of the debug registers. Values are copied between the mirror images and the real debug registers via a set of macros which each target needs to provide:
I386_DR_LOW_SET_CONTROL (val)

I386_DR_LOW_SET_ADDR (idx, addr)

I386_DR_LOW_RESET_ADDR (idx)

I386_DR_LOW_GET_STATUS
A target may implement I386_DR_LOW_GET_STATUS so as to support per-thread status register values.
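The exact mechanism behind these macros is up to each port. Purely as an illustration, a Linux-hosted native port could implement I386_DR_LOW_SET_ADDR by poking the corresponding slot of the inferior's USER area with ptrace; PTRACE_POKEUSER and the u_debugreg member are standard Linux interfaces, while the helper name and the inferior_pid stand-in below are assumptions.

#include <stddef.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>

/* Stand-in for however the port obtains the debuggee's process id.  */
extern pid_t inferior_pid;

/* Illustration only: write ADDR into debug register REGNUM (0..3) of
   process PID through the Linux ptrace interface.  */
static void
x86_linux_dr_set (pid_t pid, int regnum, unsigned long addr)
{
  size_t offset = offsetof (struct user, u_debugreg[regnum]);

  ptrace (PTRACE_POKEUSER, pid, (void *) offset, (void *) addr);
}

/* The generic watchpoint code then reaches the helper through the macro.  */
#define I386_DR_LOW_SET_ADDR(idx, addr) \
  x86_linux_dr_set (inferior_pid, (idx), (addr))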
For each one of the 4 debug registers (whose indices are from 0 to 3) that store addresses, a reference count is maintained by gdb, to allow sharing of debug registers by several watchpoints. This allows users to define several watchpoints that watch the same expression, but with different conditions and/or commands, without wasting debug registers which are in short supply. gdb maintains the reference counts internally; targets don't have to do anything to use this feature.
The x86 debug registers can each watch a region that is 1, 2, or 4 bytes long. The ia32 architecture requires that each watched region be appropriately aligned: 2-byte region on 2-byte boundary, 4-byte region on 4-byte boundary. However, the x86 watchpoint support in gdb can watch unaligned regions and regions larger than 4 bytes (up to 16 bytes) by allocating several debug registers to watch a single region. This allocation of several registers per watched region is also done automatically without target code intervention.
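To make this concrete, the following sketch (not gdb's actual code) counts how many debug registers a watch request would consume by splitting the region into aligned 1-, 2-, and 4-byte chunks, which is the kind of decomposition described above.

/* Sketch: count the x86 debug registers needed to watch LEN bytes at
   ADDR, using only aligned 1-, 2-, or 4-byte chunks.  */
static int
debug_registers_needed (unsigned long addr, int len)
{
  int count = 0;

  while (len > 0)
    {
      int size;

      /* Pick the largest aligned chunk that fits.  */
      if (addr % 4 == 0 && len >= 4)
        size = 4;
      else if (addr % 2 == 0 && len >= 2)
        size = 2;
      else
        size = 1;

      addr += size;
      len -= size;
      count++;              /* one debug register per chunk */
    }

  return count;
}

/* Example: debug_registers_needed (0x1001, 8) returns 4
   (1 byte at 0x1001, 2 bytes at 0x1002, 4 bytes at 0x1004,
   1 byte at 0x1008).  */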
The generic x86 watchpoint support provides the following API for gdb's application code:
i386_region_ok_for_watchpoint (addr, len)
The macro TARGET_REGION_OK_FOR_HW_WATCHPOINT is set to call this function. It counts the number of debug registers required to watch a given region, and returns a non-zero value if that number is less than 4, the number of debug registers available to x86 processors.

i386_stopped_data_address (addr_p)
The macro target_stopped_data_address is set to call this function. This function examines the breakpoint condition bits in the DR6 Debug Status register, as returned by the I386_DR_LOW_GET_STATUS macro, and returns the address associated with the first bit that is set in DR6.

i386_stopped_by_watchpoint (void)
The macro STOPPED_BY_WATCHPOINT is set to call this function. The argument passed to STOPPED_BY_WATCHPOINT is ignored. This function examines the breakpoint condition bits in the DR6 Debug Status register, as returned by the I386_DR_LOW_GET_STATUS macro, and returns true if any bit is set. Otherwise, false is returned.
i386_insert_watchpoint (addr, len, type)
i386_remove_watchpoint (addr, len, type)
The macros target_insert_watchpoint and target_remove_watchpoint are set to call these functions. i386_insert_watchpoint first looks for a debug register which is already set to watch the same region for the same access types; if found, it just increments the reference count of that debug register, thus implementing debug register sharing between watchpoints. If no such register is found, the function looks for a vacant debug register, sets its mirrored value to addr, sets the mirrored value of the DR7 Debug Control register as appropriate for the len and type parameters, and then passes the new values of the debug register and DR7 to the inferior by calling I386_DR_LOW_SET_ADDR and I386_DR_LOW_SET_CONTROL. If more than one debug register is required to cover the given region, the above process is repeated for each debug register.

i386_remove_watchpoint does the opposite: it resets the address in the mirrored value of the debug register and its read/write and length bits in the mirrored value of DR7, then passes these new values to the inferior via I386_DR_LOW_RESET_ADDR and I386_DR_LOW_SET_CONTROL. If a register is shared by several watchpoints, each time i386_remove_watchpoint is called, it decrements the reference count, and only calls I386_DR_LOW_RESET_ADDR and I386_DR_LOW_SET_CONTROL when the count goes to zero.
i386_insert_hw_breakpoint (addr, shadow)
i386_remove_hw_breakpoint (addr, shadow)
The macros target_insert_hw_breakpoint and target_remove_hw_breakpoint are set to call these functions. These functions work like i386_insert_watchpoint and i386_remove_watchpoint, respectively, except that they set up the debug registers to watch instruction execution, and each hardware-assisted breakpoint always requires exactly one debug register.

i386_stopped_by_hwbp (void)
This function works like i386_stopped_data_address, except that it doesn't record the address whose watchpoint triggered.
i386_cleanup_dregs (void)
Notes: x86 processors can also watch I/O reads and writes, but since enum target_hw_bp_type doesn't even have an enumeration for I/O watchpoints, this feature is not yet available to gdb running on x86.
In order to function properly, several modules need to be notified when some changes occur in the gdb internals. Traditionally, these modules have relied on several paradigms, the most common ones being hooks and gdb-events. Unfortunately, none of these paradigms was versatile enough to become the standard notification mechanism in gdb. The fact that they only supported one “client” was also a strong limitation.
A new paradigm, based on the Observer pattern of the Design Patterns book, has therefore been implemented. The goal was to provide a new interface overcoming the issues with the notification mechanisms previously available. This new interface needed to be strongly typed, easy to extend, and versatile enough to be used as the standard interface when adding new notifications.
See GDB Observers for a brief description of the observers currently implemented in GDB. The rationale for the current implementation is also briefly discussed.
gdb has several user interfaces. Although the command-line interface is the most common and most familiar, there are others.
The command interpreter in gdb is fairly simple. It is designed to allow for the set of commands to be augmented dynamically, and also has a recursive subcommand capability, where the first argument to a command may itself direct a lookup on a different command list.
For instance, the `set' command just starts a lookup on the setlist command list, while `set thread' recurses to the set_thread_cmd_list.

To add commands in general, use add_cmd. add_com adds to the main command list, and should be used for those commands. The usual place to add commands is in the _initialize_xyz routines at the ends of most source files.
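As a hedged illustration, a minimal _initialize_ routine might look like the sketch below; the command name, its class, and its help text are invented, and the (char *args, int from_tty) callback signature follows the convention used by the command code of this era.

#include "defs.h"
#include "command.h"
#include "gdbcmd.h"

/* Callback for the hypothetical "frobnicate" command.  ARGS is the rest
   of the command line; FROM_TTY is non-zero for interactive use.  */
static void
frobnicate_command (char *args, int from_tty)
{
  printf_filtered ("frobnicating %s\n", args ? args : "nothing");
}

/* Run at startup from the generated init code.  */
void
_initialize_frobnicate (void)
{
  add_com ("frobnicate", class_obscure, frobnicate_command,
           "Frobnicate something.\nThis command is only an illustrative example.");
}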
To add paired `set' and `show' commands, use add_setshow_cmd or add_setshow_cmd_full. The former is a slightly simpler interface which is useful when you don't need to further modify the new command structures, while the latter returns the new command structures for manipulation.

Before removing commands from the command set it is a good idea to deprecate them for some time. Use deprecate_cmd on commands or aliases to set the deprecated flag. deprecate_cmd takes a struct cmd_list_element as its first argument. You can use the return value from add_com or add_cmd to deprecate the command immediately after it is created.

The first time a command is used the user will be warned and offered a replacement (if one exists). Note that the replacement string passed to deprecate_cmd should be the full name of the command, i.e. the entire string the user should type at the command line.
ui_out Functions

The ui_out functions present an abstraction level for the gdb output code. They hide the specifics of different user interfaces supported by gdb, and thus free the programmer from the need to write several versions of the same code, one each for every UI, to produce output.
In general, execution of each gdb command produces some sort of output, and can even generate an input request.
Output can be generated for the following purposes:
This section mainly concentrates on how to build result output, although some of it also applies to other kinds of output.
Generation of output that displays the results of an operation involves one or more of the following:
The ui_out routines take care of the first three aspects. Annotations are provided by separate annotation routines. Note that use of annotations for an interface between a GUI and gdb is deprecated.
Output can be in the form of a single item, which we call a field; a list consisting of identical fields; a tuple consisting of non-identical fields; or a table, which is a tuple consisting of a header and a body. In a BNF-like form:
<table>  ==>  <header> <body>
<header> ==>  { <column> }
<column> ==>  <width> <alignment> <title>
<body>   ==>  {<row>}
Most ui_out routines are of type void; the exceptions are ui_out_stream_new (which returns a pointer to the newly created object) and the make_cleanup routines.
The first parameter is always the ui_out vector object, a pointer to a struct ui_out.
The format parameter is like that of the printf family of functions. When it is present, there must also be a variable list of arguments sufficient to satisfy the % specifiers in the supplied format.
When a character string argument is not used in a ui_out function call, a NULL pointer has to be supplied instead.
This section introduces ui_out
routines for building lists,
tuples and tables. The routines to output the actual data items
(fields) are presented in the next section.
To recap: A tuple is a sequence of fields, each field containing information about an object; a list is a sequence of fields where each field describes an identical object.
Use the table functions when your output consists of a list of rows (tuples) and the console output should include a heading. Use this even when you are listing just one object but you still want the header.
Tables can not be nested. Tuples and lists can be nested up to a maximum of five levels.
The overall structure of the table output code is something like this:
ui_out_table_begin
    ui_out_table_header
    ...
    ui_out_table_body
        ui_out_tuple_begin
            ui_out_field_*
            ...
        ui_out_tuple_end
        ...
ui_out_table_end
Here is the description of table-, tuple- and list-related ui_out
functions:
The function ui_out_table_begin marks the beginning of the output of a table. It should always be called before any other ui_out function for a given table. nbrofcols is the number of columns in the table. nr_rows is the number of rows in the table. tblid is an optional string identifying the table. The string pointed to by tblid is copied by the implementation of ui_out_table_begin, so the application can free the string if it was malloced.

The companion function ui_out_table_end, described below, marks the end of the table's output.
ui_out_table_header provides the header information for a single table column. You call this function several times, one each for every column of the table, after ui_out_table_begin, but before ui_out_table_body.

The value of width gives the column width in characters. The value of alignment is one of left, center, and right, and it specifies how to align the header: left-justify, center, or right-justify it. colhdr points to a string that specifies the column header; the implementation copies that string, so column header strings in malloced storage can be freed after the call.
This function delimits the table header from the table body.
This function signals the end of a table's output. It should be called after the table body has been produced by the list and field output functions.
There should be exactly one call to ui_out_table_end for each call to ui_out_table_begin, otherwise the ui_out functions will signal an internal error.
The output of the tuples that represent the table rows must follow the call to ui_out_table_body and precede the call to ui_out_table_end. You build a tuple by calling ui_out_tuple_begin and ui_out_tuple_end, with suitable calls to functions which actually output fields between them.
This function marks the beginning of a tuple output. id points to an optional string that identifies the tuple; it is copied by the implementation, and so strings in malloced storage can be freed after the call.
This function signals an end of a tuple output. There should be exactly one call to ui_out_tuple_end for each call to ui_out_tuple_begin, otherwise an internal gdb error will be signaled.
This function first opens the tuple and then establishes a cleanup (see Cleanups) to close the tuple. It provides a convenient and correct implementation of the non-portable code sequence:
struct cleanup *old_cleanup;
ui_out_tuple_begin (uiout, "...");
old_cleanup = make_cleanup ((void(*)(void *)) ui_out_tuple_end,
                            uiout);
This function marks the beginning of a list output. id points to an optional string that identifies the list; it is copied by the implementation, and so strings in malloced storage can be freed after the call.
This function signals an end of a list output. There should be exactly one call to ui_out_list_end for each call to ui_out_list_begin, otherwise an internal gdb error will be signaled.
Similar to make_cleanup_ui_out_tuple_begin_end, this function opens a list and then establishes a cleanup (see Cleanups) that will close the list.
The functions described below produce output for the actual data items, or fields, which contain information about the object.
Choose the appropriate function according to your particular needs.
This is the most general output function. It produces the representation of the data in the variable-length argument list according to formatting specifications in format, a printf-like format string. The optional argument fldname supplies the name of the field. The data items themselves are supplied as additional arguments after format.

This generic function should be used only when it is not possible to use one of the specialized versions (see below).
This function outputs a value of an int variable. It uses the "%d" output conversion specification. fldname specifies the name of the field.
This function outputs a value of an int variable. It differs from ui_out_field_int in that the caller specifies the desired width and alignment of the output. fldname specifies the name of the field.
This function outputs an address.
This function outputs a string using the "%s" conversion specification.
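Putting the table, tuple, and field calls together, a routine that prints a simple two-column table might look like the following sketch. The calls follow the signatures shown in the breakpoint example later in this section; the table id, column names, and data are invented.

#include "defs.h"
#include "ui-out.h"

/* Sketch: print a two-column table of NAME/VALUE pairs through the
   ui_out interface.  */
static void
print_pairs (struct ui_out *uiout, int n,
             const char **names, int *values)
{
  int i;

  ui_out_table_begin (uiout, 2, n, "PairTable");
  ui_out_table_header (uiout, 16, ui_left, "name", "Name");   /* col 1 */
  ui_out_table_header (uiout, 8, ui_left, "value", "Value");  /* col 2 */
  ui_out_table_body (uiout);

  for (i = 0; i < n; i++)
    {
      ui_out_tuple_begin (uiout, "pair");
      ui_out_field_string (uiout, "name", names[i]);
      ui_out_field_int (uiout, "value", values[i]);
      ui_out_tuple_end (uiout);
    }

  ui_out_table_end (uiout);
}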
Sometimes, there's a need to compose your output piece by piece using functions that operate on a stream, such as value_print or fprintf_symbol_filtered. These functions accept an argument of the type struct ui_file *, a pointer to a ui_file object used to store the data stream used for the output. When you use one of these functions, you need a way to pass their results stored in a ui_file object to the ui_out functions. To this end, you first create a ui_stream object by calling ui_out_stream_new, pass the stream member of that ui_stream object to value_print and similar functions, and finally call ui_out_field_stream to output the field you constructed. When the ui_stream object is no longer needed, you should destroy it and free its memory by calling ui_out_stream_delete.
This function creates a new ui_stream object which uses the same output methods as the ui_out object whose pointer is passed in uiout. It returns a pointer to the newly created ui_stream object.
This function destroys a ui_stream object specified by streambuf.
This function consumes all the data accumulated in streambuf->stream and outputs it like ui_out_field_string does. After a call to ui_out_field_stream, the accumulated data no longer exists, but the stream is still valid and may be used for producing more fields.
Important: If there is any chance that your code could bail out before completing output generation and reaching the point where ui_out_stream_delete is called, it is necessary to set up a cleanup, to avoid leaking memory and other resources. Here's skeleton code to do that:
struct ui_stream *mybuf = ui_out_stream_new (uiout);
struct cleanup *old = make_cleanup (ui_out_stream_delete, mybuf);
...
do_cleanups (old);
If the function already has the old cleanup chain set (for other kinds of cleanups), you just have to add your cleanup to it:
mybuf = ui_out_stream_new (uiout);
make_cleanup (ui_out_stream_delete, mybuf);
Note that with cleanups in place, you should not call ui_out_stream_delete directly, or you would attempt to free the same buffer twice.
This function skips a field in a table. Use it if you have to leave an empty field without disrupting the table alignment. The argument fldname specifies a name for the (missing) field.
This function outputs the text in string in a way that makes it easy to be read by humans. For example, the console implementation of this method filters the text through a built-in pager, to prevent it from scrolling off the visible portion of the screen.
Use this function for printing relatively long chunks of text around the actual field data: the text it produces is not aligned according to the table's format. Use ui_out_field_string to output a string field, and use ui_out_message, described below, to output short messages.
This function outputs nspaces spaces. It is handy to align the text produced by ui_out_text with the rest of the table or list.
This function produces a formatted message, provided that the current verbosity level is at least as large as given by verbosity. The current verbosity level is specified by the user with the `set verbosity level' command.
This function gives the console output filter (a paging filter) a hint of where to break lines which are too long. Ignored for all other output consumers. indent, if non-NULL, is the string to be printed to indent the wrapped text on the next line; it must remain accessible until the next call to ui_out_wrap_hint, or until an explicit newline is produced by one of the other functions. If indent is NULL, the wrapped text will not be indented.
This function flushes whatever output has been accumulated so far, if the UI buffers output.
Examples of Use of ui_out functions

This section gives some practical examples of using the ui_out functions to generalize the old console-oriented code in gdb. The examples all come from functions defined in the breakpoint.c file.
This example, from the breakpoint_1 function, shows how to produce a table.

The original code was:
if (!found_a_breakpoint++)
  {
    annotate_breakpoints_headers ();
    annotate_field (0);
    printf_filtered ("Num ");
    annotate_field (1);
    printf_filtered ("Type ");
    annotate_field (2);
    printf_filtered ("Disp ");
    annotate_field (3);
    printf_filtered ("Enb ");
    if (addressprint)
      {
        annotate_field (4);
        printf_filtered ("Address ");
      }
    annotate_field (5);
    printf_filtered ("What\n");
    annotate_breakpoints_table ();
  }
Here's the new version:
nr_printable_breakpoints = ...;

if (addressprint)
  ui_out_table_begin (ui, 6, nr_printable_breakpoints, "BreakpointTable");
else
  ui_out_table_begin (ui, 5, nr_printable_breakpoints, "BreakpointTable");

if (nr_printable_breakpoints > 0)
  annotate_breakpoints_headers ();
if (nr_printable_breakpoints > 0)
  annotate_field (0);
ui_out_table_header (uiout, 3, ui_left, "number", "Num");     /* 1 */
if (nr_printable_breakpoints > 0)
  annotate_field (1);
ui_out_table_header (uiout, 14, ui_left, "type", "Type");     /* 2 */
if (nr_printable_breakpoints > 0)
  annotate_field (2);
ui_out_table_header (uiout, 4, ui_left, "disp", "Disp");      /* 3 */
if (nr_printable_breakpoints > 0)
  annotate_field (3);
ui_out_table_header (uiout, 3, ui_left, "enabled", "Enb");    /* 4 */
if (addressprint)
  {
    if (nr_printable_breakpoints > 0)
      annotate_field (4);
    if (TARGET_ADDR_BIT <= 32)
      ui_out_table_header (uiout, 10, ui_left, "addr", "Address");  /* 5 */
    else
      ui_out_table_header (uiout, 18, ui_left, "addr", "Address");  /* 5 */
  }
if (nr_printable_breakpoints > 0)
  annotate_field (5);
ui_out_table_header (uiout, 40, ui_noalign, "what", "What");  /* 6 */
ui_out_table_body (uiout);
if (nr_printable_breakpoints > 0)
  annotate_breakpoints_table ();
This example, from the print_one_breakpoint function, shows how to produce the actual data for the table whose structure was defined in the above example. The original code was:
annotate_record ();
annotate_field (0);
printf_filtered ("%-3d ", b->number);
annotate_field (1);
if ((int)b->type > (sizeof(bptypes)/sizeof(bptypes[0]))
    || ((int) b->type != bptypes[(int) b->type].type))
  internal_error ("bptypes table does not describe type #%d.",
                  (int)b->type);
printf_filtered ("%-14s ", bptypes[(int)b->type].description);
annotate_field (2);
printf_filtered ("%-4s ", bpdisps[(int)b->disposition]);
annotate_field (3);
printf_filtered ("%-3c ", bpenables[(int)b->enable]);
...
This is the new version:
annotate_record ();
ui_out_tuple_begin (uiout, "bkpt");
annotate_field (0);
ui_out_field_int (uiout, "number", b->number);
annotate_field (1);
if (((int) b->type > (sizeof (bptypes) / sizeof (bptypes[0])))
    || ((int) b->type != bptypes[(int) b->type].type))
  internal_error ("bptypes table does not describe type #%d.",
                  (int) b->type);
ui_out_field_string (uiout, "type", bptypes[(int)b->type].description);
annotate_field (2);
ui_out_field_string (uiout, "disp", bpdisps[(int)b->disposition]);
annotate_field (3);
ui_out_field_fmt (uiout, "enabled", "%c", bpenables[(int)b->enable]);
...
This example, also from print_one_breakpoint, shows how to produce a complicated output field using the print_expression functions, which require a stream to be passed. It also shows how to automate stream destruction with cleanups. The original code was:
annotate_field (5);
print_expression (b->exp, gdb_stdout);
The new version is:
struct ui_stream *stb = ui_out_stream_new (uiout);
struct cleanup *old_chain = make_cleanup_ui_out_stream_delete (stb);
...
annotate_field (5);
print_expression (b->exp, stb->stream);
ui_out_field_stream (uiout, "what", stb);
This example, also from print_one_breakpoint, shows how to use ui_out_text and ui_out_field_string. The original code was:
annotate_field (5);
if (b->dll_pathname == NULL)
  printf_filtered ("<any library> ");
else
  printf_filtered ("library \"%s\" ", b->dll_pathname);
It became:
annotate_field (5);
if (b->dll_pathname == NULL)
  {
    ui_out_field_string (uiout, "what", "<any library>");
    ui_out_spaces (uiout, 1);
  }
else
  {
    ui_out_text (uiout, "library \"");
    ui_out_field_string (uiout, "what", b->dll_pathname);
    ui_out_text (uiout, "\" ");
  }
The following example from print_one_breakpoint shows how to use ui_out_field_int and ui_out_spaces. The original code was:
annotate_field (5);
if (b->forked_inferior_pid != 0)
  printf_filtered ("process %d ", b->forked_inferior_pid);
It became:
annotate_field (5);
if (b->forked_inferior_pid != 0)
  {
    ui_out_text (uiout, "process ");
    ui_out_field_int (uiout, "what", b->forked_inferior_pid);
    ui_out_spaces (uiout, 1);
  }
Here's an example of using ui_out_field_string. The original code was:
annotate_field (5);
if (b->exec_pathname != NULL)
  printf_filtered ("program \"%s\" ", b->exec_pathname);
It became:
annotate_field (5);
if (b->exec_pathname != NULL)
  {
    ui_out_text (uiout, "program \"");
    ui_out_field_string (uiout, "what", b->exec_pathname);
    ui_out_text (uiout, "\" ");
  }
Finally, here's an example of printing an address. The original code:
annotate_field (4);
printf_filtered ("%s ",
                 hex_string_custom ((unsigned long) b->address, 8));
It became:
annotate_field (4);
ui_out_field_core_addr (uiout, "Address", b->address);
libgdb 1.0 was an abortive project of years ago. The theory was to provide an API to gdb's functionality.

libgdb 2.0 is an ongoing effort to update gdb so that it is better able to support graphical and other environments.
Since libgdb development is on-going, its architecture is still evolving. The following components have so far been identified: the observer (gdb-events), the builder (ui-out), the event loop (event-loop), and the library itself (libgdb).
The model that ties these components together is described below.
libgdb Model

A client of libgdb interacts with the library in two ways:

as an observer (using gdb-events) receiving notifications from libgdb of any internal state changes (break point changes, run state, etc); and

as a client querying libgdb (using the ui-out builder) to obtain various status values from gdb.
Since libgdb could have multiple clients (e.g. a GUI supporting the existing gdb CLI), those clients must co-operate when controlling libgdb. In particular, a client must ensure that libgdb is idle (i.e. no other client is using libgdb) before responding to a gdb-event by making a query.
At present gdb's CLI is very much entangled with the core of libgdb. Consequently, a client wishing to include the CLI in their interface needs to carefully co-ordinate its own and the CLI's requirements.
It is suggested that the client set libgdb up to be bi-modal (alternate between CLI and client query modes). The notes below sketch out the theory:

The client registers itself as an observer of libgdb.

The client creates and installs the cli-out builder using its own versions of the ui-file gdb_stderr, gdb_stdtarg and gdb_stdout streams.

The client creates a separate custom ui-out builder that is only used while making direct queries to libgdb.
When the client receives input intended for the CLI, it simply passes it along. Since the cli-out builder is installed by default, all the CLI output in response to that command is routed (pronounced rooted) through to the client controlled gdb_stdout et al. streams. At the same time, the client is kept abreast of internal changes by virtue of being a libgdb observer.
The only restriction on the client is that it must wait until libgdb becomes idle before initiating any queries (using the client's custom builder).
libgdb components

gdb-events provides the client with a very raw mechanism that can be used to implement an observer. At present it only allows for one observer and that observer must, internally, handle the need to delay the processing of any event notifications until after libgdb has finished the current command.

ui-out provides the infrastructure necessary for a client to create a builder. That builder is then passed down to libgdb when doing any queries.
event-loop, currently non-re-entrant, provides a simple event loop. A client would need to either plug itself into this loop or implement a new event-loop that GDB would use.
The event-loop will eventually be made re-entrant. This is so that gdb can better handle the problem of some commands blocking instead of returning.
libgdb is the most obvious component of this system. It provides the query interface. Each function is parameterized by a ui-out builder. The result of the query is constructed using that builder before the query function returns.
Symbols are a key part of gdb's operation. Symbols include variables, functions, and types.
gdb reads symbols from symbol files. The usual symbol file is the file containing the program which gdb is debugging. gdb can be directed to use a different file for symbols (with the `symbol-file' command), and it can also read more symbols via the `add-file' and `load' commands, or while reading symbols from shared libraries.
Symbol files are initially opened by code in symfile.c using
the BFD library (see Support Libraries). BFD identifies the type
of the file by examining its header. find_sym_fns
then uses
this identification to locate a set of symbol-reading functions.
Symbol-reading modules identify themselves to gdb by calling add_symtab_fns during their module initialization. The argument to add_symtab_fns is a struct sym_fns which contains the name (or name prefix) of the symbol format, the length of the prefix, and pointers to four functions. These functions are called at various times to process symbol files whose identification matches the specified prefix.
The functions supplied by each module are:
xyz_symfile_init (struct sym_fns *sf)
Called from symbol_file_add when we are about to read a new symbol file. This function should clean up any internal state (possibly resulting from half-read previous files, for example) and prepare to read a new symbol file. Note that the symbol file which we are reading might be a new “main” symbol file, or might be a secondary symbol file whose symbols are being added to the existing symbol table.

The argument to xyz_symfile_init is a newly allocated struct sym_fns whose bfd field contains the BFD for the new symbol file being read. Its private field has been zeroed, and can be modified as desired. Typically, a struct of private information will be malloc'd, and a pointer to it will be placed in the private field.

There is no result from xyz_symfile_init, but it can call error if it detects an unavoidable problem.
xyz_new_init ()
Called from symbol_file_add when discarding existing symbols. This function needs only handle the symbol-reading module's internal state; the symbol table data structures visible to the rest of gdb will be discarded by symbol_file_add. It has no arguments and no result. It may be called after xyz_symfile_init, if a new symbol table is being read, or may be called alone if all symbols are simply being discarded.
xyz_symfile_read (struct sym_fns *sf, CORE_ADDR addr, int mainline)
Called from symbol_file_add to actually read the symbols from a symbol-file into a set of psymtabs or symtabs. sf points to the struct sym_fns originally passed to xyz_sym_init for possible initialization. addr is the offset between the file's specified start address and its true address in memory. mainline is 1 if this is the main symbol table being read, and 0 if a secondary symbol file (e.g. shared library or dynamically loaded file) is being read.

In addition, if a symbol-reading module creates psymtabs when xyz_symfile_read is called, these psymtabs will contain a pointer to a function xyz_psymtab_to_symtab, which can be called from any point in the gdb symbol-handling code.
xyz_psymtab_to_symtab (struct partial_symtab *pst)
Called from psymtab_to_symtab (or the PSYMTAB_TO_SYMTAB macro) if the psymtab has not already been read in and had its pst->symtab pointer set. The argument is the psymtab to be fleshed-out into a symtab. Upon return, pst->readin should have been set to 1, and pst->symtab should contain a pointer to the new corresponding symtab, or zero if there were no symbols in that part of the symbol file.
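As a heavily abridged sketch, a reader for a hypothetical `xyz' format would supply the four functions just described, roughly as below. The bodies are placeholders, and the details of filling in and registering the struct sym_fns (via add_symtab_fns) are omitted because they vary between gdb versions.

#include "defs.h"
#include "symtab.h"
#include "symfile.h"

/* Sketch of a symbol-reading module for a hypothetical "xyz" format,
   following the signatures described above.  */

static void
xyz_symfile_init (struct sym_fns *sf)
{
  /* Reset per-file reader state; allocate private data for SF if needed.  */
}

static void
xyz_new_init (void)
{
  /* Discard the reader's own cached state before reading new symbols.  */
}

static void
xyz_symfile_read (struct sym_fns *sf, CORE_ADDR addr, int mainline)
{
  /* Scan the file's debug info and build psymtabs (or full symtabs),
     relocating by ADDR; MAINLINE says whether this is the main file.  */
}

static void
xyz_psymtab_to_symtab (struct partial_symtab *pst)
{
  /* Expand PST into a full symtab, then set pst->readin and pst->symtab.  */
}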
gdb has three types of symbol tables:
This section describes partial symbol tables.
A psymtab is constructed by doing a very quick pass over an executable file's debugging information. Small amounts of information are extracted—enough to identify which parts of the symbol table will need to be re-read and fully digested later, when the user needs the information. The speed of this pass causes gdb to start up very quickly. Later, as the detailed rereading occurs, it occurs in small pieces, at various times, and the delay therefrom is mostly invisible to the user.
The symbols that show up in a file's psymtab should be, roughly, those visible to the debugger's user when the program is not running code from that file. These include external symbols and types, static symbols and types, and enum values declared at file scope.
The psymtab also contains the range of instruction addresses that the full symbol table would represent.
The idea is that there are only two ways for the user (or much of the code in the debugger) to reference a symbol:

By its address (e.g. execution stops at some address which is inside a function in this file): find_pc_function, find_pc_line, and other find_pc_... functions handle this.

By its name: lookup_symbol does most of the work here.
The only reason that psymtabs exist is to cause a symtab to be read in at the right moment. Any symbol that can be elided from a psymtab, while still causing that to happen, should not appear in it. Since psymtabs don't have the idea of scope, you can't put local symbols in them anyway. Psymtabs don't have the idea of the type of a symbol, either, so types need not appear, unless they will be referenced by name.
It is a bug for gdb to behave one way when only a psymtab has been read, and another way if the corresponding symtab has been read in. Such bugs are typically caused by a psymtab that does not contain all the visible symbols, or which has the wrong instruction address ranges.
The psymtab for a particular section of a symbol file (objfile) could be thrown away after the symtab has been read in. The symtab should always be searched before the psymtab, so the psymtab will never be used (in a bug-free environment). Currently, psymtabs are allocated on an obstack, and all the psymbols themselves are allocated in a pair of large arrays on an obstack, so there is little to be gained by trying to free them unless you want to do a lot more work.
Fundamental types (e.g., FT_VOID, FT_BOOLEAN). These are the fundamental types that gdb uses internally. Fundamental types from the various debugging formats (stabs, ELF, etc) are mapped into one of these. They are basically a union of all fundamental types that gdb knows about for all the languages that gdb knows about.
Type codes (e.g., TYPE_CODE_PTR, TYPE_CODE_ARRAY). Each time gdb builds an internal type, it marks it with one of these types. The type may be a fundamental type, such as TYPE_CODE_INT, or a derived type, such as TYPE_CODE_PTR which is a pointer to another type. Typically, several FT_* types map to one TYPE_CODE_* type, and are distinguished by other members of the type struct, such as whether the type is signed or unsigned, and how many bits it uses.
Builtin types (e.g., builtin_type_void, builtin_type_char). These are instances of type structs that roughly correspond to fundamental types and are created as global types for gdb to use for various ugly historical reasons. We eventually want to eliminate these. Note for example that builtin_type_int initialized in gdbtypes.c is basically the same as a TYPE_CODE_INT type that is initialized in c-lang.c for an FT_INTEGER fundamental type. The difference is that the builtin_type is not associated with any particular objfile, and only one instance exists, while c-lang.c builds as many TYPE_CODE_INT types as needed, with each one associated with some particular objfile.
The a.out format is the original file format for Unix. It consists of three sections: text, data, and bss, which are for program code, initialized data, and uninitialized data, respectively.
The a.out format is so simple that it doesn't have any reserved place for debugging information. (Hey, the original Unix hackers used `adb', which is a machine-language debugger!) The only debugging format for a.out is stabs, which is encoded as a set of normal symbols with distinctive attributes.

The basic a.out reader is in dbxread.c.
The COFF format was introduced with System V Release 3 (SVR3) Unix. COFF files may have multiple sections, each prefixed by a header. The number of sections is limited.
The COFF specification includes support for debugging. Although this was a step forward, the debugging information was woefully limited. For instance, it was not possible to represent code that came from an included file.
The COFF reader is in coffread.c.
ECOFF is an extended COFF originally introduced for Mips and Alpha workstations.
The basic ECOFF reader is in mipsread.c.
The IBM RS/6000 running AIX uses an object file format called XCOFF. The COFF sections, symbols, and line numbers are used, but debugging symbols are dbx-style stabs whose strings are located in the .debug section (rather than the string table). For more information, see the stabs documentation.
The shared library scheme has a clean interface for figuring out what shared libraries are in use, but the catch is that everything which refers to addresses (symbol tables and breakpoints at least) needs to be relocated for both shared libraries and the main executable. At least using the standard mechanism this can only be done once the program has been run (or the core file has been read).
Windows 95 and NT use the PE (Portable Executable) format for their executables. PE is basically COFF with additional headers.
While BFD includes special PE support, gdb needs only the basic COFF reader.
The ELF format came with System V Release 4 (SVR4) Unix. ELF is similar to COFF in being organized into a number of sections, but it removes many of COFF's limitations.
The basic ELF reader is in elfread.c.
SOM is HP's object file and debug format (not to be confused with IBM's SOM, which is a cross-language ABI).
The SOM reader is in hpread.c.
Other file formats that have been supported by gdb include Netware Loadable Modules (nlmread.c).
This section describes characteristics of debugging information that are independent of the object file format.
stabs started out as special symbols within the a.out format. Since then, it has been encapsulated into other file formats, such as COFF and ELF.
While dbxread.c does some of the basic stab processing, including for encapsulated versions, stabsread.c does the real work.
The basic COFF definition includes debugging information. The level of support is minimal and non-extensible, and is not often used.
ECOFF includes a definition of a special debug format.
The file mdebugread.c implements reading for this format.
DWARF 1 is a debugging format that was originally designed to be used with ELF in SVR4 systems.
The DWARF 1 reader is in dwarfread.c.
DWARF 2 is an improved but incompatible version of DWARF 1.
The DWARF 2 reader is in dwarf2read.c.
Like COFF, the SOM definition includes debugging information.
If you are using an existing object file format (a.out, COFF, ELF, etc), there is probably little to be done.
If you need to add a new object file format, you must first add it to BFD. This is beyond the scope of this document.
You must then arrange for the BFD code to provide access to the debugging symbols. Generally gdb will have to call swapping routines from BFD and a few other BFD internal routines to locate the debugging information. As much as possible, gdb should not depend on the BFD internal data structures.
For some targets (e.g., COFF), there is a special transfer vector used to call swapping routines, since the external data structures on various platforms have different sizes and layouts. Specialized routines that will only ever be implemented by one object file format may be called directly. This interface should be described in a file bfd/libxyz.h, which is included by gdb.
gdb's language support is mainly driven by the symbol reader, although it is possible for the user to set the source language manually.
gdb chooses the source language by looking at the extension of the file recorded in the debug info; .c means C, .f means Fortran, etc. It may also use a special-purpose language identifier if the debug format supports it, like with DWARF.
To add other languages to gdb's expression parser, take the following steps:
Create the expression parser. This should reside in a file lang-exp.y. Routines for building parsed expressions into a union exp_element list are in parse.c.
Since we can't depend upon everyone having Bison, and YACC produces parsers that define a bunch of global names, the following lines must be included at the top of the YACC parser, to prevent the various parsers from defining the same global names:
#define yyparse lang_parse
#define yylex lang_lex
#define yyerror lang_error
#define yylval lang_lval
#define yychar lang_char
#define yydebug lang_debug
#define yypact lang_pact
#define yyr1 lang_r1
#define yyr2 lang_r2
#define yydef lang_def
#define yychk lang_chk
#define yypgo lang_pgo
#define yyact lang_act
#define yyexca lang_exca
#define yyerrflag lang_errflag
#define yynerrs lang_nerrs
At the bottom of your parser, define a struct language_defn and initialize it with the right values for your language. Define an initialize_lang routine and have it call `add_language(lang_language_defn)' to tell the rest of gdb that your language exists. You'll need some other supporting variables and functions, which will be used via pointers from your lang_language_defn. See the declaration of struct language_defn in language.h, and the other *-exp.y files, for more information.
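For example, the registration itself can be as small as the following sketch; the contents of the language_defn structure are omitted, and the `xyz' names are placeholders.

#include "defs.h"
#include "language.h"

/* Defined earlier in the parser file; its initializer is omitted here.  */
extern const struct language_defn xyz_language_defn;

/* Tell the rest of gdb that the language exists.  */
void
_initialize_xyz_language (void)
{
  add_language (&xyz_language_defn);
}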
If your language needs new opcodes, add support code for them in the evaluate_subexp function defined in the file eval.c. Add cases for new opcodes in two functions from parse.c: prefixify_subexp and length_of_subexp. These compute the number of exp_elements that a given operation takes up.

Add an identifier for your language to the enumerated type enum language in defs.h.
Update the routines in language.c so your language is included. These routines include type predicates and such, which (in some cases) are language dependent. If your language does not appear in the switch statement, an error is reported.
Also included in language.c is the code that updates the variable current_language, and the routines that translate the language_lang enumerated identifier into a printable string.
Update the function _initialize_language to include your language. This function picks the default language upon startup, so is dependent upon which languages gdb is built for.
Update allocate_symtab in symfile.c and/or symbol-reading code so that the language of each symtab (source file) is set properly. This is used to determine the language to use at each stack frame level. Currently, the language is set based upon the extension of the source file. If the language can be better inferred from the symbol information, please set the language of the symtab in the symbol-reading code.
Add helper code to print_subexp (in expprint.c) to handle any new expression opcodes you have added to expression.h. Also, add the printed representations of your operators to op_print_tab.
Add a call to lang_parse() and lang_error in parse_exp_1 (defined in parse.c).
Each source file for your language should have _LANG_lang defined in it. Use #ifdefs to leave out large routines that the user won't need if he or she is not using your language.
Note that you do not need to do this in your YACC parser, since if gdb is not built for lang, then lang-exp.tab.o (the compiled form of your parser) is not linked into gdb at all.
See the file configure.in for how gdb is configured
for different languages.
Make sure that your new source files are added to the Makefile.in variables HFILES and OBJS, otherwise your code may not get linked in, or, worse yet, it may not get tarred into the distribution!
With the advent of Autoconf, it's rarely necessary to have host definition machinery anymore. The following information is provided, mainly, as an historical reference.
gdb's host configuration support normally happens via Autoconf. New host-specific definitions should not be needed. Older hosts gdb still use the host-specific definitions and files listed below, but these mostly exist for historical reasons, and will eventually disappear.
Host configuration information included a definition of XM_FILE=xm-xyz.h and possibly definitions for CC, SYSV_DEFINE, XM_CFLAGS, XM_ADD_FILES, XM_CLIBS, XM_CDEPS, etc.; see Makefile.in.
New host only configurations do not need this file.
New host and native configurations do not need this file.
Maintainer's note: Some hosts continue to use the xm-xyz.h file to define the macros HOST_FLOAT_FORMAT, HOST_DOUBLE_FORMAT and HOST_LONG_DOUBLE_FORMAT. That code also needs to be replaced with either an Autoconf or run-time test.
There are some “generic” versions of routines that can be used by various systems. These can be customized in various ways by macros defined in your xm-xyz.h file. If these routines work for the xyz host, you can just include the generic file's name (with `.o', not `.c') in XDEPFILES.
Otherwise, if your machine needs custom support routines, you will need to write routines that perform the same functions as the generic file. Put them into xyz-xdep.c, and put xyz-xdep.o into XDEPFILES.
Unix serial-line support is always included, via the makefile variable SER_HARDWIRE; override this variable in the .mh file to avoid it.
When gdb is configured and compiled, various macros are defined or left undefined, to control compilation based on the attributes of the host system. These macros and their meanings (or if the meaning is not documented here, then one of the source files where they are used is indicated) are:
INIT_FILENAME
NO_STD_REGS
SIGWINCH_HANDLER
If your host defines SIGWINCH, you can define this to be the name of a function to be called if SIGWINCH is received.

SIGWINCH_HANDLER_BODY
Define this to expand into code that will define the function named by the expansion of SIGWINCH_HANDLER.
ALIGN_STACK_ON_STARTUP
Define this if your system is of a sort that will crash in tgetent if the stack happens not to be longword-aligned when main is called. This is a rare situation, but is known to occur on several different types of systems.

CRLF_SOURCE_FILES
Define this if host files use \r\n rather than \n as a line terminator. This will cause source file listings to omit \r characters when printing and it will allow \r\n line endings of files which are “sourced” by gdb. It must be possible to open files in binary mode using O_BINARY or, for fopen, "rb".
DEFAULT_PROMPT
"(gdb) "
).
DEV_TTY
"/dev/tty"
.
FOPEN_RB
HAVE_MMAP
In some cases, use the system call mmap for reading symbol tables. For some machines this allows for sharing and quick updates.
HAVE_TERMIO
Define this if the host system has termio.h.
INT_MAX
INT_MIN
LONG_MAX
UINT_MAX
ULONG_MAX
ISATTY
LONGEST
This is the longest integer type available on the host. If not defined, it will default to long long or long, depending on CC_HAS_LONG_LONG.
CC_HAS_LONG_LONG
Define this if the host C compiler supports long long. This is set by the configure script.
PRINTF_HAS_LONG_LONG
Define this if the host can handle printing of long long integers via the printf format conversion specifier ll. This is set by the configure script.
HAVE_LONG_DOUBLE
Define this if the host C compiler supports long double. This is set by the configure script.
PRINTF_HAS_LONG_DOUBLE
Define this if the host can handle printing of long double floating-point numbers via the printf format conversion specifier Lg. This is set by the configure script.
SCANF_HAS_LONG_DOUBLE
Define this if the host can handle the parsing of long double floating-point numbers via the scanf format conversion specifier Lg. This is set by the configure script.
LSEEK_NOT_LINEAR
Define this if lseek (n) does not necessarily move to byte number n in the file. This is only used when reading source files. It is normally faster to define CRLF_SOURCE_FILES when possible.
L_SET
This macro is used as the argument to lseek (or, most commonly, bfd_seek). FIXME, should be replaced by SEEK_SET instead, which is the POSIX equivalent.
NORETURN
If defined, this should be one or more tokens, such as volatile, that can be used in both the declaration and definition of functions to indicate that they never return. The default is already set correctly if compiling with GCC. This will almost never need to be defined.
ATTR_NORETURN
If defined, this should be one or more tokens, such as __attribute__ ((noreturn)), that can be used in the declarations of functions to indicate that they never return. The default is already set correctly if compiling with GCC. This will almost never need to be defined.
SEEK_CUR
SEEK_SET
Define these to appropriate value for the system lseek, if not already defined.
STOP_SIGNAL
This is the signal for stopping gdb. Defaults to SIGTSTP. (Only redefined for the Convex.)
USG
lint
Define this to help placate lint in some situations.
volatile
Define this to override the defaults of __volatile__ or /**/.
gdb's target architecture defines what sort of machine-language programs gdb can work with, and how it works with them.
The target architecture object is implemented as the C structure
struct gdbarch *
. The structure, and its methods, are generated
using the Bourne shell script gdbarch.sh.
gdb provides a mechanism for handling variations in OS ABIs. An OS ABI variant may have influence over any number of variables in the target architecture definition. There are two major components in the OS ABI mechanism: sniffers and handlers.
A sniffer examines a file matching a BFD architecture/flavour pair
(the architecture may be wildcarded) in an attempt to determine the
OS ABI of that file. Sniffers with a wildcarded architecture are considered
to be generic, while sniffers for a specific architecture are
considered to be specific. A match from a specific sniffer
overrides a match from a generic sniffer. Multiple sniffers for an
architecture/flavour may exist, in order to differentiate between two
different operating systems which use the same basic file format. The
OS ABI framework provides a generic sniffer for ELF-format files which
examines the EI_OSABI
field of the ELF header, as well as note
sections known to be used by several operating systems.
A handler is used to fine-tune the gdbarch
structure for the
selected OS ABI. There may be only one handler for a given OS ABI
for each BFD architecture.
The following OS ABI variants are defined in osabi.h:
GDB_OSABI_UNKNOWN
gdbarch
settings for the architecture will be used.
GDB_OSABI_SVR4
GDB_OSABI_HURD
GDB_OSABI_SOLARIS
GDB_OSABI_OSF1
GDB_OSABI_LINUX
GDB_OSABI_FREEBSD_AOUT
GDB_OSABI_FREEBSD_ELF
GDB_OSABI_NETBSD_AOUT
GDB_OSABI_NETBSD_ELF
GDB_OSABI_WINCE
GDB_OSABI_GO32
GDB_OSABI_NETWARE
GDB_OSABI_ARM_EABI_V1
GDB_OSABI_ARM_EABI_V2
GDB_OSABI_ARM_APCS
Here are the functions that make up the OS ABI framework:
Return the name of the OS ABI corresponding to osabi.
Register the OS ABI handler specified by init_osabi for the architecture, machine type and OS ABI specified by arch, machine and osabi. In most cases, a value of zero for the machine type, which implies the architecture's default machine type, will suffice.
Register the OS ABI file sniffer specified by sniffer for the BFD architecture/flavour pair specified by arch and flavour. If arch is
bfd_arch_unknown
, the sniffer is considered to be generic, and is allowed to examine flavour-flavoured files for any architecture.
Examine the file described by abfd to determine its OS ABI. The value
GDB_OSABI_UNKNOWN
is returned if the OS ABI cannot be determined.
Invoke the OS ABI handler corresponding to osabi to fine-tune the
gdbarch
structure specified by gdbarch. If a handler corresponding to osabi has not been registered for gdbarch's architecture, a warning will be issued and the debugging session will continue with the defaults already established for gdbarch.
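For illustration, here is a minimal sketch of how an architecture's _initialize_arch_tdep routine might hook into this framework. It assumes the registration function declared in osabi.h (gdbarch_register_osabi); xyz_linux_init_abi is a hypothetical handler, and bfd_arch_arm is used purely as a stand-in for the port's real BFD architecture constant.

static void
xyz_linux_init_abi (struct gdbarch_info info, struct gdbarch *gdbarch)
{
  /* Fine-tune whatever the OS ABI influences: shared library handling,
     signal trampoline unwinding, link map offsets, and so on.  */
}

void
_initialize_xyz_tdep (void)
{
  /* Rely on the generic ELF sniffer to recognize GNU/Linux binaries,
     and register a handler that adjusts the gdbarch for that OS ABI.  */
  gdbarch_register_osabi (bfd_arch_arm, 0, GDB_OSABI_LINUX,
                          xyz_linux_init_abi);
}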
gdb's model of the target machine is rather simple. gdb assumes the machine includes a bank of registers and a block of memory. Each register may have a different size.
gdb does not have a magical way to match up with the
compiler's idea of which registers are which; however, it is critical
that they do match up accurately. The only way to make this work is
to get accurate information about the order that the compiler uses,
and to reflect that in the REGISTER_NAME
and related macros.
gdb can handle big-endian, little-endian, and bi-endian architectures.
On almost all 32-bit architectures, the representation of a pointer is indistinguishable from the representation of some fixed-length number whose value is the byte address of the object pointed to. On such machines, the words “pointer” and “address” can be used interchangeably. However, architectures with smaller word sizes are often cramped for address space, so they may choose a pointer representation that breaks this identity, and allows a larger code address space.
For example, the Renesas D10V is a 16-bit VLIW processor whose instructions are 32 bits long. If the D10V used ordinary byte addresses to refer to code locations, then the processor would only be able to address 64kb of instructions. However, since instructions must be aligned on four-byte boundaries, the low two bits of any valid instruction's byte address are always zero—byte addresses waste two bits. So instead of byte addresses, the D10V uses word addresses—byte addresses shifted right two bits—to refer to code. Thus, the D10V can use 16-bit words to address 256kb of code space.
However, this means that code pointers and data pointers have different
forms on the D10V. The 16-bit word 0xC020
refers to byte address
0xC020
when used as a data address, but refers to byte address
0x30080
when used as a code address.
(The D10V also uses separate code and data address spaces, which also affects the correspondence between pointers and addresses, but we're going to ignore that here; this example is already too long.)
To cope with architectures like this—the D10V is not the only
one!—gdb tries to distinguish between addresses, which are
byte numbers, and pointers, which are the target's representation
of an address of a particular type of data. In the example above,
0xC020
is the pointer, which refers to one of the addresses
0xC020
or 0x30080
, depending on the type imposed upon it.
gdb provides functions for turning a pointer into an address
and vice versa, in the appropriate way for the current architecture.
Unfortunately, since addresses and pointers are identical on almost all processors, this distinction tends to bit-rot pretty quickly. Thus, each time you port gdb to an architecture which does distinguish between pointers and addresses, you'll probably need to clean up some architecture-independent code.
Here are functions which convert between pointers and addresses:
Treat the bytes at buf as a pointer or reference of type type, and return the address it represents, in a manner appropriate for the current architecture. This yields an address gdb can use to read target memory, disassemble, etc. Note that buf refers to a buffer in gdb's memory, not the inferior's.
For example, if the current architecture is the Intel x86, this function extracts a little-endian integer of the appropriate length from buf and returns it. However, if the current architecture is the D10V, this function will return a 16-bit integer extracted from buf, multiplied by four if type is a pointer to a function.
If type is not a pointer or reference type, then this function will signal an internal error.
Store the address addr in buf, in the proper format for a pointer of type type in the current architecture. Note that buf refers to a buffer in gdb's memory, not the inferior's.
For example, if the current architecture is the Intel x86, this function stores addr unmodified as a little-endian integer of the appropriate length in buf. However, if the current architecture is the D10V, this function divides addr by four if type is a pointer to a function, and then stores it in buf.
If type is not a pointer or reference type, then this function will signal an internal error.
Assuming that val is a pointer, return the address it represents, as appropriate for the current architecture.
This function actually works on integral values, as well as pointers. For pointers, it performs architecture-specific conversions as described above for
extract_typed_address
.
Create and return a value representing a pointer of type type to the address addr, as appropriate for the current architecture. This function performs architecture-specific conversions as described above for
store_typed_address
.
Here are some macros which architectures can define to indicate the relationship between pointers and addresses. These have default definitions, appropriate for architectures on which all pointers are simple unsigned byte addresses.
Assume that buf holds a pointer of type type, in the appropriate format for the current architecture. Return the byte address the pointer refers to.
This function may safely assume that type is either a pointer or a C++ reference type.
Store in buf a pointer of type type representing the address addr, in the appropriate format for the current architecture.
This function may safely assume that type is either a pointer or a C++ reference type.
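As a sketch of what a port with D10V-like code pointers might supply for these conversions, the following hypothetical xyz functions translate between 16-bit code pointers (word addresses) and byte addresses. The function names and the two-bit shift are assumptions for illustration.

static CORE_ADDR
xyz_pointer_to_address (struct type *type, const void *buf)
{
  CORE_ADDR addr = extract_unsigned_integer (buf, TYPE_LENGTH (type));

  /* Code pointers are word addresses: shift them back into byte
     addresses before handing them to the rest of gdb.  */
  if (TYPE_CODE (TYPE_TARGET_TYPE (type)) == TYPE_CODE_FUNC)
    addr <<= 2;
  return addr;
}

static void
xyz_address_to_pointer (struct type *type, void *buf, CORE_ADDR addr)
{
  /* The inverse: store a code address as a word address.  */
  if (TYPE_CODE (TYPE_TARGET_TYPE (type)) == TYPE_CODE_FUNC)
    addr >>= 2;
  store_unsigned_integer (buf, TYPE_LENGTH (type), addr);
}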
Sometimes information about different kinds of addresses is available
via the debug information. For example, some programming environments
define addresses of several different sizes. If the debug information
distinguishes these kinds of address classes through either the size
info (e.g, DW_AT_byte_size
in DWARF 2) or through an explicit
address class attribute (e.g, DW_AT_address_class
in DWARF 2), the
following macros should be defined in order to disambiguate these
types within gdb as well as provide the added information to
a gdb user when printing type expressions.
Returns the type flags needed to construct a pointer type whose size is byte_size and whose address class is dwarf2_addr_class. This function is normally called from within a symbol reader. See dwarf2read.c.
Given the type flags representing an address class qualifier, return its name.
Given an address qualifier name, set the int referenced by type_flags_ptr to the type flags for that address class qualifier.
Since the need for address classes is rather rare, none of the address class macros are defined by default. Predicate macros are provided to detect when they are defined.
Consider a hypothetical architecture in which addresses are normally
32-bits wide, but 16-bit addresses are also supported. Furthermore,
suppose that the DWARF 2 information for this architecture simply
uses a DW_AT_byte_size
value of 2 to indicate the use of one
of these "short" pointers. The following functions could be defined
to implement the address class macros:
static int
somearch_address_class_type_flags (int byte_size, int dwarf2_addr_class)
{
  if (byte_size == 2)
    return TYPE_FLAG_ADDRESS_CLASS_1;
  else
    return 0;
}

static char *
somearch_address_class_type_flags_to_name (int type_flags)
{
  if (type_flags & TYPE_FLAG_ADDRESS_CLASS_1)
    return "short";
  else
    return NULL;
}

int
somearch_address_class_name_to_type_flags (char *name, int *type_flags_ptr)
{
  if (strcmp (name, "short") == 0)
    {
      *type_flags_ptr = TYPE_FLAG_ADDRESS_CLASS_1;
      return 1;
    }
  else
    return 0;
}
The qualifier @short
is used in gdb's type expressions
to indicate the presence of one of these "short" pointers. E.g, if
the debug information indicates that short_ptr_var
is one of these
short pointers, gdb might show the following behavior:
(gdb) ptype short_ptr_var
type = int * @short
Maintainer note: This section is pretty much obsolete. The functionality described here has largely been replaced by pseudo-registers and the mechanisms described in Using Different Register and Memory Data Representations. See also Bug Tracking Database and ARI Index for more up-to-date information.
Some architectures use one representation for a value when it lives in a
register, but use a different representation when it lives in memory.
In gdb's terminology, the raw representation is the one used in
the target registers, and the virtual representation is the one
used in memory, and within gdb struct value
objects.
Maintainer note: Notice that the same mechanism is being used to
both convert a register to a struct value
and alternative
register forms.
For almost all data types on almost all architectures, the virtual and raw representations are identical, and no special handling is needed. However, they do occasionally differ. For example:
The x86 architecture supports an 80-bit long double type. However, when we store those values in memory, they occupy twelve bytes: the floating-point number occupies the first ten, and the final two bytes are unused. This keeps the values aligned on four-byte boundaries, allowing more efficient access. Thus, the x86 80-bit floating-point type is the raw representation, and the twelve-byte loosely-packed arrangement is the virtual representation.
In general, the raw representation is determined by the architecture, or
gdb's interface to the architecture, while the virtual representation
can be chosen for gdb's convenience. gdb's register file,
registers
, holds the register contents in raw format, and the
gdb remote protocol transmits register values in raw format.
Your architecture may define the following macros to request conversions between the raw and virtual format:
Return non-zero if register number reg's value needs different raw and virtual formats.
You should not use
REGISTER_CONVERT_TO_VIRTUAL
for a register unless this macro returns a non-zero value for that register.
The size of register number reg's raw value. This is the number of bytes the register will occupy in
registers
, or in a gdb remote protocol packet.
The size of register number reg's value, in its virtual format. This is the size a
struct value
's buffer will have, holding that register's value.
This is the type of the virtual representation of register number reg. Note that there is no need for a macro giving a type for the register's raw form; once the register's value has been obtained, gdb always uses the virtual form.
Convert the value of register number reg to type, which should always be DEPRECATED_REGISTER_VIRTUAL_TYPE (reg). The buffer at from holds the register's value in raw format; the macro should convert the value to virtual format, and place it at to.

Note that REGISTER_CONVERT_TO_VIRTUAL and REGISTER_CONVERT_TO_RAW take their reg and type arguments in different orders.

You should only use REGISTER_CONVERT_TO_VIRTUAL with registers for which the REGISTER_CONVERTIBLE macro returns a non-zero value.
Convert the value of register number reg to type, which should always be DEPRECATED_REGISTER_VIRTUAL_TYPE (reg). The buffer at from holds the register's value in virtual format; the macro should convert the value to raw format, and place it at to.

Note that REGISTER_CONVERT_TO_VIRTUAL and REGISTER_CONVERT_TO_RAW take their reg and type arguments in different orders.
Maintainer's note: The way GDB manipulates registers is undergoing significant change. Many of the macros and functions referred to in this section are likely to be subject to further revision. See A.R. Index and Bug Tracking Database for further information. cagney/2002-05-06.
Some architectures can represent a data object in a register using a form that is different from the object's normal memory representation. For example:
The x86 architecture's long double data type occupies 96 bits in memory but only 80 bits when stored in a register.
In general, the register representation of a data type is determined by the architecture, or gdb's interface to the architecture, while the memory representation is determined by the Application Binary Interface.
For almost all data types on almost all architectures, the two representations are identical, and no special handling is needed. However, they do occasionally differ. Your architecture may define the following macros to request conversions between the register and memory representations of a data type:
Return non-zero if the representation of a data value stored in this register may differ from the representation of that same data value when stored in memory.
When non-zero, the macros REGISTER_TO_VALUE and VALUE_TO_REGISTER are used to perform any necessary conversion.
Convert the value of register number reg to a data object of type type. The buffer at from holds the register's value in raw format; the converted value should be placed in the buffer at to.
Note that REGISTER_TO_VALUE and VALUE_TO_REGISTER take their reg and type arguments in different orders.

You should only use REGISTER_TO_VALUE with registers for which the CONVERT_REGISTER_P macro returns a non-zero value.
Convert a data value of type type to register number reg's raw format.
Note that REGISTER_TO_VALUE and VALUE_TO_REGISTER take their reg and type arguments in different orders.

You should only use VALUE_TO_REGISTER with registers for which the CONVERT_REGISTER_P macro returns a non-zero value.
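A minimal sketch of the functions such macros might expand to, for a hypothetical xyz architecture whose floating-point registers hold 80-bit values while the memory representation is padded to 96 bits. The register numbering, the 10-byte copy, and the little-endian layout are all assumptions for illustration.

#include <string.h>

#define XYZ_FP0_REGNUM 16	/* hypothetical first FP register */

static int
xyz_convert_register_p (int regnum)
{
  /* Only the eight floating-point registers need conversion.  */
  return (regnum >= XYZ_FP0_REGNUM && regnum < XYZ_FP0_REGNUM + 8);
}

static void
xyz_register_to_value (int regnum, struct type *type, char *from, char *to)
{
  memset (to, 0, TYPE_LENGTH (type));	/* clear the padding bytes */
  memcpy (to, from, 10);		/* copy the 80-bit register value */
}

static void
xyz_value_to_register (struct type *type, int regnum, char *from, char *to)
{
  memcpy (to, from, 10);		/* only the 80-bit part is significant */
}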
See mips-tdep.c. It does not do what you want.
This section describes the macros that you can use to define the target machine.
ADDR_BITS_REMOVE (addr)
For example, the two low-order bits of the PC on the Hewlett-Packard PA
2.0 architecture contain the privilege level of the corresponding
instruction. Since instructions must always be aligned on four-byte
boundaries, the processor masks out these bits to generate the actual
address of the instruction. ADDR_BITS_REMOVE should filter out these
bits with an expression such as ((addr) & ~3)
.
ADDRESS_CLASS_NAME_TO_TYPE_FLAGS (
name,
type_flags_ptr)
int
referenced by type_flags_ptr to the mask representing the qualifier
and return 1. If name is not a valid address class qualifier name,
return 0.
The value for type_flags_ptr should be one of
TYPE_FLAG_ADDRESS_CLASS_1
, TYPE_FLAG_ADDRESS_CLASS_2
, or
possibly some combination of these values or'd together.
See Address Classes.
ADDRESS_CLASS_NAME_TO_TYPE_FLAGS_P ()
ADDRESS_CLASS_NAME_TO_TYPE_FLAGS
has been defined.
ADDRESS_CLASS_TYPE_FLAGS (
byte_size,
dwarf2_addr_class)
DW_AT_address_class
value, return the type flags
used by gdb to represent this address class. The value
returned should be one of TYPE_FLAG_ADDRESS_CLASS_1
,
TYPE_FLAG_ADDRESS_CLASS_2
, or possibly some combination of these
values or'd together.
See Address Classes.
ADDRESS_CLASS_TYPE_FLAGS_P ()
ADDRESS_CLASS_TYPE_FLAGS
has
been defined.
ADDRESS_CLASS_TYPE_FLAGS_TO_NAME (
type_flags)
ADDRESS_CLASS_TYPE_FLAGS_TO_NAME_P ()
ADDRESS_CLASS_TYPE_FLAGS_TO_NAME
has
been defined.
See Address Classes.
ADDRESS_TO_POINTER (
type,
buf,
addr)
BELIEVE_PCC_PROMOTION
short
or char
parameter to an int
, but still reports the parameter as its
original type, rather than the promoted type.
BITS_BIG_ENDIAN
BREAKPOINT
BREAKPOINT
has been deprecated in favor of
BREAKPOINT_FROM_PC
.
BIG_BREAKPOINT
LITTLE_BREAKPOINT
BIG_BREAKPOINT
and LITTLE_BREAKPOINT
have been deprecated in
favor of BREAKPOINT_FROM_PC
.
DEPRECATED_REMOTE_BREAKPOINT
DEPRECATED_LITTLE_REMOTE_BREAKPOINT
DEPRECATED_BIG_REMOTE_BREAKPOINT
DEPRECATED_REMOTE_BREAKPOINT
,
DEPRECATED_BIG_REMOTE_BREAKPOINT
and
DEPRECATED_LITTLE_REMOTE_BREAKPOINT
have been deprecated in
favor of BREAKPOINT_FROM_PC
(see BREAKPOINT_FROM_PC).
BREAKPOINT_FROM_PC (
pcptr,
lenptr)
Use the program counter to determine the contents and size of a breakpoint instruction. It returns a pointer to a string of bytes that encode a breakpoint instruction, stores the length of the string to *lenptr, and adjusts the program counter (if necessary) to point to the actual memory location where the breakpoint should be inserted.
Although it is common to use a trap instruction for a breakpoint, it's not required; for instance, the bit pattern could be an invalid instruction. The breakpoint must be no longer than the shortest instruction of the architecture.
Replaces all the other BREAKPOINT macros.
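A minimal sketch of such a function for a hypothetical bi-endian xyz architecture with a fixed-length, four-byte trap instruction; the opcode bytes shown are made up for illustration.

static const unsigned char *
xyz_breakpoint_from_pc (CORE_ADDR *pcptr, int *lenptr)
{
  static const unsigned char big_breakpoint[] = { 0x7d, 0x82, 0x10, 0x08 };
  static const unsigned char little_breakpoint[] = { 0x08, 0x10, 0x82, 0x7d };

  /* All instructions are four bytes and four-byte aligned, so no
     adjustment of *pcptr is needed.  */
  *lenptr = sizeof big_breakpoint;
  if (TARGET_BYTE_ORDER == BFD_ENDIAN_BIG)
    return big_breakpoint;
  return little_breakpoint;
}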
MEMORY_INSERT_BREAKPOINT (
addr,
contents_cache)
MEMORY_REMOVE_BREAKPOINT (
addr,
contents_cache)
default_memory_insert_breakpoint
and
default_memory_remove_breakpoint
respectively) have been
provided so that it is not necessary to define these for most
architectures. Architectures which may want to define
MEMORY_INSERT_BREAKPOINT
and MEMORY_REMOVE_BREAKPOINT
will
likely have instructions that are oddly sized or are not stored in a
conventional manner.
It may also be desirable (from an efficiency standpoint) to define
custom breakpoint insertion and removal routines if
BREAKPOINT_FROM_PC
needs to read the target's memory for some
reason.
ADJUST_BREAKPOINT_ADDRESS (
address)
The FR-V target (see frv-tdep.c) requires this method. The FR-V is a VLIW architecture in which a number of RISC-like instructions are grouped (packed) together into an aggregate instruction or instruction bundle. When the processor executes one of these bundles, the component instructions are executed in parallel.
In the course of optimization, the compiler may group instructions from distinct source statements into the same bundle. The line number information associated with one of the latter statements will likely refer to some instruction other than the first one in the bundle. So, if the user attempts to place a breakpoint on one of these latter statements, gdb must be careful to not place the break instruction on any instruction other than the first one in the bundle. (Remember though that the instructions within a bundle execute in parallel, so the first instruction is the instruction at the lowest address and has nothing to do with execution order.)
The FR-V's ADJUST_BREAKPOINT_ADDRESS
method will adjust a
breakpoint's address by scanning backwards for the beginning of
the bundle, returning the address of the bundle.
Since the adjustment of a breakpoint may significantly alter a user's expectation, gdb prints a warning when an adjusted breakpoint is initially set and each time that breakpoint is hit.
CALL_DUMMY_LOCATION
This method has been replaced by push_dummy_code
(see push_dummy_code).
CANNOT_FETCH_REGISTER (
regno)
FETCH_INFERIOR_REGISTERS
is not defined.
CANNOT_STORE_REGISTER (
regno)
int CONVERT_REGISTER_P(
regnum)
DECR_PC_AFTER_BREAK
BREAKPOINT
, though not always. For most targets this value will be 0.
DISABLE_UNSETTABLE_BREAK (
addr)
PRINT_FLOAT_INFO()
print_registers_info (
gdbarch,
frame,
regnum,
all)
The default method prints one register per line, and if all is
zero omits floating-point registers.
PRINT_VECTOR_INFO()
By default, the `info vector' command will print all vector
registers (the register's type having the vector attribute).
DWARF_REG_TO_REGNUM
DWARF2_REG_TO_REGNUM
ECOFF_REG_TO_REGNUM
END_OF_TEXT_DEFAULT
EXTRACT_RETURN_VALUE(
type,
regbuf,
valbuf)
This method has been deprecated in favour of gdbarch_return_value
(see gdbarch_return_value).
DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS(
regbuf)
CORE_ADDR
at which a function should return
its structure value.
See gdbarch_return_value.
DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS_P()
DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS
.
DEPRECATED_FP_REGNUM
This should only need to be defined if DEPRECATED_TARGET_READ_FP
is not defined.
DEPRECATED_FRAMELESS_FUNCTION_INVOCATION(
fi)
frame_align (
address)
This function is used to ensure that, when creating a dummy frame, both the initial stack pointer and (if needed) the address of the return value are correctly aligned.
Unlike DEPRECATED_STACK_ALIGN
, this function always adjusts the
address in the direction of stack growth.
By default, no frame based stack alignment is performed.
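For example, an architecture with a downward-growing stack that must keep the stack pointer 16-byte aligned might supply a frame_align implementation along these lines (a sketch; the alignment value is an assumption).

static CORE_ADDR
xyz_frame_align (struct gdbarch *gdbarch, CORE_ADDR sp)
{
  /* Round down, i.e. in the direction of stack growth.  */
  return sp & ~(CORE_ADDR) 15;
}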
int frame_red_zone_size
When performing an inferior function call, to ensure that it does not modify this area, gdb adjusts the innermost-stack-address by frame_red_zone_size bytes before pushing parameters onto the stack.
By default, zero bytes are allocated. The value must be aligned (see frame_align).
The amd64 (née x86-64) ABI documentation refers to the red zone when describing this scratch area.
DEPRECATED_FRAME_CHAIN(
frame)
DEPRECATED_FRAME_CHAIN_VALID(
chain,
thisframe)
NULL
chain pointers, dummy frames, and frames whose PC values are inside the
startup file (e.g. crt0.o), inside main
, or inside
_start
.
DEPRECATED_FRAME_INIT_SAVED_REGS(
frame)
frame->saved_regs
. Space for
frame->saved_regs
shall be allocated by
DEPRECATED_FRAME_INIT_SAVED_REGS
using
frame_saved_regs_zalloc
.
FRAME_FIND_SAVED_REGS
is deprecated.
FRAME_NUM_ARGS (
fi)
-1
.
DEPRECATED_FRAME_SAVED_PC(
frame)
This method is deprecated. See unwind_pc.
CORE_ADDR unwind_pc (struct frame_info *
this_frame)
The implementation, which must be frame agnostic (work with any frame), is typically no more than:
ULONGEST pc;
frame_unwind_unsigned_register (this_frame, D10V_PC_REGNUM, &pc);
return d10v_make_iaddr (pc);
See DEPRECATED_FRAME_SAVED_PC, which this method replaces.
CORE_ADDR unwind_sp (struct frame_info *
this_frame)
The implementation, which must be frame agnostic (work with any frame), is typically no more than:
ULONGEST sp;
frame_unwind_unsigned_register (this_frame, D10V_SP_REGNUM, &sp);
return d10v_make_daddr (sp);
See TARGET_READ_SP, which this method replaces.
FUNCTION_EPILOGUE_SIZE
x_sym.x_misc.x_fsize
field of the
function end symbol is 0. For such targets, you must define
FUNCTION_EPILOGUE_SIZE
to expand into the standard size of a
function's epilogue.
DEPRECATED_FUNCTION_START_OFFSET
This is zero on almost all machines: the function's address is usually
the address of its first instruction. However, on the VAX, for
example, each function starts with two bytes containing a bitmask
indicating which registers to save upon entry to the function. The
VAX call
instructions check this value, and save the
appropriate registers automatically. Thus, since the offset from the
function's address to its first instruction is two bytes,
DEPRECATED_FUNCTION_START_OFFSET
would be 2 on the VAX.
GCC_COMPILED_FLAG_SYMBOL
GCC2_COMPILED_FLAG_SYMBOL
gcc_compiled.
and gcc2_compiled.
,
respectively. (Currently only defined for the Delta 68.)
_MULTI_ARCH
This support can be enabled at two levels. At level one, only
definitions for previously undefined macros are provided; at level two,
a multi-arch definition of all architecture dependent macros will be
defined.
_TARGET_IS_HPPA
GET_LONGJMP_TARGET
This macro determines the target PC address that longjmp
will jump to,
assuming that we have just stopped at a longjmp
breakpoint. It takes a
CORE_ADDR *
as argument, and stores the target PC value through this
pointer. It examines the current state of the machine as needed.
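A sketch of such a function for a hypothetical xyz target where the jmp_buf address arrives in register XYZ_A0_REGNUM and the saved PC lives at a fixed offset within it; both names, the offset, and the 4-byte pointer size are assumptions.

#define XYZ_A0_REGNUM 4		/* hypothetical first argument register */
#define XYZ_JB_PC_OFFSET 12	/* hypothetical offset of the PC in jmp_buf */

static int
xyz_get_longjmp_target (CORE_ADDR *pc)
{
  char buf[4];
  CORE_ADDR jb_addr = read_register (XYZ_A0_REGNUM);

  if (target_read_memory (jb_addr + XYZ_JB_PC_OFFSET, buf, 4))
    return 0;			/* could not read the jmp_buf */
  *pc = extract_unsigned_integer (buf, 4);
  return 1;
}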
DEPRECATED_GET_SAVED_REGISTER
DEPRECATED_GET_SAVED_REGISTER
.
DEPRECATED_IBM6000_TARGET
I386_USE_GENERIC_WATCHPOINTS
SYMBOLS_CAN_START_WITH_DOLLAR
On HP-UX, certain system routines (millicode) have names beginning with
`$' or `$$'. For example, $$dyncall
is a millicode
routine that handles inter-space procedure calls on PA-RISC.
DEPRECATED_INIT_EXTRA_FRAME_INFO (
fromleaf,
frame)
frame->extra_info
. Space for frame->extra_info
is allocated using frame_extra_info_zalloc
.
DEPRECATED_INIT_FRAME_PC (
fromleaf,
prev)
INNER_THAN (
lhs,
rhs)
lhs < rhs if the target's stack grows downward in memory, or lhs > rhs if the stack grows upward.
gdbarch_in_function_epilogue_p (
gdbarch,
pc)
DEPRECATED_SIGTRAMP_START (
pc)
DEPRECATED_SIGTRAMP_END (
pc)
sigtramp
for the
given pc. On machines where the address is just a compile time
constant, the macro expansion will typically just ignore the supplied
pc.
IN_SOLIB_CALL_TRAMPOLINE (
pc,
name)
IN_SOLIB_RETURN_TRAMPOLINE (
pc,
name)
IN_SOLIB_DYNSYM_RESOLVE_CODE (
pc)
SKIP_SOLIB_RESOLVER (
pc)
INTEGER_TO_ADDRESS (
type,
buf)
Pragmatics: When the user copies a well defined expression from
their source code and passes it, as a parameter, to gdb's
print
command, they should get the same value as would have been
computed by the target program. Any deviation from this rule can cause
major confusion and annoyance, and needs to be justified carefully. In
other words, gdb doesn't really have the freedom to do these
conversions in clever and useful ways. It has, however, been pointed
out that users aren't complaining about how gdb casts integers
to pointers; they are complaining that they can't take an address from a
disassembly listing and give it to x/i
. Adding an architecture
method like INTEGER_TO_ADDRESS
certainly makes it possible for
gdb to “get it right” in all circumstances.
NO_HIF_SUPPORT
POINTER_TO_ADDRESS (
type,
buf)
REGISTER_CONVERTIBLE (
reg)
REGISTER_TO_VALUE(
regnum,
type,
from,
to)
DEPRECATED_REGISTER_RAW_SIZE (
reg)
register_reggroup_p (
gdbarch,
regnum,
reggroup)
By default, registers are grouped as follows:
float_reggroup
vector_reggroup
general_reggroup
save_reggroup
restore_reggroup
all_reggroup
DEPRECATED_REGISTER_VIRTUAL_SIZE (
reg)
DEPRECATED_REGISTER_VIRTUAL_TYPE (
reg)
struct type *register_type (
gdbarch,
reg)
DEPRECATED_REGISTER_VIRTUAL_TYPE
. See Raw and Virtual Register Representations.
REGISTER_CONVERT_TO_VIRTUAL(
reg,
type,
from,
to)
REGISTER_CONVERT_TO_RAW(
type,
reg,
from,
to)
const struct regset *regset_from_core_section (struct gdbarch *
gdbarch, const char *
sect_name, size_t
sect_size)
SOFTWARE_SINGLE_STEP_P()
SOFTWARE_SINGLE_STEP
must also be defined.
SOFTWARE_SINGLE_STEP(signal, insert_breakpoints_p)
SOFUN_ADDRESS_MAYBE_MISSING
SOFUN_ADDRESS_MAYBE_MISSING
indicates that a particular set of
hacks of this sort are in use, affecting N_SO
and N_FUN
entries in stabs-format debugging information. N_SO
stabs mark
the beginning and ending addresses of compilation units in the text
segment. N_FUN
stabs mark the starts and ends of functions.
SOFUN_ADDRESS_MAYBE_MISSING
means two things:
N_FUN
stabs have an address of zero. Instead, you should find the
addresses where the function starts by taking the function name from
the stab, and then looking that up in the minsyms (the
linker/assembler symbol table). In other words, the stab has the
name, and the linker/assembler symbol table is the only place that carries
the address.
N_SO
stabs have an address of zero, too. You just look at the
N_FUN
stabs that appear before and after the N_SO
stab,
and guess the starting and ending addresses of the compilation unit from
them.
PC_LOAD_SEGMENT
PC_REGNUM
This should only need to be defined if TARGET_READ_PC
and
TARGET_WRITE_PC
are not defined.
PARM_BOUNDARY
stabs_argument_has_addr (
gdbarch,
type)
This method replaces DEPRECATED_REG_STRUCT_HAS_ADDR
(see DEPRECATED_REG_STRUCT_HAS_ADDR).
PROCESS_LINENUMBER_HOOK
PROLOGUE_FIRSTLINE_OVERLAP
PS_REGNUM
DEPRECATED_POP_FRAME
If defined, used by frame_pop to remove a stack frame. This method has been superseded by generic code.
push_dummy_call (
gdbarch,
function,
regcache,
pc_addr,
nargs,
args,
sp,
struct_return,
struct_addr)
function is a pointer to a struct value
; on architectures that use
function descriptors, this contains the function descriptor value.
Returns the updated top-of-stack pointer.
This method replaces DEPRECATED_PUSH_ARGUMENTS
.
CORE_ADDR push_dummy_code (
gdbarch,
sp,
funaddr,
using_gcc,
args,
nargs,
value_type,
real_pc,
bp_addr)
Set bp_addr to the address at which the breakpoint instruction should be inserted, real_pc to the resume address when starting the call sequence, and return the updated inner-most stack address.
By default, the stack is grown sufficiently to hold a frame-aligned (see frame_align) breakpoint, bp_addr is set to the address reserved for that breakpoint, and real_pc is set to funaddr.
This method replaces CALL_DUMMY_LOCATION
,
DEPRECATED_REGISTER_SIZE
.
REGISTER_NAME(i)
Return the name of register i as a string. May return NULL or NUL to indicate that register i is not valid.
DEPRECATED_REG_STRUCT_HAS_ADDR (
gcc_p,
type)
This method has been replaced by stabs_argument_has_addr
(see stabs_argument_has_addr).
SAVE_DUMMY_FRAME_TOS (
sp)
SP
after both the dummy frame and space for parameters/results have been
allocated on the stack. See unwind_dummy_id.
SDB_REG_TO_REGNUM
enum return_value_convention gdbarch_return_value (struct gdbarch *
gdbarch, struct type *
valtype, struct regcache *
regcache, void *
readbuf, const void *
writebuf)
gdb currently recognizes two function return-value conventions:
RETURN_VALUE_REGISTER_CONVENTION
where the return value is found
in registers; and RETURN_VALUE_STRUCT_CONVENTION
where the return
value is found in memory and the address of that memory location is
passed in as the function's first parameter.
If the register convention is being used, and writebuf is
non-NULL
, also copy the return-value in writebuf into
regcache.
If the register convention is being used, and readbuf is
non-NULL
, also copy the return value from regcache into
readbuf (regcache contains a copy of the registers from the
just returned function).
See DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS, for a description of how return-values that use the struct convention are handled.
Maintainer note: This method replaces separate predicate, extract, store methods. By having only one method, the logic needed to determine the return-value convention need only be implemented in one place. If gdb were written in an oo language, this method would instead return an object that knew how to perform the register return-value extract and store.
Maintainer note: This method does not take a gcc_p
parameter, and such a parameter should not be added. If an architecture
that requires per-compiler or per-function information be identified,
then the replacement of rettype with struct value
function should be pursued.
Maintainer note: The regcache parameter limits this method to the innermost frame. While replacing regcache with a struct frame_info frame parameter would remove that limitation, there has yet to be a demonstrated need for such a change.
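A minimal sketch of a gdbarch_return_value implementation for a hypothetical xyz architecture that returns aggregates larger than 8 bytes in memory and everything else in register R0; the register number and the size rule are assumptions for illustration.

#define XYZ_R0_REGNUM 0		/* hypothetical return-value register */

static enum return_value_convention
xyz_return_value (struct gdbarch *gdbarch, struct type *valtype,
                  struct regcache *regcache, void *readbuf,
                  const void *writebuf)
{
  if (TYPE_CODE (valtype) == TYPE_CODE_STRUCT
      && TYPE_LENGTH (valtype) > 8)
    return RETURN_VALUE_STRUCT_CONVENTION;

  /* Small values live in R0; this sketch assumes they fit in that
     single register.  */
  if (readbuf != NULL)
    regcache_cooked_read (regcache, XYZ_R0_REGNUM, readbuf);
  if (writebuf != NULL)
    regcache_cooked_write (regcache, XYZ_R0_REGNUM, writebuf);
  return RETURN_VALUE_REGISTER_CONVENTION;
}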
SKIP_PERMANENT_BREAKPOINT
SKIP_PERMANENT_BREAKPOINT
adjusts the processor's
state so that execution will resume just after the breakpoint. This
macro does the right thing even when the breakpoint is in the delay slot
of a branch or jump.
SKIP_PROLOGUE (
pc)
SKIP_TRAMPOLINE_CODE (
pc)
SP_REGNUM
STAB_REG_TO_REGNUM
DEPRECATED_STACK_ALIGN (
addr)
Unlike frame_align, this function always adjusts addr upwards.
By default, no stack alignment is performed.
STEP_SKIPS_DELAY (
addr)
STORE_RETURN_VALUE (
type,
regcache,
valbuf)
This method has been deprecated in favour of gdbarch_return_value
(see gdbarch_return_value).
SYMBOL_RELOADING_DEFAULT
TARGET_CHAR_BIT
TARGET_CHAR_SIGNED
char
is normally signed on this architecture; zero if
it should be unsigned.
The ISO C standard requires the compiler to treat char
as
equivalent to either signed char
or unsigned char
; any
character in the standard execution set is supposed to be positive.
Most compilers treat char
as signed, but char
is unsigned
on the IBM S/390, RS6000, and PowerPC targets.
TARGET_COMPLEX_BIT
2 * TARGET_FLOAT_BIT
.
At present this macro is not used.
TARGET_DOUBLE_BIT
8 * TARGET_CHAR_BIT
.
TARGET_DOUBLE_COMPLEX_BIT
2 * TARGET_DOUBLE_BIT
.
At present this macro is not used.
TARGET_FLOAT_BIT
4 * TARGET_CHAR_BIT
.
TARGET_INT_BIT
4 * TARGET_CHAR_BIT
.
TARGET_LONG_BIT
4 * TARGET_CHAR_BIT
.
TARGET_LONG_DOUBLE_BIT
2 * TARGET_DOUBLE_BIT
.
TARGET_LONG_LONG_BIT
2 * TARGET_LONG_BIT
.
TARGET_PTR_BIT
TARGET_INT_BIT
.
TARGET_SHORT_BIT
2 * TARGET_CHAR_BIT
.
TARGET_READ_PC
TARGET_WRITE_PC (
val,
pid)
TARGET_READ_SP
TARGET_READ_FP
read_pc
,
write_pc
, and read_sp
. For most targets, these may be
left undefined. gdb will call the read and write register
functions with the relevant _REGNUM
argument.
These macros are useful when a target keeps one of these registers in a hard to get at place; for example, part in a segment register and part in an ordinary register.
See unwind_sp, which replaces TARGET_READ_SP
.
TARGET_VIRTUAL_FRAME_POINTER(
pc,
regp,
offsetp)
(register, offset)
pair representing the virtual frame
pointer in use at the code address pc. If virtual frame pointers
are not used, a default definition simply returns
DEPRECATED_FP_REGNUM
, with an offset of zero.
TARGET_HAS_HARDWARE_WATCHPOINTS
TARGET_PRINT_INSN (
addr,
info)
deprecated_tm_print_insn
. This usually points to a function in
the opcodes
library (see Opcodes).
info is a structure (of type disassemble_info
) defined in
include/dis-asm.h used to pass information to the instruction
decoding routine.
struct frame_id unwind_dummy_id (struct frame_info *
frame)
struct
frame_id
that uniquely identifies an inferior function call's dummy
frame. The value returned must match the dummy frame stack value
previously saved using SAVE_DUMMY_FRAME_TOS
.
See SAVE_DUMMY_FRAME_TOS.
DEPRECATED_USE_STRUCT_CONVENTION (
gcc_p,
type)
This method has been deprecated in favour of gdbarch_return_value
(see gdbarch_return_value).
VALUE_TO_REGISTER(
type,
regnum,
from,
to)
VARIABLES_INSIDE_BLOCK (
desc,
gcc_p)
n_desc
from the
N_RBRAC
symbol, and gcc_p is true if gdb has noticed the
presence of either the GCC_COMPILED_SYMBOL
or the
GCC2_COMPILED_SYMBOL
. By default, this is 0.
OS9K_VARIABLES_INSIDE_BLOCK (
desc,
gcc_p)
Motorola M68K target conditionals.
BPT_VECTOR
Define this to be the 4-bit location of the breakpoint trap vector. If not defined, it will default to 0xf.
REMOTE_BPT_VECTOR
Defaults to 1.
NAME_OF_MALLOC
The following files add a target to gdb:
You can also define `TM_CFLAGS', `TM_CLIBS', `TM_CDEPS',
but these are now deprecated, replaced by autoconf, and may go away in
future versions of gdb.
configure
). Contains
macro definitions about the target machine's registers, stack frame
format and instructions.
New targets do not need this file and should not create it.
New targets do not need this file and should not create it.
If you are adding a new operating system for an existing CPU chip, add a
config/tm-os.h file that describes the operating system
facilities that are unusual (extra symbol table info; the breakpoint
instruction needed; etc.). Then write a arch/tm-os.h
that just #include
s tm-arch.h and
config/tm-os.h.
This section describes the current accepted best practice for converting an existing target architecture to the multi-arch framework.
The process consists of generating, testing, posting and committing a sequence of patches. Each patch must contain a single change, for instance:
FRAME_INFO
).
There isn't a size limit on a patch, however, a developer is strongly encouraged to keep the patch size down.
Since each patch is well defined, and since each change has been tested and shows no regressions, the patches are considered fairly obvious. Such patches, when submitted by developers listed in the MAINTAINERS file, do not need approval. Occasional steps in the process may be more complicated and less clear. The developer is expected to use their judgment and is encouraged to seek advice as needed.
The first step is to establish control. Build (with -Werror enabled) and test the target so that there is a baseline against which the debugger can be compared.
At no stage can the test results regress or gdb stop compiling with -Werror.
The objective of this step is to establish the basic multi-arch framework. It involves
Add an arch_gdbarch_init function that creates the architecture:
static struct gdbarch *
d10v_gdbarch_init (info, arches)
     struct gdbarch_info info;
     struct gdbarch_list *arches;
{
  struct gdbarch *gdbarch;
  /* there is only one d10v architecture */
  if (arches != NULL)
    return arches->gdbarch;
  gdbarch = gdbarch_alloc (&info, NULL);
  return gdbarch;
}
static void
mips_dump_tdep (struct gdbarch *current_gdbarch, struct ui_file *file)
{
  ... code to print architecture specific info ...
}
Add a call in _initialize_arch_tdep to register this new architecture:
void
_initialize_mips_tdep (void)
{
  gdbarch_register (bfd_arch_mips, mips_gdbarch_init, mips_dump_tdep);
}
Add the macro GDB_MULTI_ARCH, defined as 0 (zero), to the file arch-tdep.c. Some mechanisms do not work with multi-arch. They include:
FRAME_FIND_SAVED_REGS
DEPRECATED_FRAME_INIT_SAVED_REGS
At this stage you could also consider converting the macros into functions.
Temporarily set GDB_MULTI_ARCH to GDB_MULTI_ARCH_PARTIAL and then build and start gdb (the change should not be committed). gdb may not build, and once built, it may die with an internal error listing the architecture methods that must be provided.
Fix any build problems (patch(es)).
Convert all the architecture methods listed, which are only macros, into functions (patch(es)).
Update arch_gdbarch_init
to set all the missing
architecture methods and wrap the corresponding macros in #if
!GDB_MULTI_ARCH
(patch(es)).
Change the value of GDB_MULTI_ARCH
to GDB_MULTI_ARCH_PARTIAL (a
single patch).
Any problems with throwing “the switch” should have been fixed already.
Suggest converting macros into functions (and setting the corresponding architecture method) in small batches.
This should go smoothly.
The tm-arch.h can be deleted. arch.mt and configure.in updated.
The target vector defines the interface between gdb's abstract handling of target systems, and the nitty-gritty code that actually exercises control over a process or a serial port. gdb includes some 30-40 different target vectors; however, each configuration of gdb includes only a few of them.
Both executables and core files have target vectors.
gdb's file remote.c talks a serial protocol to code that runs in the target system. gdb provides several sample stubs that can be integrated into target programs or operating systems for this purpose; they are named *-stub.c.
The gdb user's manual describes how to put such a stub into your target code. What follows is a discussion of integrating the SPARC stub into a complicated operating system (rather than a simple program), by Stu Grossman, the author of this stub.
The trap handling code in the stub assumes the following upon entry to
trap_low
:
As long as your trap handler can guarantee those conditions, then there
is no reason why you shouldn't be able to “share” traps with the stub.
The stub has no requirement that it be jumped to directly from the
hardware trap vector. That is why it calls exceptionHandler()
,
which is provided by the external environment. For instance, this could
set up the hardware traps to actually execute code which calls the stub
first, and then transfers to its own trap handler.
For the most part, there probably won't be much of an issue with
“sharing” traps, as the traps we use are usually not used by the kernel,
and often indicate unrecoverable error conditions. Anyway, this is all
controlled by a table, and is trivial to modify. The most important
trap for us is for ta 1
. Without that, we can't single step or
do breakpoints. Everything else is unnecessary for the proper operation
of the debugger/stub.
From reading the stub, it's probably not obvious how breakpoints work. They are simply done by deposit/examine operations from gdb.
Several files control gdb's configuration for native support:
Maintainer's note: The .mh suffix is because this file
originally contained Makefile fragments for hosting gdb
on machine xyz. While the file is no longer used for this
purpose, the .mh suffix remains. Perhaps someone will
eventually rename these fragments so that they have a .mn
suffix.
configure
). Contains C
macro definitions describing the native system environment, such as
child process control and core file support.
There are some “generic” versions of routines that can be used by
various systems. These can be customized in various ways by macros
defined in your nm-xyz.h file. If these routines work for
the xyz host, you can just include the generic file's name (with
`.o', not `.c') in NATDEPFILES
.
Otherwise, if your machine needs custom support routines, you will need
to write routines that perform the same functions as the generic file.
Put them into xyz-nat.c, and put xyz-nat.o
into NATDEPFILES
.
ptrace
call in a vanilla way.
register_addr()
, see below. Now that BFD is used to read core
files, virtually all machines should use core-aout.c
, and should
just provide fetch_core_registers
in xyz-nat.c
(or
REGISTER_U_ADDR
in nm-
xyz.h
).
If your nm-xyz.h file defines the macro REGISTER_U_ADDR(addr, blockend, regno), it should be defined to
set addr
to the offset within the `user' struct of gdb
register number regno
. blockend
is the offset within the
“upage” of u.u_ar0
. If REGISTER_U_ADDR
is defined,
core-aout.c will define the register_addr()
function and
use the macro in it. If you do not define REGISTER_U_ADDR
, but
you are using the standard fetch_core_registers()
, you will need
to define your own version of register_addr()
, put it into your
xyz-nat.c
file, and be sure xyz-nat.o
is in
the NATDEPFILES
list. If you have your own
fetch_core_registers()
, you may not need a separate
register_addr()
. Many custom fetch_core_registers()
implementations simply locate the registers themselves.
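For illustration, a sketch of what a REGISTER_U_ADDR definition might look like in nm-xyz.h for a system whose registers are laid out contiguously, four bytes apart, starting at the blockend offset; the layout is an assumption.

#define REGISTER_U_ADDR(addr, blockend, regno) \
  (addr) = (blockend) + 4 * (regno)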
When making gdb run native on a new operating system, to make it
possible to debug core files, you will need to either write specific
code for parsing your OS's core files, or customize
bfd/trad-core.c. First, use whatever #include
files your
machine uses to define the struct of registers that is accessible
(possibly in the u-area) in a core file (rather than
machine/reg.h), and an include file that defines whatever header
exists on a core file (e.g. the u-area or a struct core
). Then
modify trad_unix_core_file_p
to use these values to set up the
section information for the data segment, stack segment, any other
segments in the core file (perhaps shared library contents or control
information), “registers” segment, and if there are two discontiguous
sets of registers (e.g. integer and float), the “reg2” segment. This
section information basically delimits areas in the core file in a
standard way, which the section-reading routines in BFD know how to seek
around in.
Then back in gdb, you need a matching routine called
fetch_core_registers
. If you can use the generic one, it's in
core-aout.c; if not, it's in your xyz-nat.c file.
It will be passed a char pointer to the entire “registers” segment,
its length, and a zero; or a char pointer to the entire “regs2”
segment, its length, and a 2. The routine should suck out the supplied
register values and install them into gdb's “registers” array.
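A sketch of a custom fetch_core_registers for a hypothetical xyz port, assuming the “registers” segment is simply an array of 4-byte register values in gdb's register order and there is no separate “reg2” segment.

static void
fetch_core_registers (char *core_reg_sect, unsigned core_reg_size,
                      int which, CORE_ADDR reg_addr)
{
  int regno;

  if (which != 0)
    return;			/* this target has no "reg2" segment */

  for (regno = 0; regno < NUM_REGS; regno++)
    supply_register (regno, core_reg_sect + 4 * regno);
}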
If your system uses /proc to control processes, and uses ELF format core files, then you may be able to use the same routines for reading the registers out of processes and out of core files.
When gdb is configured and compiled, various macros are defined or left undefined, to control compilation when the host and target systems are the same. These macros should be defined (or left undefined) in nm-system.h.
CHILD_PREPARE_TO_STORE
[Note that this is incorrectly defined in xm-system.h files
currently.]
FETCH_INFERIOR_REGISTERS
Define this if the native-dependent code will provide its own routines fetch_inferior_registers and store_inferior_registers in host-nat.c. If this symbol is not defined, and infptrace.c is included in this configuration, the default routines in infptrace.c are used for these functions.
FP0_REGNUM
GET_LONGJMP_TARGET
This macro determines the target PC address that longjmp
will jump to,
assuming that we have just stopped at a longjmp breakpoint. It takes a
CORE_ADDR *
as argument, and stores the target PC value through this
pointer. It examines the current state of the machine as needed.
I386_USE_GENERIC_WATCHPOINTS
KERNEL_U_ADDR
Define this to the address of the u structure (the “user struct”, also known as the “u-page”) in kernel virtual memory. gdb needs to know this so that it can subtract this address from absolute addresses in the upage, that are obtained via ptrace or from core files. On systems that don't need this value, set it to zero.
KERNEL_U_ADDR_HPUX
Define this to cause gdb to determine the address of u at runtime, by using HP-style nlist on the kernel's image in the root directory.
ONE_PROCESS_WRITETEXT
PROC_NAME_FMT
PTRACE_ARG3_TYPE
The type of the third argument to the ptrace system call, if it exists and is different from int.
REGISTER_U_ADDR
SHELL_COMMAND_CONCAT
SHELL_FILE
If defined, this is the name of the shell to use to run the inferior. Defaults to "/bin/sh".
SOLIB_ADD (
filename,
from_tty,
targ,
readsyms)
SOLIB_CREATE_INFERIOR_HOOK
START_INFERIOR_TRAPS_EXPECTED
USE_PROC_FS
U_REGS_OFFSET
This is the offset of the registers in the upage. It need only be defined if the generic ptrace register access routines in infptrace.c are being used (that is, infptrace.c is configured in, and FETCH_INFERIOR_REGISTERS is not defined). If the default value from infptrace.c is good enough, leave it undefined.
The default value means that u.u_ar0 points to the location of
the registers. I'm guessing that #define U_REGS_OFFSET 0
means
that u.u_ar0
is the location of the registers.
CLEAR_SOLIB
DEBUG_PTRACE
Define this to debug ptrace calls.
BFD provides support for gdb in several ways:
The opcodes library provides gdb's disassembler. (It's a separate library because it's also used in binutils, for objdump).
The libiberty
library provides a set of functions and features
that integrate and improve on functionality found in modern operating
systems. Broadly speaking, such features can be divided into three
groups: supplemental functions (functions that may be missing in some
environments and operating systems), replacement functions (providing
a uniform and easier to use interface for commonly used standard
functions), and extensions (which provide additional functionality
beyond standard functions).
gdb uses various features provided by the libiberty
library, for instance the C++ demangler, the IEEE
floating format support functions, the input options parser
`getopt', the `obstack' extension, and other functions.
obstacks
in gdb
The obstack mechanism provides a convenient way to allocate and free
chunks of memory. Each obstack is a pool of memory that is managed
like a stack. Objects (of any nature, size and alignment) are
allocated and freed in a LIFO fashion on an obstack (see
libiberty
's documentation for a more detailed explanation of
obstacks
).
The most noticeable use of the obstacks
in gdb is in
object files. There is an obstack associated with each internal
representation of an object file. Lots of things get allocated on
these obstacks
: dictionary entries, blocks, blockvectors,
symbols, minimal symbols, types, vectors of fundamental types, class
fields of types, object files section lists, object files section
offsets lists, line tables, symbol tables, partial symbol tables,
string tables, symbol table private data, macros tables, debug
information sections and entries, import and export lists (som),
unwind information (hppa), dwarf2 location expressions data. Plus
various strings such as directory names strings, debug format strings,
names of types.
An essential and convenient property of all data on obstacks
is
that memory for it gets allocated (with obstack_alloc
) at
various times during a debugging session, but it is released all at
once using the obstack_free
function. The obstack_free
function takes a pointer to where in the stack it must start the
deletion from (much like the cleanup chains have a pointer to where to
start the cleanups). Because of the stack like structure of the
obstacks
, this allows freeing only the top portion of the obstack. There are a few instances in gdb where such a thing
happens. Calls to obstack_free
are done after some local data
is allocated to the obstack. Only the local data is deleted from the
obstack. Of course this assumes that nothing between the
obstack_alloc
and the obstack_free
allocates anything
else on the same obstack. For this reason it is best and safest to
use temporary obstacks
.
Releasing the whole obstack is also not safe per se. It is safe only
under the condition that we know the obstacks
memory is no
longer needed. In gdb we get rid of the obstacks
only
when we get rid of the whole objfile(s), for instance upon reading a
new symbol file.
C_ALLOCA
NFAILURES
RE_NREGS
SIGN_EXTEND_CHAR
SWITCH_ENUM_BUG
SYNTAX_TABLE
Sword
sparc
This chapter covers topics that are lower-level than the major algorithms of gdb.
Cleanups are a structured way to deal with things that need to be done later.
When your code does something (e.g., xmalloc
some memory, or
open
a file) that needs to be undone later (e.g., xfree
the memory or close
the file), it can make a cleanup. The
cleanup will be done at some future point: when the command is finished
and control returns to the top level; when an error occurs and the stack
is unwound; or when your code decides it's time to explicitly perform
cleanups. Alternatively you can elect to discard the cleanups you
created.
Syntax:
struct cleanup *old_chain;
Declare a variable which will hold a cleanup chain handle.
old_chain = make_cleanup (function, arg);
Make a cleanup which will cause function to be called with arg (a char *) later. The result, old_chain, is a handle that can later be passed to do_cleanups or discard_cleanups. Unless you are going to call do_cleanups or discard_cleanups, you can ignore the result from make_cleanup.
do_cleanups (old_chain);
Do all cleanups added to the chain since the corresponding make_cleanup call was made.
discard_cleanups (old_chain);
Same as do_cleanups except that it just removes the cleanups from the chain and does not call the specified functions.
Cleanups are implemented as a chain. The handle returned by make_cleanup includes the cleanup passed to the call and any later cleanups appended to the chain (but not yet discarded or performed). E.g.:
make_cleanup (a, 0);
{
  struct cleanup *old = make_cleanup (b, 0);
  make_cleanup (c, 0);
  ...
  do_cleanups (old);
}
will call c()
and b()
but will not call a()
. The
cleanup that calls a()
will remain in the cleanup chain, and will
be done later unless otherwise discarded.
Your function should explicitly do or discard the cleanups it creates. Failing to do this leads to non-deterministic behavior since the caller will arbitrarily do or discard your function's cleanups. This need leads to two common cleanup styles.
The first style is try/finally. Before it exits, your code-block calls
do_cleanups
with the old cleanup chain and thus ensures that your
code-block's cleanups are always performed. For instance, the following
code-segment avoids a memory leak problem (even when error
is
called and a forced stack unwind occurs) by ensuring that the
xfree
will always be called:
struct cleanup *old = make_cleanup (null_cleanup, 0);
data = xmalloc (sizeof blah);
make_cleanup (xfree, data);
... blah blah ...
do_cleanups (old);
The second style is try/except. Before it exits, your code-block calls
discard_cleanups
with the old cleanup chain and thus ensures that
any created cleanups are not performed. For instance, the following
code segment, ensures that the file will be closed but only if there is
an error:
FILE *file = fopen ("afile", "r");
struct cleanup *old = make_cleanup (close_file, file);
... blah blah ...
discard_cleanups (old);
return file;
Some functions, e.g. fputs_filtered()
or error()
, specify
that they “should not be called when cleanups are not in place”. This
means that any actions you need to reverse in the case of an error or
interruption must be on the cleanup chain before you call these
functions, since they might never return to your code (they
`longjmp' instead).
The multi-arch framework includes a mechanism for adding module
specific per-architecture data-pointers to the struct gdbarch
architecture object.
A module registers one or more per-architecture data-pointers using:
pre_init is used to, on-demand, allocate an initial value for a per-architecture data-pointer using the architecture's obstack (passed in as a parameter). Since pre_init can be called during architecture creation, it is not parameterized with the architecture, and must not call modules that use per-architecture data.
post_init is used to obtain an initial value for a per-architecture data-pointer after architecture creation. Since post_init is always called after architecture creation, it both receives the fully initialized architecture and is free to call modules that use per-architecture data (care needs to be taken to ensure that those other modules do not try to call back to this module, as that would create cycles in the initialization call graph).
These functions return a struct gdbarch_data
that is used to
identify the per-architecture data-pointer added for that module.
The per-architecture data-pointer is accessed using the function:
Given the architecture arch and module data handle data_handle (returned by gdbarch_data_register_pre_init or gdbarch_data_register_post_init), this function returns the current value of the per-architecture data-pointer. If the data pointer is NULL, it is first initialized by calling the corresponding pre_init or post_init method.
The examples below assume the following definitions:
struct nozel
{
  int total;
};

static struct gdbarch_data *nozel_handle;
A module can extend the architecture vector, adding additional per-architecture data, using the pre_init method. The module's per-architecture data is then initialized during architecture creation.
In the below, the module's per-architecture nozel is added. An
architecture can specify its nozel by calling set_gdbarch_nozel
from gdbarch_init
.
static void *
nozel_pre_init (struct obstack *obstack)
{
  struct nozel *data = OBSTACK_ZALLOC (obstack, struct nozel);
  return data;
}
extern void set_gdbarch_nozel (struct gdbarch *gdbarch, int total) { struct nozel *data = gdbarch_data (gdbarch, nozel_handle); data->total = nozel; }
A module can on-demand create architecture-dependent data structures using post_init.

In the below, the nozel's total is computed on-demand by nozel_post_init using information obtained from the architecture.

static void *
nozel_post_init (struct gdbarch *gdbarch)
{
  struct nozel *data = GDBARCH_OBSTACK_ZALLOC (gdbarch, struct nozel);
  data->total = gdbarch... (gdbarch);
  return data;
}

extern int
nozel_total (struct gdbarch *gdbarch)
{
  struct nozel *data = gdbarch_data (gdbarch, nozel_handle);
  return data->total;
}
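For completeness, a module must also register the handle itself, normally from its _initialize_* routine. A minimal sketch, assuming the definitions above (the initializer name _initialize_nozel is illustrative, not taken from gdb):

void
_initialize_nozel (void)
{
  nozel_handle = gdbarch_data_register_pre_init (nozel_pre_init);
}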
Output that goes through printf_filtered or fputs_filtered or fputs_demangled needs only to have calls to wrap_here added in places that would be good breaking points. The utility routines will take care of actually wrapping if the line width is exceeded.
The argument to wrap_here is an indentation string which is printed only if the line breaks there. This argument is saved away and used later. It must remain valid until the next call to wrap_here or until a newline has been printed through the *_filtered functions. Don't pass in a local variable and then return!
It is usually best to call wrap_here after printing a comma or space. If you call it before printing a space, make sure that your indentation properly accounts for the leading space that will print if the line wraps there.
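As a minimal sketch (the strings and variables here are illustrative), a caller might break after a comma and ask for a four-space indent if the line wraps at that point; a string literal is used so the indentation string stays valid:

printf_filtered ("%s, ", name);
wrap_here ("    ");
printf_filtered ("value = %d\n", value);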
Any function or set of functions that produce filtered output must finish by printing a newline, to flush the wrap buffer, before switching to unfiltered (printf) output. Symbol reading routines that print warnings are a good example.
gdb follows the GNU coding standards, as described in etc/standards.texi. This file is also available for anonymous FTP from GNU archive sites. gdb takes a strict interpretation of the standard; in general, when the GNU standard recommends a practice but does not require it, gdb requires it.
gdb follows an additional set of coding standards specific to gdb, as described in the following sections.
gdb assumes an ISO/IEC 9899:1990 (a.k.a. ISO C90) compliant compiler.
gdb does not assume an ISO C or POSIX compliant C library.
gdb does not use the functions malloc, realloc, calloc, free and asprintf.
gdb uses the functions xmalloc, xrealloc and xcalloc when allocating memory. Unlike malloc et al. these functions do not return when the memory pool is empty. Instead, they unwind the stack using cleanups. These functions return NULL when requested to allocate a chunk of memory of size zero.
Pragmatics: By using these functions, the need to check every memory allocation is removed. These functions provide portable behavior.
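For instance, a minimal sketch (the helper below is illustrative, not a gdb function) combining xmalloc with the cleanup chain, so the allocation is released if an error unwinds the stack before ownership is passed back:

static char *
copy_name (const char *name)
{
  char *copy = xmalloc (strlen (name) + 1);
  struct cleanup *back_to = make_cleanup (xfree, copy);

  strcpy (copy, name);
  /* ... work that may call error () and unwind the stack ... */
  discard_cleanups (back_to);   /* The caller now owns COPY.  */
  return copy;
}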
gdb does not use the function free.

gdb uses the function xfree to return memory to the memory pool. Consistent with ISO C, this function ignores a request to free a NULL pointer.

Pragmatics: On some systems free fails when passed a NULL pointer.
gdb can use the non-portable function alloca for the allocation of small temporary values (such as strings).
Pragmatics: This function is very non-portable. Some systems restrict the memory being allocated to no more than a few kilobytes.
gdb uses the string function xstrdup and the print function xstrprintf.

Pragmatics: asprintf and strdup can fail. Print functions such as sprintf are very prone to buffer overflow errors.
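A minimal sketch (the format string and variables are illustrative): instead of sprintf into a fixed-size buffer, build the string with xstrprintf and let the cleanup chain release it:

char *msg = xstrprintf ("breakpoint %d at %s", bnum, addr_string);
make_cleanup (xfree, msg);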
With few exceptions, developers should include the configuration option `--enable-gdb-build-warnings=,-Werror' when building gdb. The exceptions are listed in the file gdb/MAINTAINERS.
This option causes gdb (when built using GCC) to be compiled with a carefully selected list of compiler warning flags. Any warnings from those flags are treated as errors.
The current list of warning flags includes:

-Wformat-nonliteral: Since gdb uses the format printf attribute on all printf-like functions, this checks not just printf calls but also calls to functions such as fprintf_unfiltered.

-Wparentheses: This warning includes uses of the assignment operator within an if statement.

-Wunused-label: This warning has the additional benefit of detecting the absence of the case reserved word in a switch statement:

enum { FD_SCHEDULED, NOTHING_SCHEDULED } sched;
...
switch (sched)
  {
  case FD_SCHEDULED:
    ...;
    break;
  NOTHING_SCHEDULED:
    ...;
    break;
  }
Pragmatics: Due to the way that gdb is implemented most functions have unused parameters. Consequently the warning `-Wunused-parameter' is precluded from the list. The macro ATTRIBUTE_UNUSED is not used as it leads to false negatives — it is not an error to have ATTRIBUTE_UNUSED on a parameter that is being used. The options `-Wall' and `-Wunused' are also precluded because they both include `-Wunused-parameter'.
Pragmatics: gdb has not simply accepted the warnings enabled by `-Wall -Werror -W...'. Instead it is selecting warnings when and where their benefits can be demonstrated.
The standard GNU recommendations for formatting must be followed strictly.
A function declaration should not have its name in column zero. A function definition should have its name in column zero.
/* Declaration */
static void foo (void);

/* Definition */
void
foo (void)
{
}
Pragmatics: This simplifies scripting. Function definitions can be found using `^function-name'.
There must be a space between a function or macro name and the opening parenthesis of its argument list (except for macro definitions, as required by C). There must not be a space after an open paren/bracket or before a close paren/bracket.
While additional whitespace is generally helpful for reading, do not use more than one blank line to separate blocks, and avoid adding whitespace after the end of a program line (as of 1/99, some 600 lines had whitespace after the semicolon). Excess whitespace causes difficulties for the diff and patch utilities.
Pointers are declared using the traditional K&R C style:
void *foo;
and not:
void * foo;
void* foo;
The standard GNU requirements on comments must be followed strictly.
Block comments must appear in the following form, with no /*- or */-only lines, and no leading *:
/* Wait for control to return from inferior to debugger. If inferior
gets a signal, we may decide to start it up again instead of
returning. That is why there is a loop in this function. When
this function actually returns it means the inferior should be left
stopped and gdb should read more commands. */
(Note that this format is encouraged by Emacs; tabbing for a multi-line comment works correctly, and M-q fills the block consistently.)
Put a blank line between the block comments preceding function or variable definitions, and the definition itself.
In general, put function-body comments on lines by themselves, rather than trying to fit them into the 20 characters left at the end of a line, since either the comment or the code will inevitably get longer than will fit, and then somebody will have to move it anyhow.
Code must not depend on the sizes of C data types, the format of the host's floating point numbers, the alignment of anything, or the order of evaluation of expressions.
Use functions freely. There are only a handful of compute-bound areas in gdb that might be affected by the overhead of a function call, mainly in symbol reading. Most of gdb's performance is limited by the target interface (whether serial line or system call).
However, use functions with moderation. A thousand one-line functions are just as hard to understand as a single thousand-line function.
Macros are bad, M'kay. (But if you have to use a macro, make sure that the macro arguments are protected with parentheses.)
Declarations like `struct foo *' should be used in preference to declarations like `typedef struct foo { ... } *foo_ptr'.
Prototypes must be used when both declaring and defining a function. Prototypes for gdb functions must include both the argument type and name, with the name matching that used in the actual function definition.
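A minimal sketch of the convention (the function and types below are illustrative):

/* In the header file: */
extern int frob_register (struct frame_info *frame, int regnum);

/* In the .c file: */
int
frob_register (struct frame_info *frame, int regnum)
{
  /* ... */
  return 0;
}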
All external functions should have a declaration in a header file that callers include, except for _initialize_* functions, which must be external so that init.c construction works, but shouldn't be visible to random source files.
Where a source file needs a forward declaration of a static function, that declaration must appear in a block near the top of the source file.
During its execution, gdb can encounter two types of errors: user errors and internal errors. User errors include not only a user entering an incorrect command but also problems arising from corrupt object files and system errors when interacting with the target. Internal errors include situations where gdb has detected, at run time, a corrupt or erroneous situation.
When reporting an internal error, gdb uses internal_error and gdb_assert.

gdb must not call abort or assert.
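A minimal sketch (the surrounding checks are illustrative):

gdb_assert (frame != NULL);

if (bytes_read != len)
  internal_error (__FILE__, __LINE__,
                  "short read: expected %d bytes, got %d",
                  len, bytes_read);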
Pragmatics: There is no internal_warning function. Either the code detected a user error, recovered from it and issued a warning, or the code failed to correctly recover from the user error and issued an internal_error.
Any file used when building the core of gdb must be in lower case. Any file used when building the core of gdb must be 8.3 unique. These requirements apply to both source and generated files.
Pragmatics: The core of gdb must be buildable on many platforms including DJGPP and MacOS/HFS. Every time an unfriendly file is introduced to the build process both Makefile.in and configure.in need to be modified accordingly. Compare the convoluted conversion process needed to transform COPYING into copying.c with the conversion needed to transform version.in into version.c.
Any non-8.3-compliant file (that is not used when building the core of gdb) must be added to gdb/config/djgpp/fnchange.lst.
Pragmatics: This is clearly a compromise.
When gdb has a local version of a system header file (e.g., string.h) the file name is based on the POSIX header, prefixed with gdb_ (gdb_string.h). These headers should be relatively independent: they should use only macros defined by configure, the compiler, or the host; they should include only system headers; they should refer only to system types. They may be shared between multiple programs, e.g. gdb and gdbserver.
For other files `-' is used as the separator.
A .c file should include defs.h first.
A .c file should directly include the .h file of every declaration and/or definition it directly refers to. It cannot rely on indirect inclusion.

A .h file should directly include the .h file of every declaration and/or definition it directly refers to. It cannot rely on indirect inclusion. Exception: The file defs.h does not need to be directly included.
An external declaration should only appear in one include file.

An external declaration should never appear in a .c file. Exception: a declaration for the _initialize function that pacifies -Wmissing-declaration.
A typedef definition should only appear in one include file.

An opaque struct declaration can appear in multiple .h files. Where possible, a .h file should use an opaque struct declaration instead of an include.
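A minimal sketch (the names are illustrative): the header declares the struct opaquely instead of including the header that defines it:

struct objfile;                 /* Opaque declaration.  */

extern void frob_objfile (struct objfile *objfile);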
All .h files should be wrapped in:
#ifndef INCLUDE_FILE_NAME_H
#define INCLUDE_FILE_NAME_H
header body
#endif
In addition to getting the syntax right, there's the little question of semantics. Some things are done in certain ways in gdb because long experience has shown that the more obvious ways caused various kinds of trouble.
You can't assume the byte order of anything that comes from a target (including values, object files, and instructions). Such things must be byte-swapped using SWAP_TARGET_AND_HOST in gdb, or one of the swap routines defined in bfd.h, such as bfd_get_32.
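For instance, a minimal sketch (the helper name is illustrative) of reading a 32-bit value from target memory without assuming the host's byte order:

static unsigned long
read_target_long (CORE_ADDR memaddr)
{
  char buf[4];

  read_memory (memaddr, buf, 4);
  return bfd_get_32 (exec_bfd, buf);
}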
You can't assume that you know what interface is being used to talk to the target system. All references to the target must go through the current target_ops vector.
You can't assume that the host and target machines are the same machine (except in the “native” support modules). In particular, you can't assume that the target machine's header files will be available on the host machine. Target code must bring along its own header files – written from scratch or explicitly donated by their owner, to avoid copyright problems.
Insertion of new #ifdefs will be frowned upon. It's much better to write the code portably than to conditionalize it for various systems.
New #ifdefs which test for specific compilers or manufacturers or operating systems are unacceptable. All #ifdefs should test for features. The information about which configurations contain which features should be segregated into the configuration files. Experience has proven far too often that a feature unique to one particular system often creeps into other systems; and that a conditional based on some predefined macro for your current system will become worthless over time, as new versions of your system come out that behave differently with regard to this feature.
Adding code that handles specific architectures, operating systems, target interfaces, or hosts, is not acceptable in generic code.
One particularly notorious area where system dependencies tend to creep in is handling of file names. The mainline gdb code assumes Posix semantics of file names: absolute file names begin with a forward slash /, slashes are used to separate leading directories, case-sensitive file names. These assumptions are not necessarily true on non-Posix systems such as MS-Windows. To avoid system-dependent code where you need to take apart or construct a file name, use the following portable macros:
HAVE_DOS_BASED_FILE_SYSTEM
Defined on hosts whose filesystems follow MS-DOS/MS-Windows conventions; use it to guard code that should only be compiled for such hosts.

IS_DIR_SEPARATOR (c)
Evaluates to a non-zero value if c is a directory separator character on the host.

IS_ABSOLUTE_PATH (file)
Evaluates to a non-zero value if file is an absolute file name on the host.

FILENAME_CMP (f1, f2)
Compares the file names f1 and f2 as strings. Normally this is simply strcmp; on case-insensitive filesystems it will call strcasecmp instead.

DIRNAME_SEPARATOR
The character which separates directories in PATH-style lists, typically held in environment variables. This character is `:' on Unix, `;' on DOS and Windows.

SLASH_STRING
The string to use when concatenating directory components and a file's basename. SLASH_STRING is "/" on most systems, but might be "\\" for some Windows-based ports.
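A minimal sketch (the helper is illustrative): compare file names portably instead of hard-coding '/' and strcmp:

static int
same_absolute_file (const char *name, const char *other)
{
  if (!IS_ABSOLUTE_PATH (name))
    return 0;
  return FILENAME_CMP (name, other) == 0;
}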
In addition to using these macros, be sure to use portable library functions whenever possible. For example, to extract a directory or a basename part from a file name, use the dirname and basename library functions (available in libiberty for platforms which don't provide them), instead of searching for a slash with strrchr.
Another way to generalize gdb along a particular interface is with an attribute struct. For example, gdb has been generalized to handle multiple kinds of remote interfaces—not by #ifdefs everywhere, but by defining the target_ops structure and having a current target (as well as a stack of targets below it, for memory references). Whenever something needs to be done that depends on which remote interface we are using, a flag in the current target_ops structure is tested (e.g., target_has_stack), or a function is called through a pointer in the current target_ops structure. In this way, when a new remote interface is added, only one module needs to be touched—the one that actually implements the new remote interface. Other examples of attribute-structs are BFD access to multiple kinds of object file formats, or gdb's access to multiple source languages.
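A minimal sketch of the attribute-struct idea (the structure and field names below are illustrative, not gdb's actual target_ops layout): callers dispatch through the current vector rather than using #ifdefs:

struct my_target_ops
{
  const char *to_shortname;
  int (*to_has_stack) (void);
  int (*to_xfer_memory) (CORE_ADDR memaddr, char *myaddr, int len, int write);
};

static struct my_target_ops *current_ops;

static int
my_read_memory (CORE_ADDR memaddr, char *myaddr, int len)
{
  return current_ops->to_xfer_memory (memaddr, myaddr, len, 0);
}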
Please avoid duplicating code. For example, in gdb 3.x all the code interfacing between ptrace and the rest of gdb was duplicated in *-dep.c, and so changing something was very painful. In gdb 4.x, these have all been consolidated into infptrace.c. infptrace.c can deal with variations between systems the same way any system-independent file would (hooks, #if defined, etc.), and machines which are radically different don't need to use infptrace.c at all.
All debugging code must be controllable using the `set debug module' command. Do not use printf to print trace messages. Use fprintf_unfiltered (gdb_stdlog, ...) instead. Do not use #ifdef DEBUG.
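A minimal sketch, assuming a hypothetical module with a corresponding `set debug frobnicate' variable (all names here are illustrative):

static int frobnicate_debug = 0;

static void
frobnicate (const char *name)
{
  if (frobnicate_debug)
    fprintf_unfiltered (gdb_stdlog, "frobnicate: resolving %s\n", name);
}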
Most of the work in making gdb compile on a new machine is in specifying the configuration of the machine. This is done in a dizzying variety of header files and configuration scripts, which we hope to make more sensible soon. Let's say your new host is called an xyz (e.g., `sun4'), and its full three-part configuration name is arch-xvend-xos (e.g., `sparc-sun-sunos4'). In particular:
In the top-level directory, edit config.sub and add arch, xvend, and xos to the lists of supported architectures, vendors, and operating systems near the bottom of the file. Also, add xyz as an alias that maps to arch-xvend-xos. You can test your changes by running

./config.sub xyz

and

./config.sub arch-xvend-xos

which should both respond with arch-xvend-xos and no error messages.
You need to port BFD, if that hasn't been done already. Porting BFD is beyond the scope of this manual.
To configure gdb itself, edit gdb/configure.host to recognize your system and set gdb_host to xyz, and (unless your desired target is already available) also edit gdb/configure.tgt, setting gdb_target to something appropriate (for instance, xyz).
Maintainer's note: Work in progress. The file gdb/configure.host originally needed to be modified when either a new native target or a new host machine was being added to gdb. Recent changes have removed this requirement. The file now only needs to be modified when adding a new native configuration. This will likely change again in the future.
gdb's version is determined by the file gdb/version.in and takes one of the following forms:
gdb's mainline uses the major and minor version numbers from the most recent release branch, with a patchlevel of 50. At the time each new release branch is created, the mainline's major and minor version numbers are updated.
gdb's release branch is similar. When the branch is cut, the patchlevel is changed from 50 to 90. As draft releases are drawn from the branch, the patchlevel is incremented. Once the first release (major.minor) has been made, the patchlevel is set to 0 and updates have an incremented patchlevel.
For snapshots, and cvs check outs, it is also possible to identify the cvs origin:
If the previous gdb version is 6.1 and the current version is 6.2, then, substituting 6 for major and 1 or 2 for minor, here's an illustration of a typical sequence:
                   <HEAD>
                      |
            6.1.50.20020302-cvs
                      |
                      +--------------------------.
                      |                    <gdb_6_2-branch>
                      |                           |
            6.2.50.20020303-cvs           6.1.90 (draft #1)
                      |                           |
            6.2.50.20020304-cvs        6.1.90.20020304-cvs
                      |                           |
            6.2.50.20020305-cvs           6.1.91 (draft #2)
                      |                           |
            6.2.50.20020306-cvs        6.1.91.20020306-cvs
                      |                           |
            6.2.50.20020307-cvs             6.2 (release)
                      |                           |
            6.2.50.20020308-cvs         6.2.0.20020308-cvs
                      |                           |
            6.2.50.20020309-cvs             6.2.1 (update)
                      |                           |
            6.2.50.20020310-cvs           <branch closed>
                      |
            6.2.50.20020311-cvs
                      |
                      +--------------------------.
                      |                    <gdb_6_3-branch>
                      |                           |
            6.3.50.20020312-cvs           6.2.90 (draft #1)
                      |                           |
gdb draws a release series (6.2, 6.2.1, ...) from a single release branch, and identifies that branch using the cvs branch tags:
gdb_major_minor-YYYYMMDD-branchpoint
gdb_major_minor-branch
gdb_major_minor-YYYYMMDD-release
Pragmatics: To help identify the date at which a branch or release is made, both the branchpoint and release tags include the date that they are cut (YYYYMMDD) in the tag. The branch tag, denoting the head of the branch, does not need this.
To avoid version conflicts, vendors are expected to modify the file gdb/version.in to include a vendor unique alphabetic identifier (an official gdb release never uses alphabetic characters in its version identifier). E.g., `6.2widgit2', or `6.2 (Widgit Inc Patch 2)'.
gdb permits the creation of branches, cut from the cvs repository, for experimental development. Branches make it possible for developers to share preliminary work, and maintainers to examine significant new developments.
The following are a set of guidelines for creating such branches:
The entire gdb module should be specified when creating a branch (branches of individual files should be avoided). See Tags.
To simplify the identification of gdb branches, the following branch tagging convention is strongly recommended:
owner_name-YYYYMMDD-branchpoint
owner_name-YYYYMMDD-branch

For instance:

cvs rtag owner_name-YYYYMMDD-branchpoint gdb
cvs rtag -b -r owner_name-YYYYMMDD-branchpoint \
   owner_name-YYYYMMDD-branch gdb
Mainline changes can be merged into the branch by first tagging the mainline with a mergepoint tag of the form owner_name-yyyymmdd-mergepoint and then merging up to that tag:

cvs rtag owner_name-yyyymmdd-mergepoint gdb
cvs update \
   -jowner_name-YYYYMMDD-branchpoint \
   -jowner_name-yyyymmdd-mergepoint
Similar sequences can be used to just merge in changes since the last merge.
For further information on cvs, see Concurrent Versions System.
The branch commit policy is pretty slack. gdb releases 5.0, 5.1 and 5.2 all used the policy below:

Pragmatics: Provided updates are restricted to non-core functionality there is little chance that a broken change will be fatal. This means that changes such as adding a new architecture or (within reason) support for a new host are considered acceptable.
Before anything else, poke the other developers (and around the source code) to see if there is anything that can be removed from gdb (an old target, an unused file).
Obsolete code is identified by adding an OBSOLETE prefix to every line. Doing this means that it is easy to identify something that has been obsoleted when grepping through the sources.
The process is done in stages — this is mainly to ensure that the wider gdb community has a reasonable opportunity to respond. Remember, everything on the Internet takes a week.
OBSOLETE
.
Maintainer note: While removing old code is regrettable it is hopefully better for gdb's long term development. Firstly it helps the developers by removing code that is either no longer relevant or simply wrong. Secondly since it removes any history associated with the file (effectively clearing the slate) the developer has a much freer hand when it comes to fixing broken files.
The most important objective at this stage is to find and fix simple changes that become a pain to track once the branch is created. For instance, configuration problems that stop gdb from even building. If you can't get the problem fixed, document it in the gdb/PROBLEMS file.
People always forget. Send a post reminding them, but also, if you know something interesting happened, add it yourself. The schedule script will mention this in its e-mail.

Grab one of the nightly snapshots and then walk through the gdb/README looking for anything that can be improved. The schedule script will mention this in its e-mail.
A number of files are taken from external repositories. They include:
A.R.I. is an awk script (Awk Regression Index ;-) that checks for a number of errors and coding conventions. The checks include things like using malloc instead of xmalloc and file naming problems. There shouldn't be any regressions.
Close anything obviously fixed.
The targets are listed in gdb/MAINTAINERS.
$ u=5.1
$ v=5.2
$ V=`echo $v | sed 's/\./_/g'`
$ D=`date -u +%Y-%m-%d`
$ echo $u $V $D
5.1 5_2 2002-03-03
$ echo cvs -f -d :ext:sources.redhat.com:/cvs/src rtag \
-D $D-gmt gdb_$V-$D-branchpoint insight+dejagnu
cvs -f -d :ext:sources.redhat.com:/cvs/src rtag -D 2002-03-03-gmt gdb_5_2-2002-03-03-branchpoint insight+dejagnu
$ ^echo ^^
...
$ echo cvs -f -d :ext:sources.redhat.com:/cvs/src rtag \
-b -r gdb_$V-$D-branchpoint gdb_$V-branch insight+dejagnu
cvs -f -d :ext:sources.redhat.com:/cvs/src rtag \
-b -r gdb_5_2-2002-03-03-branchpoint gdb_5_2-branch insight+dejagnu
$ ^echo ^^
...
$
$ u=5.1
$ v=5.2
$ V=`echo $v | sed 's/\./_/g'`
$ echo $u $v$V
5.1 5_2
$ cd /tmp
$ echo cvs -f -d :ext:sources.redhat.com:/cvs/src co \
-r gdb_$V-branch src/gdb/version.in
cvs -f -d :ext:sources.redhat.com:/cvs/src co -r gdb_5_2-branch src/gdb/version.in
$ ^echo ^^
U src/gdb/version.in
$ cd src/gdb
$ echo $u.90-0000-00-00-cvs > version.in
$ cat version.in
5.1.90-0000-00-00-cvs
$ cvs -f commit version.in
Something?
The file gdbadmin/cron/crontab contains gdbadmin's cron table. This file needs to be updated so that:
See the file gdbadmin/cron/README for how to install the updated cron table.
The file gdbadmin/ss/README should also be reviewed to reflect any changes. That file is copied to both the branch/ and current/ snapshot directories.
The NEWS file needs to be updated so that on the branch it refers to changes in the current release while on the trunk it also refers to changes since the current release.
The README file needs to be updated so that it refers to the current release.
Send an announcement to the mailing lists:
Pragmatics: The branch creation is sent to the announce list to ensure that people people not subscribed to the higher volume discussion list are alerted.
The announcement should include:
Something goes here.
The process of creating and then making available a release is broken down into a number of stages. The first part addresses the technical process of creating a releasable tar ball. The later stages address the process of releasing that tar ball.
When making a release candidate just the first section is needed.
The objective at this stage is to create a set of tar balls that can be made available as a formal release (or as a less formal release candidate).
Send out an e-mail notifying everyone that the branch is frozen to gdb-patches@sources.redhat.com.
$ b=gdb_5_2-branch
$ v=5.2
$ t=/sourceware/snapshot-tmp/gdbadmin-tmp
$ echo $t/$b/$v
/sourceware/snapshot-tmp/gdbadmin-tmp/gdb_5_2-branch/5.2
$ mkdir -p $t/$b/$v
$ cd $t/$b/$v
$ pwd
/sourceware/snapshot-tmp/gdbadmin-tmp/gdb_5_2-branch/5.2
$ which autoconf
/home/gdbadmin/bin/autoconf
$
Notes:
Check the autoconf version carefully. You want to be using the version taken from the binutils snapshot directory, which can be found at ftp://sources.redhat.com/pub/binutils/. It is very unlikely that a system installed version of autoconf (e.g., /usr/bin/autoconf) is correct.
$ for m in gdb insight dejagnu
do
( mkdir -p $m && cd $m && cvs -q -f -d /cvs/src co -P -r $b $m )
done
$
Note: The reading of .cvsrc is disabled (-f) so that there isn't any confusion between what is written here and what your local cvs really does.
Don't forget to include the ChangeLog entry.
$ emacs gdb/src/gdb/NEWS
... c-x 4 a ... c-x c-s c-x c-c
$ cp gdb/src/gdb/NEWS insight/src/gdb/NEWS
$ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog
$ emacs gdb/src/gdb/README
... c-x 4 a ... c-x c-s c-x c-c
$ cp gdb/src/gdb/README insight/src/gdb/README
$ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog
Maintainer note: Hopefully the README file was reviewed before the initial branch was cut so just a simple substitute is needed to get it updated.
Maintainer note: Other projects generate README and
INSTALL from the core documentation. This might be worth
pursuing.
$ echo $v > gdb/src/gdb/version.in
$ cat gdb/src/gdb/version.in
5.2
$ emacs gdb/src/gdb/version.in
... c-x 4 a ... Bump to version ... c-x c-s c-x c-c
$ cp gdb/src/gdb/version.in insight/src/gdb/version.in
$ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog
Update the dejagnu version number, which is embedded in dejagnu/src/dejagnu/configure.in as the argument to AM_INIT_AUTOMAKE. Tweak it to read something like gdb-5.1.91. Don't forget to re-generate configure. Don't forget to include a ChangeLog entry.
$ emacs dejagnu/src/dejagnu/configure.in
... c-x 4 a ... c-x c-s c-x c-c
$ ( cd dejagnu/src/dejagnu && autoconf )
This is identical to the process used to create the daily snapshot.
$ for m in gdb insight
do
( cd $m/src && gmake -f src-release $m.tar )
done
$ ( m=dejagnu; cd $m/src && gmake -f src-release $m.tar.bz2 )
If the top level source directory does not have src-release (gdb version 5.3.1 or earlier), try these commands instead:
$ for m in gdb insight
do
( cd $m/src && gmake -f Makefile.in $m.tar )
done
$ ( m=dejagnu; cd $m/src && gmake -f Makefile.in $m.tar.bz2 )
You're looking for files that have mysteriously disappeared. distclean has the habit of deleting files it shouldn't. Watch out for the version.in update cronjob.
$ ( cd gdb/src && cvs -f -q -n update )
M djunpack.bat
? gdb-5.1.91.tar
? proto-toplev
... lots of generated files ...
M gdb/ChangeLog
M gdb/NEWS
M gdb/README
M gdb/version.in
... lots of generated files ...
$
Don't worry about the gdb.info-?? or gdb/p-exp.tab.c. They were generated (and yes, gdb.info-1 was also generated, only something strange with CVS means that they didn't get suppressed). Fixing it would be nice though.
$ cp */src/*.tar .
$ cp */src/*.bz2 .
$ ls -F
dejagnu/  dejagnu-gdb-5.2.tar.bz2  gdb/  gdb-5.2.tar  insight/  insight-5.2.tar
$ for m in gdb insight
do
bzip2 -v -9 -c $m-$v.tar > $m-$v.tar.bz2
gzip -v -9 -c $m-$v.tar > $m-$v.tar.gz
done
$
Note: when reading from a pipe, gzip does not know the name of the file and, hence, cannot include it in the compressed file. This is also why the release process runs tar and bzip2 as separate passes.
Pick a popular machine (Solaris/PPC?) and try the build on that.
$ bunzip2 < gdb-5.2.tar.bz2 | tar xpf -
$ cd gdb-5.2
$ ./configure
$ make
...
$ ./gdb/gdb ./gdb/gdb
GNU gdb 5.2
...
(gdb) b main
Breakpoint 1 at 0x80732bc: file main.c, line 734.
(gdb) run
Starting program: /tmp/gdb-5.2/gdb/gdb
Breakpoint 1, main (argc=1, argv=0xbffff8b4) at main.c:734
734       catch_errors (captured_main, &args, "", RETURN_MASK_ALL);
(gdb) print args
$1 = {argc = 136426532, argv = 0x821b7f0}
(gdb)
If this is a release candidate then the only remaining steps are:
(And you thought all that was required was to post an e-mail.)
Copy the new files to both the release and the old release directory:
$ cp *.bz2 *.gz ~ftp/pub/gdb/old-releases/
$ cp *.bz2 *.gz ~ftp/pub/gdb/releases
Clean up the releases directory so that only the most recent releases are available (e.g. keep 5.2 and 5.2.1 but remove 5.1):
$ cd ~ftp/pub/gdb/releases
$ rm ...
Update the file README and .message in the releases directory:
$ vi README
...
$ rm -f .message
$ ln README .message
index.sh
.
The best way is to look at the output from one of the nightly cron jobs and then just edit accordingly. Something like:
$ ~/ss/update-web-docs \
 ~ftp/pub/gdb/releases/gdb-5.2.tar.bz2 \
 $PWD/www \
 /www/sourceware/htdocs/gdb/download/onlinedocs \
 gdb
$ /bin/sh ~/ss/update-web-ari \
 ~ftp/pub/gdb/releases/gdb-5.2.tar.bz2 \
 $PWD/www \
 /www/sourceware/htdocs/gdb/download/ari \
 gdb
Something goes here.
At the time of writing, the GNU machine was gnudist.gnu.org in ~ftp/gnu/gdb.
Post the ANNOUNCEMENT file you created above to:
The release is out but you're still not finished.
In particular you'll need to commit any changes to:
Something like:
$ d=`date -u +%Y-%m-%d`
$ echo $d
2002-01-24
$ ( cd insight/src/gdb && cvs -f -q update )
$ ( cd insight/src && cvs -f -q tag gdb_5_2-$d-release )
Insight is used since that contains more of the release than gdb (dejagnu doesn't get tagged but I think we can live with that).
Just put something in the ChangeLog so that the trunk also indicates when the release was made.
If gdb/version.in does not contain an ISO date such as 2002-01-24 then the daily cronjob won't update it. Having committed all the release changes it can be set to 5.2.0_0000-00-00-cvs which will restart things (yes the _ is important - it affects the snapshot process).

Don't forget the ChangeLog.
The files committed to the branch may also need changes merged into the trunk.
Post a revised release schedule to GDB Discussion List with an updated announcement. The schedule can be generated by running:
$ ~/ss/schedule `date +%s` schedule
The first parameter is approximate date/time in seconds (from the epoch) of the most recent release.
Also update the schedule cronjob.

Remove any OBSOLETE code.
The testsuite is an important component of the gdb package. While it is always worthwhile to encourage user testing, in practice this is rarely sufficient; users typically use only a small subset of the available commands, and it has proven all too common for a change to cause a significant regression that went unnoticed for some time.
The gdb testsuite uses the DejaGNU testing framework.
DejaGNU is built using Tcl and expect. The tests themselves are calls to various Tcl procs; the framework runs all the procs and summarizes the passes and fails.
To run the testsuite, simply go to the gdb object directory (or to the testsuite's objdir) and type make check. This just sets up some environment variables and invokes DejaGNU's runtest script. While the testsuite is running, you'll get mentions of which test file is in use, and a mention of any unexpected passes or fails. When the testsuite is finished, you'll get a summary that looks like this:
           === gdb Summary ===

# of expected passes            6016
# of unexpected failures        58
# of unexpected successes       5
# of expected failures          183
# of unresolved testcases       3
# of untested testcases         5
The ideal test run consists of expected passes only; however, reality conspires to keep us from this ideal. Unexpected failures indicate real problems, whether in gdb or in the testsuite. Expected failures are still failures, but ones which have been decided are too hard to deal with at the time; for instance, a test case might work everywhere except on AIX, and there is no prospect of the AIX case being fixed in the near future. Expected failures should not be added lightly, since you may be masking serious bugs in gdb. Unexpected successes are expected fails that are passing for some reason, while unresolved and untested cases often indicate some minor catastrophe, such as the compiler being unable to deal with a test program.
When making any significant change to gdb, you should run the testsuite before and after the change, to confirm that there are no regressions. Note that truly complete testing would require that you run the testsuite with all supported configurations and a variety of compilers; however, this is more than is really necessary. In many cases testing with a single configuration is sufficient. Other useful options are to test one big-endian (Sparc) and one little-endian (x86) host, a cross config with a builtin simulator (powerpc-eabi, mips-elf), or a 64-bit host (Alpha).
If you add new functionality to gdb, please consider adding tests for it as well; this way future gdb hackers can detect and fix their changes that break the functionality you added. Similarly, if you fix a bug that was not previously reported as a test failure, please add a test case for it. Some cases are extremely difficult to test, such as code that handles host OS failures or bugs in particular versions of compilers, and it's OK not to try to write tests for all of those.
DejaGNU supports separate build, host, and target machines. However, some gdb test scripts do not work if the build machine and the host machine are not the same. In such an environment, these scripts will give a result of “UNRESOLVED”, like this:
UNRESOLVED: gdb.base/example.exp: This test script does not work on a remote host.
The testsuite is entirely contained in gdb/testsuite. While the testsuite includes some makefiles and configury, these are very minimal, and used for little besides cleaning up, since the tests themselves handle the compilation of the programs that gdb will run. The file testsuite/lib/gdb.exp contains common utility procs useful for all gdb tests, while the directory testsuite/config contains configuration-specific files, typically used for special-purpose definitions of procs like gdb_load and gdb_start.
The tests themselves are to be found in testsuite/gdb.* and subdirectories of those. The names of the test files must always end with .exp. DejaGNU collects the test files by wildcarding in the test directories, so both subdirectories and individual files get chosen and run in alphabetical order.
The following table lists the main types of subdirectories and what they are for. Since DejaGNU finds test files no matter where they are located, and since each test file sets up its own compilation and execution environment, this organization is simply for convenience and intelligibility.
(#ifdefs are allowed if necessary, for instance for prototypes).
In many areas, the gdb tests are already quite comprehensive; you should be able to copy existing tests to handle new cases.
You should try to use gdb_test whenever possible, since it includes cases to handle all the unexpected errors that might happen. However, it doesn't cost anything to add new test procedures; for instance, gdb.base/exprs.exp defines a test_expr that calls gdb_test multiple times.

Only use send_gdb and gdb_expect when absolutely necessary, such as when gdb has several valid responses to a command.
The source language programs do not need to be in a consistent style. Since gdb is used to debug programs written in many different styles, it's worth having a mix of styles in the testsuite; for instance, some gdb bugs involving the display of source lines would never manifest themselves if the programs used GNU coding style uniformly.
Check the README file, it often has useful information that does not appear anywhere else in the directory.
gdb is a large and complicated program, and if you are first starting to work on it, it can be hard to know where to start. Fortunately, if you know how to go about it, there are ways to figure out what is going on.
This manual, the gdb Internals manual, has information which applies generally to many parts of gdb.
Information about particular functions or data structures is located in comments with those functions or data structures. If you run across a function or a global variable which does not have a comment correctly explaining what it does, this can be thought of as a bug in gdb; feel free to submit a bug report, with a suggested comment if you can figure out what the comment should say. If you find a comment which is actually wrong, be especially sure to report that.
Comments explaining the function of macros defined in host, target, or native dependent files can be in several places. Sometimes they are repeated every place the macro is defined. Sometimes they are where the macro is used. Sometimes there is a header file which supplies a default definition of the macro, and the comment is there. This manual also documents all the available macros.
Start with the header files. Once you have some idea of how gdb's internal symbol tables are stored (see symtab.h, gdbtypes.h), you will find it much easier to understand the code which uses and creates those symbol tables.
You may wish to process the information you are getting somehow, to enhance your understanding of it. Summarize it, translate it to another language, add some (perhaps trivial or non-useful) feature to gdb, use the code to predict what a test case would do and write the test case and verify your prediction, etc. If you are reading code and your eyes are starting to glaze over, this is a sign you need to use a more active approach.
Once you have a part of gdb to start with, you can find more specifically the part you are looking for by stepping through each function with the next command. Do not use step or you will quickly get distracted; when the function you are stepping through calls another function try only to get a big-picture understanding (perhaps using the comment at the beginning of the function being called) of what it does. This way you can identify which of the functions being called by the function you are stepping through is the one which you are interested in. You may need to examine the data structures generated at each stage, with reference to the comments in the header files explaining what the data structures are supposed to look like.
Of course, this same technique can be used if you are just reading the code, rather than actually stepping through it. The same general principle applies—when the code you are looking at calls something else, just try to understand generally what the code being called does, rather than worrying about all its details.
A good place to start when tracking down some particular area is with a command which invokes that feature. Suppose you want to know how single-stepping works. As a gdb user, you know that the step command invokes single-stepping. The command is invoked via command tables (see command.h); by convention the function which actually performs the command is formed by taking the name of the command and adding `_command', or in the case of an info subcommand, `_info'. For example, the step command invokes the step_command function and the info display command invokes display_info. When this convention is not followed, you might have to use grep or M-x tags-search in emacs, or run gdb on itself and set a breakpoint in execute_command.
If all of the above fail, it may be appropriate to ask for information on bug-gdb. But never post a generic question like “I was wondering if anyone could give me some tips about understanding gdb”—if we had some magic secret we would put it in this manual. Suggestions for improving the manual are always welcome, of course.
If gdb is limping on your machine, this is the preferred way to get it fully functional. Be warned that in some ancient Unix systems, like Ultrix 4.2, a program can't be running in one process while it is being debugged in another. Rather than typing the command ./gdb ./gdb, which works on Suns and such, you can copy gdb to gdb2 and then type ./gdb ./gdb2.
When you run gdb in the gdb source directory, it will read a .gdbinit file that sets up some simple things to make debugging gdb easier. The info command, when executed without a subcommand in a gdb being debugged by gdb, will pop you back up to the top level gdb. See .gdbinit for details.
If you use emacs, you will probably want to do a make TAGS after you configure your distribution; this will put the machine dependent routines for your local machine where they will be accessed first by M-.

Also, make sure that you've either compiled gdb with your local cc, or have run fixincludes if you are compiling with gcc.
Thanks for thinking of offering your changes back to the community of gdb users. In general we like to get well designed enhancements. Thanks also for checking in advance about the best way to transfer the changes.
The gdb maintainers will only install “cleanly designed” patches. This manual summarizes what we believe to be clean design for gdb.
If the maintainers don't have time to put the patch in when it arrives, or if there is any question about a patch, it goes into a large queue with everyone else's patches and bug reports.
The legal issue is that to incorporate substantial changes requires a
copyright assignment from you and/or your employer, granting ownership
of the changes to the Free Software Foundation. You can get the
standard documents for doing this by sending mail to gnu@gnu.org
and asking for it. We recommend that people write in "All programs
owned by the Free Software Foundation" as "NAME OF PROGRAM", so that
changes in many programs (not just gdb, but GAS, Emacs, GCC,
etc) can be
contributed with only one piece of legalese pushed through the
bureaucracy and filed with the FSF. We can't start merging changes until
this paperwork is received by the FSF (their rules, which we follow
since we maintain it for them).
Technically, the easiest way to receive changes is to receive each feature as a small context diff or unidiff, suitable for patch.
Each message sent to me should include the changes to C code and
header files for a single feature, plus ChangeLog entries for
each directory where files were modified, and diffs for any changes
needed to the manuals (gdb/doc/gdb.texinfo or
gdb/doc/gdbint.texinfo). If there are a lot of changes for a
single feature, they can be split down into multiple messages.
In this way, if we read and like the feature, we can add it to the sources with a single patch command, do some testing, and check it in. If you leave out the ChangeLog, we have to write one. If you leave out the doc, we have to puzzle out what needs documenting. Etc., etc.
The reason to send each change in a separate message is that we will not install some of the changes. They'll be returned to you with questions or comments. If we're doing our job correctly, the message back to you will say what you have to fix in order to make the change acceptable. The reason to have separate messages for separate features is so that the acceptable changes can be installed while one or more changes are being reworked. If multiple features are sent in a single message, we tend to not put in the effort to sort out the acceptable changes from the unacceptable, so none of the features get installed until all are acceptable.
If this sounds painful or authoritarian, well, it is. But we get a lot of bug reports and a lot of patches, and many of them don't get installed because we don't have the time to finish the job that the bug reporter or the contributor could have done. Patches that arrive complete, working, and well designed, tend to get installed on the day they arrive. The others go into a queue and get installed as time permits, which, since the maintainers have many demands to meet, may not be for quite some time.
Please send patches directly to the gdb maintainers.
Fragments of old code in gdb sometimes reference or set the following configuration macros. They should not be used by new code, and old uses should be removed as those parts of the debugger are otherwise touched.
STACK_END_ADDR
Any foo-xdep.c file that references STACK_END_ADDR is so old that it has never been converted to use BFD. Now that's old!
An observer is an entity which is interested in being notified when GDB reaches certain states, or certain events occur in GDB. The entity being observed is called the subject. To receive notifications, the observer attaches a callback to the subject. One subject can have several observers.
observer.c implements an internal generic low-level event notification mechanism. This generic event notification mechanism is then re-used to implement the exported high-level notification management routines for all possible notifications.
The current implementation of the generic observer provides support for contextual data. This contextual data is given to the subject when attaching the callback. In return, the subject will provide this contextual data back to the observer as a parameter of the callback.
Note that the current support for the contextual data is only partial, as it lacks a mechanism that would deallocate this data when the callback is detached. This is not a problem so far, as this contextual data is only used internally to hold a function pointer. Later on, if a certain observer needs to provide support for user-level contextual data, then the generic notification mechanism will need to be enhanced to allow the observer to provide a routine to deallocate the data when attaching the callback.
The observer implementation is also currently not reentrant. In particular, it is therefore not possible to call the attach or detach routines during a notification.
Observer notifications can be traced using the command `set debug observer 1' (see Optional messages about internal happenings).
normal_stop Notifications

gdb notifies all normal_stop observers when the inferior execution has just stopped, the associated messages and annotations have been printed, and the control is about to be returned to the user.

Note that the normal_stop notification is not emitted when the execution stops due to a breakpoint, and this breakpoint has a condition that is not met. If the breakpoint has any associated commands list, the commands are executed after the notification is emitted.
The following interfaces are available to manage observers:
Using the function f, create an observer that is notified whenever event occurs, and return the observer.
Remove observer from the list of observers to be notified when event occurs.
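As a minimal sketch of attaching an observer (the callback body and initializer name are illustrative; the callback signature shown here is an assumption based on the gdb 6.x normal_stop observer and may differ between versions):

static void
my_normal_stop_observer (struct bpstats *bs)
{
  fprintf_unfiltered (gdb_stdlog, "inferior stopped\n");
}

void
_initialize_my_module (void)
{
  observer_attach_normal_stop (my_normal_stop_observer);
}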
The following observable events are defined:
The target's register contents have changed.
The executable being debugged by GDB has changed: The user decided to debug a different program, or the program he was debugging has been modified since being loaded by the debugger (by being recompiled, for instance).
gdb has just connected to an inferior. For `run', gdb calls this observer while the inferior is still stopped at the entry-point instruction. For `attach' and `core', gdb calls this observer immediately after connecting to the inferior, and before any information on the inferior has been printed.
The shared library specified by solib has been loaded. Note that when gdb calls this observer, the library's symbols probably haven't been loaded yet.
The shared library specified by solib has been unloaded.
Copyright © 2000,2001,2002 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ascii without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the Document means that it remains a section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements.”
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
Copyright (C) year your name. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being list their titles, with the Front-Cover Texts being list, and with the Back-Cover Texts being list.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
Index (each entry names the chapter in which the item is described):
_initialize_language: Language Support
a.out format: Symbol Handling
add_cmd: User Interface
add_com: User Interface
add_setshow_cmd: User Interface
add_setshow_cmd_full: User Interface
add_symtab_fns: Symbol Handling
ADDR_BITS_REMOVE: Target Architecture Definition
ADDRESS_CLASS_NAME_TO_TYPE_FLAGS: Target Architecture Definition
ADDRESS_CLASS_NAME_TO_TYPE_FLAGS_P: Target Architecture Definition
ADDRESS_CLASS_TYPE_FLAGS: Target Architecture Definition
ADDRESS_CLASS_TYPE_FLAGS (byte_size, dwarf2_addr_class): Target Architecture Definition
ADDRESS_CLASS_TYPE_FLAGS_P: Target Architecture Definition
ADDRESS_CLASS_TYPE_FLAGS_TO_NAME: Target Architecture Definition
ADDRESS_CLASS_TYPE_FLAGS_TO_NAME_P: Target Architecture Definition
ADDRESS_TO_POINTER: Target Architecture Definition
ADJUST_BREAKPOINT_ADDRESS: Target Architecture Definition
ALIGN_STACK_ON_STARTUP: Host Definition
allocate_symtab: Language Support
ATTR_NORETURN: Host Definition
BELIEVE_PCC_PROMOTION: Target Architecture Definition
BIG_BREAKPOINT: Target Architecture Definition
BITS_BIG_ENDIAN: Target Architecture Definition
BPT_VECTOR: Target Architecture Definition
BREAKPOINT: Algorithms
BREAKPOINT: Target Architecture Definition
BREAKPOINT_FROM_PC: Target Architecture Definition
bug-gdb mailing list: Getting Started
CALL_DUMMY_LOCATION: Target Architecture Definition
CANNOT_FETCH_REGISTER: Target Architecture Definition
CANNOT_STEP_HW_WATCHPOINTS: Algorithms
CANNOT_STORE_REGISTER: Target Architecture Definition
CC_HAS_LONG_LONG: Host Definition
char: Target Architecture Definition
CHILD_PREPARE_TO_STORE: Native Debugging
cleanup: User Interface
CLEAR_SOLIB: Native Debugging
CONVERT_REGISTER_P: Target Architecture Definition
create_new_frame: Algorithms
CRLF_SOURCE_FILES: Host Definition
current_language: Language Support
DEBUG_PTRACE: Native Debugging
DECR_PC_AFTER_BREAK: Target Architecture Definition
DEFAULT_PROMPT: Host Definition
deprecate_cmd: User Interface
DEPRECATED_BIG_REMOTE_BREAKPOINT: Target Architecture Definition
DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS: Target Architecture Definition
DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS_P: Target Architecture Definition
DEPRECATED_FP_REGNUM: Target Architecture Definition
DEPRECATED_FRAME_CHAIN: Target Architecture Definition
DEPRECATED_FRAME_CHAIN_VALID: Target Architecture Definition
DEPRECATED_FRAME_INIT_SAVED_REGS: Target Architecture Definition
DEPRECATED_FRAME_SAVED_PC: Target Architecture Definition
DEPRECATED_FRAMELESS_FUNCTION_INVOCATION: Target Architecture Definition
DEPRECATED_FUNCTION_START_OFFSET: Target Architecture Definition
DEPRECATED_GET_SAVED_REGISTER: Target Architecture Definition
DEPRECATED_IBM6000_TARGET: Target Architecture Definition
DEPRECATED_INIT_EXTRA_FRAME_INFO: Target Architecture Definition
DEPRECATED_INIT_FRAME_PC: Target Architecture Definition
DEPRECATED_LITTLE_REMOTE_BREAKPOINT: Target Architecture Definition
DEPRECATED_POP_FRAME: Target Architecture Definition
DEPRECATED_PUSH_ARGUMENTS: Target Architecture Definition
DEPRECATED_REG_STRUCT_HAS_ADDR: Target Architecture Definition
DEPRECATED_REGISTER_RAW_SIZE: Target Architecture Definition
DEPRECATED_REGISTER_VIRTUAL_SIZE: Target Architecture Definition
DEPRECATED_REMOTE_BREAKPOINT: Target Architecture Definition
DEPRECATED_SIGTRAMP_END: Target Architecture Definition
DEPRECATED_SIGTRAMP_START: Target Architecture Definition
DEPRECATED_STACK_ALIGN: Target Architecture Definition
DEPRECATED_USE_STRUCT_CONVENTION: Target Architecture Definition
DEV_TTY: Host Definition
DIRNAME_SEPARATOR: Coding
DISABLE_UNSETTABLE_BREAK: Target Architecture Definition
discard_cleanups: Coding
do_cleanups: Coding
DWARF2_REG_TO_REGNUM: Target Architecture Definition
DWARF_REG_TO_REGNUM: Target Architecture Definition
ECOFF_REG_TO_REGNUM: Target Architecture Definition
END_OF_TEXT_DEFAULT: Target Architecture Definition
evaluate_subexp: Language Support
executable_changed: GDB Observers
EXTRACT_RETURN_VALUE: Target Architecture Definition
extract_typed_address: Target Architecture Definition
fetch_core_registers: Native Debugging
FETCH_INFERIOR_REGISTERS: Native Debugging
FILENAME_CMP: Coding
find_pc_function: Symbol Handling
find_pc_line: Symbol Handling
find_sym_fns: Symbol Handling
gdbarch structure: Target Architecture Definition
FOPEN_RB: Host Definition
FP0_REGNUM: Native Debugging
frame_align: Target Architecture Definition
FRAME_FP: Algorithms
FRAME_NUM_ARGS: Target Architecture Definition
frame_pop: Target Architecture Definition
FUNCTION_EPILOGUE_SIZE: Target Architecture Definition
GCC2_COMPILED_FLAG_SYMBOL: Target Architecture Definition
GCC_COMPILED_FLAG_SYMBOL: Target Architecture Definition
GDB_MULTI_ARCH: Target Architecture Definition
gdb_osabi: Target Architecture Definition
GDB_OSABI_ARM_APCS: Target Architecture Definition
GDB_OSABI_ARM_EABI_V1: Target Architecture Definition
GDB_OSABI_ARM_EABI_V2: Target Architecture Definition
GDB_OSABI_FREEBSD_AOUT: Target Architecture Definition
GDB_OSABI_FREEBSD_ELF: Target Architecture Definition
GDB_OSABI_GO32: Target Architecture Definition
GDB_OSABI_HURD: Target Architecture Definition
GDB_OSABI_LINUX: Target Architecture Definition
GDB_OSABI_NETBSD_AOUT: Target Architecture Definition
GDB_OSABI_NETBSD_ELF: Target Architecture Definition
GDB_OSABI_NETWARE: Target Architecture Definition
GDB_OSABI_OSF1: Target Architecture Definition
GDB_OSABI_SOLARIS: Target Architecture Definition
GDB_OSABI_SVR4: Target Architecture Definition
GDB_OSABI_UNKNOWN: Target Architecture Definition
GDB_OSABI_WINCE: Target Architecture Definition
GDB_TARGET_IS_HPPA: Target Architecture Definition
gdbarch_data: Coding
gdbarch_in_function_epilogue_p: Target Architecture Definition
gdbarch_init_osabi: Target Architecture Definition
gdbarch_register_osabi: Target Architecture Definition
gdbarch_register_osabi_sniffer: Target Architecture Definition
gdbarch_return_value: Target Architecture Definition
GDBINIT_FILENAME: Host Definition
GET_LONGJMP_TARGET: Native Debugging
GET_LONGJMP_TARGET: Algorithms
GET_LONGJMP_TARGET: Target Architecture Definition
HAVE_CONTINUABLE_WATCHPOINT: Algorithms
HAVE_DOS_BASED_FILE_SYSTEM: Coding
HAVE_LONG_DOUBLE: Host Definition
HAVE_MMAP: Host Definition
HAVE_NONSTEPPABLE_WATCHPOINT: Algorithms
HAVE_STEPPABLE_WATCHPOINT: Algorithms
HAVE_TERMIO: Host Definition
i386_cleanup_dregs: Algorithms
I386_DR_LOW_GET_STATUS: Algorithms
I386_DR_LOW_RESET_ADDR: Algorithms
I386_DR_LOW_SET_ADDR: Algorithms
I386_DR_LOW_SET_CONTROL: Algorithms
i386_insert_hw_breakpoint: Algorithms
i386_insert_watchpoint: Algorithms
i386_region_ok_for_watchpoint: Algorithms
i386_remove_hw_breakpoint: Algorithms
i386_remove_watchpoint: Algorithms
i386_stopped_by_hwbp: Algorithms
i386_stopped_by_watchpoint: Algorithms
i386_stopped_data_address: Algorithms
I386_USE_GENERIC_WATCHPOINTS: Algorithms
IN_SOLIB_CALL_TRAMPOLINE: Target Architecture Definition
IN_SOLIB_DYNSYM_RESOLVE_CODE: Target Architecture Definition
IN_SOLIB_RETURN_TRAMPOLINE: Target Architecture Definition
inferior_created: GDB Observers
INNER_THAN: Target Architecture Definition
INT_MAX: Host Definition
INT_MIN: Host Definition
INTEGER_TO_ADDRESS: Target Architecture Definition
IS_ABSOLUTE_PATH: Coding
IS_DIR_SEPARATOR: Coding
ISATTY: Host Definition
KERNEL_U_ADDR: Native Debugging
KERNEL_U_ADDR_HPUX: Native Debugging
L_SET: Host Definition
length_of_subexp: Language Support
libgdb: libgdb
libiberty library: Support Libraries
lint: Host Definition
LITTLE_BREAKPOINT: Target Architecture Definition
long long data type: Host Definition
LONG_MAX: Host Definition
LONGEST: Host Definition
longjmp debugging: Algorithms
LSEEK_NOT_LINEAR: Host Definition
make_cleanup: Coding
MEMORY_INSERT_BREAKPOINT: Target Architecture Definition
MEMORY_REMOVE_BREAKPOINT: Target Architecture Definition
mmap: Host Definition
NAME_OF_MALLOC: Target Architecture Definition
NATDEPFILES: Native Debugging
NO_HIF_SUPPORT: Target Architecture Definition
NO_STD_REGS: Host Definition
NORETURN: Host Definition
normal_stop: GDB Observers
normal_stop observer: GDB Observers
obstacks: Support Libraries
ONE_PROCESS_WRITETEXT: Native Debugging
op_print_tab: Language Support
OS9K_VARIABLES_INSIDE_BLOCK: Target Architecture Definition
PARM_BOUNDARY: Target Architecture Definition
parse_exp_1: Language Support
PC_LOAD_SEGMENT: Target Architecture Definition
PC_REGNUM: Target Architecture Definition
POINTER_TO_ADDRESS: Target Architecture Definition
prefixify_subexp: Language Support
PRINT_FLOAT_INFO: Target Architecture Definition
print_registers_info: Target Architecture Definition
print_subexp: Language Support
PRINT_VECTOR_INFO: Target Architecture Definition
PRINTF_HAS_LONG_DOUBLE: Host Definition
PRINTF_HAS_LONG_LONG: Host Definition
PROC_NAME_FMT: Native Debugging
PROCESS_LINENUMBER_HOOK: Target Architecture Definition
PROLOGUE_FIRSTLINE_OVERLAP: Target Architecture Definition
PS_REGNUM: Target Architecture Definition
PTRACE_ARG3_TYPE: Native Debugging
push_dummy_call: Target Architecture Definition
push_dummy_code: Target Architecture Definition
read_fp: Target Architecture Definition
read_pc: Target Architecture Definition
read_sp: Target Architecture Definition
REGISTER_CONVERT_TO_RAW: Target Architecture Definition
REGISTER_CONVERT_TO_TYPE: Target Architecture Definition
REGISTER_CONVERT_TO_VIRTUAL: Target Architecture Definition
REGISTER_CONVERTIBLE: Target Architecture Definition
REGISTER_NAME: Target Architecture Definition
register_reggroup_p: Target Architecture Definition
REGISTER_TO_VALUE: Target Architecture Definition
register_type: Target Architecture Definition
REGISTER_U_ADDR: Native Debugging
REGISTER_VIRTUAL_TYPE: Target Architecture Definition
regset_from_core_section: Target Architecture Definition
REMOTE_BPT_VECTOR: Target Architecture Definition
SAVE_DUMMY_FRAME_TOS: Target Architecture Definition
SCANF_HAS_LONG_DOUBLE: Host Definition
SDB_REG_TO_REGNUM: Target Architecture Definition
SEEK_CUR: Host Definition
SEEK_SET: Host Definition
SHELL_COMMAND_CONCAT: Native Debugging
SHELL_FILE: Native Debugging
SIGWINCH_HANDLER: Host Definition
SIGWINCH_HANDLER_BODY: Host Definition
SKIP_PERMANENT_BREAKPOINT: Target Architecture Definition
SKIP_PROLOGUE: Target Architecture Definition
SKIP_SOLIB_RESOLVER: Target Architecture Definition
SKIP_TRAMPOLINE_CODE: Target Architecture Definition
SLASH_STRING: Coding
SOFTWARE_SINGLE_STEP: Target Architecture Definition
SOFTWARE_SINGLE_STEP_P: Target Architecture Definition
SOFUN_ADDRESS_MAYBE_MISSING: Target Architecture Definition
SOLIB_ADD: Native Debugging
SOLIB_CREATE_INFERIOR_HOOK: Native Debugging
solib_loaded: GDB Observers
solib_unloaded: GDB Observers
SP_REGNUM: Target Architecture Definition
STAB_REG_TO_REGNUM: Target Architecture Definition
stabs_argument_has_addr: Target Architecture Definition
START_INFERIOR_TRAPS_EXPECTED: Native Debugging
STEP_SKIPS_DELAY: Target Architecture Definition
STOP_SIGNAL: Host Definition
STOPPED_BY_WATCHPOINT: Algorithms
STORE_RETURN_VALUE: Target Architecture Definition
store_typed_address: Target Architecture Definition
struct: GDB Observers
struct value, converting register contents to: Target Architecture Definition
sym_fns structure: Symbol Handling
SYMBOL_RELOADING_DEFAULT: Target Architecture Definition
SYMBOLS_CAN_START_WITH_DOLLAR: Target Architecture Definition
TARGET_CAN_USE_HARDWARE_WATCHPOINT: Algorithms
target_changed: GDB Observers
TARGET_CHAR_BIT: Target Architecture Definition
TARGET_CHAR_SIGNED: Target Architecture Definition
TARGET_COMPLEX_BIT: Target Architecture Definition
TARGET_DOUBLE_BIT: Target Architecture Definition
TARGET_DOUBLE_COMPLEX_BIT: Target Architecture Definition
TARGET_FLOAT_BIT: Target Architecture Definition
TARGET_HAS_HARDWARE_WATCHPOINTS: Algorithms
target_insert_hw_breakpoint: Algorithms
target_insert_watchpoint: Algorithms
TARGET_INT_BIT: Target Architecture Definition
TARGET_LONG_BIT: Target Architecture Definition
TARGET_LONG_DOUBLE_BIT: Target Architecture Definition
TARGET_LONG_LONG_BIT: Target Architecture Definition
TARGET_PRINT_INSN: Target Architecture Definition
TARGET_PTR_BIT: Target Architecture Definition
TARGET_READ_FP: Target Architecture Definition
TARGET_READ_PC: Target Architecture Definition
TARGET_READ_SP: Target Architecture Definition
TARGET_REGION_OK_FOR_HW_WATCHPOINT: Algorithms
TARGET_REGION_SIZE_OK_FOR_HW_WATCHPOINT: Algorithms
target_remove_hw_breakpoint: Algorithms
target_remove_watchpoint: Algorithms
TARGET_SHORT_BIT: Target Architecture Definition
target_stopped_data_address: Algorithms
TARGET_VIRTUAL_FRAME_POINTER: Target Architecture Definition
TARGET_WRITE_PC: Target Architecture Definition
TDEPFILES: Target Architecture Definition
type: Target Architecture Definition
U_REGS_OFFSET: Native Debugging
ui_out functions: User Interface
ui_out functions, usage examples: User Interface
ui_out_field_core_addr: User Interface
ui_out_field_fmt: User Interface
ui_out_field_fmt_int: User Interface
ui_out_field_int: User Interface
ui_out_field_skip: User Interface
ui_out_field_stream: User Interface
ui_out_field_string: User Interface
ui_out_flush: User Interface
ui_out_list_begin: User Interface
ui_out_list_end: User Interface
ui_out_message: User Interface
ui_out_spaces: User Interface
ui_out_stream_delete: User Interface
ui_out_table_begin: User Interface
ui_out_table_body: User Interface
ui_out_table_end: User Interface
ui_out_table_header: User Interface
ui_out_text: User Interface
ui_out_tuple_begin: User Interface
ui_out_tuple_end: User Interface
ui_out_wrap_hint: User Interface
ui_stream: User Interface
UINT_MAX: Host Definition
ULONG_MAX: Host Definition
unwind_dummy_id: Target Architecture Definition
unwind_pc: Target Architecture Definition
unwind_sp: Target Architecture Definition
USE_PROC_FS: Native Debugging
USG: Host Definition
value_as_address: Target Architecture Definition
value_from_pointer: Target Architecture Definition
VALUE_TO_REGISTER: Target Architecture Definition
VARIABLES_INSIDE_BLOCK: Target Architecture Definition
void: GDB Observers
volatile: Host Definition
wrap_here: Coding
write_pc: Target Architecture Definition

[1] The function cast is not portable ISO C.
[2] As of this writing (April 2001), setting the verbosity level is not yet implemented: the current verbosity is always reported as zero. Calling ui_out_message with a verbosity argument greater than zero therefore means the message will never be printed.
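As a purely illustrative sketch of that limitation: the command function below, its name, and its message strings are hypothetical; only ui_out_message and the global uiout pointer are taken from the User Interface chapter. While the verbosity level is stuck at zero, only the first call would ever produce output.

    /* Hypothetical gdb command body.  With the verbosity level always
       zero, the first message is printed and the second is suppressed.  */
    static void
    my_info_command (char *args, int from_tty)
    {
      ui_out_message (uiout, 0, "always printed\n");
      ui_out_message (uiout, 1, "printed only when verbosity >= 1\n");
    }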
[3] Some D10V instructions are actually pairs of 16-bit sub-instructions. However, since you cannot jump into the middle of such a pair, code addresses can only refer to full 32-bit instructions, which is what matters in this explanation.
[4] The above is from the original example and uses K&R C. gdb has since been converted to ISO C, but let's ignore that here.