Most Be applications have an interactive and graphical user interface. When they start up, they present themselves to the user on-screen in one or more windows. The windows display areas where the user can do something--there may be menus to open, buttons to click, text fields to type in, images to drag, and so on. Each user action on the keyboard or mouse is packaged as an interface message and reported to the application. The application responds to each message as it is received. At least part of the response is always a change in what the window displays--so that users can see the results of their work.
To run this kind of user interface, an application has to do three things: it must draw to present itself on-screen, learn of the user's actions on the keyboard and mouse, and respond to each action as it's reported, typically by drawing again.
The application, in effect, carries on a conversation with the user. It draws to present itself on-screen, the user does something with the keyboard or mouse, the event is reported to the application in a message, and the application draws in response, prompting more user actions and more messages.
The Interface Kit structures this interaction with the user. It defines a set of C++ classes that give applications the ability to manage windows, draw in them, and efficiently respond to the user's instructions. Taken together, these classes define a framework for interactive applications. By programming with the Kit, you'll be able to construct an application that effectively uses the capabilities of the BeBox.
This chapter first introduces the conceptual framework for the user interface, then describes all the classes, functions, types, and constants the Kit defines. The reference material that follows this introduction assumes the concepts and terminology presented here.
A graphical user interface is organized around windows. Each window has a particular role to play in an application and is more or less independent of other windows. While working on the computer, users think in terms of windows--what's in them and what can be done with them--perhaps more than in terms of applications.
The design of the software mirrors the way the user interface works: it's also organized around windows. Within an application, each window runs in its own thread and is represented by a separate BWindow object. The object is the application's interface to the window the system provides; the thread is where all the work that's centered on the window takes place.
Because every window has its own thread, the user can, for example, scroll the contents of one window while watching an animation in another, or start a time-consuming computation in an application and still be able to use the application's other windows. A window won't stop working when the user turns to another window.
Commands that the user gives to a particular window initiate activity within that window's thread. When the user clicks a button within a window, for example, everything that happens in response to the click happens in the window thread (unless the application arranges for other threads to be involved). In its interaction with the user, each window acts on its own, independently of other windows.
In a multitasking environment, any number of applications might be running at the same time, each with its own set of windows on-screen. The windows of all running applications must cooperate in a common interface. For example, there can be only one active window at a time--not one per application, but one per machine. A window that comes to the front must jump over every other window, not just those belonging to the same application. When the active window is closed, the window behind it must become active, even if it belongs to a different application.
Because it would be difficult for each application to manage the interaction of its windows with every other application, windows are assigned, at the lowest level, to a separate entity, the Application Server. The Server's principal role in the user interface is to provide applications with the windows they require.
Everything a program or a user does is centered on the windows the Application Server provides. Users type into windows, click buttons in windows, drag images to windows, and so on; applications draw in windows to display the text users type, the buttons they can click, and the images they can drag.
The Application Server, therefore, is the conduit for an application's message input and drawing output:
The Server relieves applications of much of the burden of basic user-interface work. The Interface Kit organizes and further simplifies an application's interaction with the Server.
Every window in an application is represented by a separate BWindow object. Constructing the BWindow establishes a connection to the Application Server--one separate from, but initially dependent on, the connection previously established by the BApplication object. The Server creates a window for the new object and dedicates a separate thread to it.
The BWindow object is a kind of BLooper, so it spawns a thread for the window in the application's address space and begins running a message loop where it receives and responds to interface messages from the Server. The window thread in the application is directly connected to the dedicated thread in the Server.
The BWindow object, therefore, is in position to serve three crucial roles:
All other Interface Kit objects play roles that depend on a BWindow. They draw in a window, respond to interface messages received by a window, or act in support of other objects that draw and respond to messages.
For purposes of drawing and message-handling, a window can be divided up into smaller rectangular areas called views. Each view corresponds to one part of what the window displays--a scroll bar, a document, a list, a button, or some other more or less self-contained portion of the window's contents.
An application sets up a view by constructing a BView object and associating it with a particular BWindow. The BView object is responsible for drawing within the view rectangle, and for handling interface messages directed at that area.
A window is a tablet that can retain and display rendered images, but it can't draw them; for that it needs a set of BViews. A BView is an agent for drawing, but it can't render the images it creates; for that it needs a BWindow. The two kinds of objects work hand in hand.
Each BView object is an autonomous graphics environment for drawing. Some aspects of the environment, such as the list of possible colors, are shared by all BViews and all applications. But within those broad limits, every BView maintains an independent graphics state. It has its own coordinate system, current colors, drawing mode, clipping region, pen position, and so on.
The BView class defines the functions that applications call to carry out elemental drawing tasks--such as stroking lines, filling shapes, drawing characters, and imaging bitmaps. These functions are typically used to implement another function--called Draw()--in a class derived from BView. This view-specific function draws the contents of the view rectangle.
The BWindow will call the BView's Draw() function whenever the window's contents (or at least the part that the BView has control over) need to be updated. A BWindow first asks its BViews to draw when the window is initially placed on-screen. Thereafter, they might be asked to refresh the contents of the window whenever the contents change or when they're revealed after being hidden or obscured. A BView might be called upon to draw at any time.
Because Draw() is called on the command of others, not the BView, it can be considered to draw passively. It presents the view as it currently appears. For example, the Draw() function of a BView that displays editable text would draw the characters that the user had inserted up to that point.
BViews also draw actively in response to messages reporting the user's actions. For example, text is highlighted as the user drags over it and is replaced as the user types. Each change is the result of a system message reported to the BView. For passive drawing, the BView implements a function (Draw()) that others may call. For active drawing, it calls the drawing functions itself (it may even call Draw()).
The drawing that a BView does is often designed to prompt a user response of some kind--an empty text field with a blinking caret invites typed input, a menu item or a button invites a click, an icon looks like it can be dragged, and so on.
When the user acts, system messages that report the resulting events are sent to the BWindow object, which determines which BView elicited the user action and should respond to it. For example, a BView that draws typed text can expect to respond to messages reporting the user's keystrokes. A BView that draws a button gets to handle the messages that are generated when the button is clicked. The BView class derives from BHandler, so BView objects are eligible to handle messages dispatched by the BWindow.
Just as classes derived from BView implement Draw() functions to draw within the view rectangle, they also implement the hook functions that respond to interface messages. These functions are discussed later, under Hook Functions for Interface Messages.
Largely because of its graphics role and its central role in handling interface messages, BView is the biggest and most diverse class in the Interface Kit. Most other Interface Kit classes are derived from it.
A window typically contains a number of different views--all arranged in a hierarchy beneath the top view, a view that's exactly the same size as the content area of the window. The top view is a companion of the window; it's created by the BWindow object when the BWindow is constructed. When the window is resized, the top view is resized to match. Unlike other views, the top view doesn't draw or respond to messages; it serves merely to connect the window to the views that the application creates and places in the hierarchy.
As illustrated in the diagram below, the view hierarchy can be represented as a branching tree structure with the top view at its root. All views in the hierarchy (except the top view) have one, and only one, parent view. Each view (including the top view) can have any number of child views.
In this diagram, the top view has four children, the container view has three, and the border view one. Child views are located within their parents, so the hierarchy is one of overlapping rectangles. The container view, for example, takes up some of the top view's area and divides its own area into a document view and two scroll bars.
When a new BView object is created, it isn't attached to a window and it has no parent. It's added to a window by making it a child of a view already in the view hierarchy. This is done with the AddChild() function. A view can be made a child of the window's top view by calling BWindow's version of AddChild().
Until it's assigned to a window, a BView can't draw and won't receive reports of events. BViews know how to produce images, but it takes a window to display and retain the images they create.
The view hierarchy determines what's displayed where on-screen, and also how user actions are associated with the responsible BView object:
Although children wait for their parents when it comes time to draw and parents defer to their offspring when it comes time to respond to interface messages, sibling views are not so well-behaved. Siblings don't draw in any predefined order. This doesn't matter, as long as the view rectangles of the siblings don't overlap. If they do overlap, it's indeterminate which view will draw last--that is, which one will draw on top of the other.
Similarly, it's indeterminate which view will be associated with mouse events in the area the siblings share. It may be one view or it may be the other, and it won't necessarily be the one that drew the image the user sees.
Therefore, it's strongly recommended that sibling views be arranged so that they don't overlap.
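The way a window associates a mouse event with a view can be modeled as a depth-first search of the hierarchy: children are consulted before their parent, so the deepest view containing the point responds. The sketch below is plain C++, not the Kit's own dispatch code; the types and the HitTest() name are hypothetical stand-ins.

```cpp
#include <cassert>
#include <vector>

// A stand-in for a view's frame rectangle, in the parent's coordinates.
struct Rect {
    float left, top, right, bottom;
    bool Contains(float x, float y) const {
        return x >= left && x <= right && y >= top && y <= bottom;
    }
};

// A hypothetical model of a view hierarchy -- not the Be API.
struct View {
    Rect frame;                  // position within the parent view
    std::vector<View*> children;
};

// Find the view that should respond to a mouse event at (x, y), given
// in the coordinate system of `view`'s parent. Children are searched
// before the parent, so the deepest view containing the point wins.
View* HitTest(View* view, float x, float y) {
    if (!view->frame.Contains(x, y))
        return nullptr;
    // Translate the point into this view's own coordinate space.
    float localX = x - view->frame.left;
    float localY = y - view->frame.top;
    for (View* child : view->children) {
        if (View* hit = HitTest(child, localX, localY))
            return hit;
    }
    return view;   // no child claimed the point; this view responds
}
```

Note that when two siblings overlap, whichever child happens to come first in the list claims the point--mirroring the indeterminacy described above.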
To locate windows and views, draw in them, and report where the cursor is positioned over them, it's necessary to have some conventional way of talking about the display surface. The same conventions are used whether the display device is a monitor that shows images on a screen or a printer that puts them on a page.
In Be software, the display surface is described by a standard two-dimensional coordinate system where the y-axis extends downward and the x-axis extends to the right, as illustrated below:
y coordinate values are greater towards the bottom of the display and smaller towards the top; x coordinate values are greater to the right and smaller to the left.
The axes define a continuous coordinate space where distances are measured by floating-point values (floats). All quantities in this space--including widths and heights, x and y coordinates, font sizes, angles, and the size of the pen--are floating-point numbers.
Floating-point coordinates permit precisely stated measurements that can take advantage of display devices with higher resolutions than the screen. For example, a vertical line 0.4 units wide would be displayed using a single column of pixels on-screen, the same as a line 1.4 units wide. However, a 300 dpi printer would use two pixel columns to print the 0.4-unit line and six to print the 1.4-unit line.
A coordinate unit is 1/72 of an inch, roughly equal to a typographical point. However, all screens are considered to have a resolution of 72 pixels per inch (regardless of the actual dimension), so coordinate units count screen pixels. One unit is the distance between the centers of adjacent pixels on-screen.
Specific coordinate systems are associated with the screen, with windows, and with the views inside windows. They differ only in where the two axes are located:
The Interface Kit defines a handful of basic classes for locating points and areas within a coordinate system:
The sides of the rectangle are therefore parallel to the coordinate axes. The left and right sides delimit the range of x coordinate values within the rectangle, and the top and bottom sides delimit the range of y coordinate values. For example, if a rectangle's left top corner is at (0.8, 2.7) and its right bottom corner is at (11.3, 49.5), all points having x coordinates ranging from 0.8 through 11.3 and y coordinates from 2.7 through 49.5 lie inside the rectangle.
If the top of a rectangle is the same as its bottom, or its left the same as its right, the rectangle defines a straight line. If the top and bottom are the same and also the left and right, it collapses to a single point. Such rectangles are still valid--they specify real locations within a coordinate system. However, if the top is greater than the bottom or the left greater than the right, the rectangle is invalid; it has no meaning.
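The rectangle conventions above can be captured in a few lines of plain C++. This is only a sketch of the semantics (the Kit's own class is BRect); the Rect type here is a hypothetical stand-in.

```cpp
#include <cassert>

// A plain C++ model of the rectangle conventions described above.
struct Rect {
    float left, top, right, bottom;

    // A rectangle is valid as long as left <= right and top <= bottom.
    // Lines (top == bottom or left == right) and points are still valid.
    bool IsValid() const { return left <= right && top <= bottom; }

    // A point lies inside the rectangle if both of its coordinates fall
    // within the ranges delimited by the rectangle's sides.
    bool Contains(float x, float y) const {
        return IsValid() && x >= left && x <= right
                         && y >= top  && y <= bottom;
    }
};
```

With the corners from the example above, (0.8, 2.7) and (11.3, 49.5) both lie inside the rectangle, while a rectangle whose left exceeds its right reports itself invalid.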
The device-independent coordinate space described above must be mapped to the pixel grid of a particular display device--the screen, a printer, or some other piece of hardware that's capable of rendering an image. For example, to display a rectangle, it's necessary to find the pixel columns that correspond to its right and left sides and the pixel rows that correspond to its top and bottom.
This depends entirely on the resolution of the device. In essence, each device-independent coordinate value must be translated internally to a device-dependent value--an integer index to a particular column or row of pixels. In the coordinate space of the device, one unit equals one pixel.
This translation is easy for the screen, since, as mentioned above, there's a one-to-one correspondence between coordinate units and pixels. It reduces to rounding floating-point coordinates to integers. For other devices, however, the translation means first scaling the coordinate value to a device-specific value, then rounding. For example, the point (12.3, 40.8) would translate to (12, 41) on the screen, but to (51, 170) on a 300 dpi printer.
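The scale-then-round translation can be sketched as a single function. The helper name below is hypothetical; the arithmetic follows the rule just described.

```cpp
#include <cassert>
#include <cmath>

// Translate a device-independent coordinate value (in units of 1/72
// inch) to a pixel index on a device with the given resolution:
// scale to the device's units, then round to the nearest integer.
// At 72 dpi (the screen), the scaling step is a no-op.
int CoordToPixel(float coord, float dpi) {
    return (int)std::lround(coord * dpi / 72.0f);
}
```

For the point (12.3, 40.8), this yields (12, 41) on the 72 dpi screen and (51, 170) on a 300 dpi printer, matching the example above.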
To map coordinate locations to device-specific pixels, you need to know only two things:
The axes are located in the same place for all devices: The x-axis runs left to right along the middle of a row of pixels and the y-axis runs down the middle of a pixel column. They meet at the very center of a pixel.
Because coordinate units match pixels on the screen, this means that all integral coordinate values (those without a fractional part) fall midway across a screen pixel. The following illustration shows where various x coordinate values fall on the x-axis. The broken lines represent the division of the screen into a pixel grid:
As this illustration shows, it's possible to have coordinate values that lie on the boundary between two pixels. A later section, Picking Pixels to Stroke and Fill, describes how these values are mapped to one pixel or the other.
Drawing is done by BView objects. As discussed above, the views within a window are organized into a hierarchy--there can be views within views--but each view is an independent drawing agent and maintains a separate graphics environment. This section discusses the framework in which BViews draw, beginning with view coordinate systems. Detailed descriptions of the functions mentioned here can be found in the BView and BWindow class descriptions.
As a convenience, each view is assigned a coordinate system of its own. By default, the coordinate origin--(0.0, 0.0)--is located at the left top corner of the view rectangle. (For an overview of the coordinate systems assumed by the Interface Kit, see The Coordinate Space above.)
When a view is added as a child of another view, it's located within the coordinate system of its parent. A child is considered part of the contents of the parent view. If the parent moves, the child moves with it; if the parent view scrolls its contents, the child view is shifted along with everything else in the view.
Since each view retains its own internal coordinate system no matter who its parent is, where it's located within the parent, or where the parent is located, a BView's drawing and message-handling code doesn't need to be concerned about anything exterior to itself. To do its work, a BView need look no farther than the boundaries of its own view rectangle.
Although a BView doesn't have to look outside its own boundaries, it does have to know where those boundaries are. It can get this information in two forms:
The illustration below shows a child view 180.0 units wide and 135.0 units high. When viewed from the outside, from the perspective of its parent's coordinate system, it has a frame rectangle with left, top, right, and bottom coordinates at 90.0, 60.0, 270.0, and 195.0, respectively. But when viewed from the inside, in the view's own coordinate system, it has a bounds rectangle with coordinates at 0.0, 0.0, 180.0, and 135.0:
When a view moves to a new location in its parent, its frame rectangle changes but not its bounds rectangle. When a view scrolls its contents, its bounds rectangle changes, but not its frame. The frame rectangle positions the view in the world outside; the bounds rectangle positions the contents inside the view.
Since a BView does its work in its own coordinate system, it refers to the bounds rectangle more often than to the frame rectangle.
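The arithmetic relating the two rectangles is simple offset math. The Kit's BView offers ConvertToParent() and ConvertFromParent() for conversions of this kind; the plain C++ helpers below are hypothetical sketches of the same idea, for a view that hasn't scrolled.

```cpp
#include <cassert>

struct Rect { float left, top, right, bottom; };

// Express a point given in the view's own (bounds) coordinates in the
// parent's coordinate system. `frame` is the view's frame rectangle.
float ToParentX(const Rect& frame, float x) { return x + frame.left; }
float ToParentY(const Rect& frame, float y) { return y + frame.top; }

// The bounds rectangle implied by a frame rectangle, before any
// scrolling: same size, origin at (0.0, 0.0).
Rect BoundsFor(const Rect& frame) {
    return Rect{0.0f, 0.0f,
                frame.right - frame.left, frame.bottom - frame.top};
}
```

Plugging in the frame from the illustration--(90.0, 60.0, 270.0, 195.0)--gives a bounds rectangle of (0.0, 0.0, 180.0, 135.0), and the view's own bottom-right corner maps back to (270.0, 195.0) in the parent.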
A BView scrolls its contents by shifting coordinate values within the view rectangle--that is, by altering the bounds rectangle. If, for example, the top of a view's bounds rectangle is at 100.0 and its bottom is at 200.0, scrolling downward 50.0 units would put the top at 150.0 and the bottom at 250.0. Contents of the view with y coordinate values of 150.0 to 200.0, originally displayed in the bottom half of the view, would be shifted to the top half. Contents with y coordinate values from 200.0 to 250.0, previously unseen, would become visible at the bottom of the view. This is illustrated below:
Scrolling doesn't move the view--it doesn't alter the frame rectangle--it moves only what's displayed inside the view. In the illustration above, a "data rectangle" encloses everything the BView is capable of drawing. For example, if the view is able to display an entire book, the data rectangle would be large enough to enclose all the lines and pages of the book laid end to end. However, since a BView can draw only within its bounds rectangle, everything in the data rectangle with coordinates that fall outside the bounds rectangle would be invisible. To make unseen data visible, the bounds rectangle must change the coordinates that it encompasses. Scrolling can be thought of as sliding the view's bounds rectangle to a new position on its data rectangle, as is shown in the illustration above. However, as it appears to the user, it's moving the data rectangle under the bounds rectangle. The view doesn't move; the data does.
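In plain C++ terms, scrolling is nothing more than offsetting the bounds rectangle while leaving the frame alone (the Kit's own call is BView's ScrollBy(); the sketch below is a hypothetical model, not that function).

```cpp
#include <cassert>

struct Rect { float left, top, right, bottom; };

// Model of scrolling: shift the coordinates the bounds rectangle
// encompasses. Positive dy scrolls downward through the data.
void ScrollBounds(Rect& bounds, float dx, float dy) {
    bounds.left += dx;  bounds.right  += dx;
    bounds.top  += dy;  bounds.bottom += dy;
}
```

Starting from a bounds rectangle whose top is at 100.0 and bottom at 200.0, scrolling downward 50.0 units puts the top at 150.0 and the bottom at 250.0, as in the example above; the rectangle's size never changes.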
The Application Server clips the images that a BView produces to the region where it's permitted to draw.
This region is never any larger than the view's bounds rectangle; a view cannot draw outside its bounds. Furthermore, since a child is considered part of its parent, a view can't draw outside the bounds rectangle of its parent either--or, for that matter, outside the bounds rectangle of any ancestor view. In addition, since child views draw after, and therefore logically in front of, their parents, a view concedes some of its territory to its children.
Thus, the visible region of a view is the part of its bounds rectangle that's inside the bounds rectangles of all its ancestors, minus the frame rectangles of its children. This is illustrated in the figure below. It shows a hierarchy of three views. The area filled with a crosshatch pattern is the visible region of view A; it omits the area occupied by its child, view B. The visible region of view B is colored dark gray; it omits the part of the view that lies outside its parent. View C has no visible region, for it lies outside the bounds rectangle of its ancestor, view A:
The visible region of a view might be further restricted if its window is obscured by another window or if the window it's in lies partially off-screen. The visible region includes only those areas that are actually visible to the user. For example, if the three views in the illustration above were in a window that was partially blocked by another window, their visible regions might be considerably smaller. This is illustrated below:
Note that in this case, view A has a discontinuous visible region.
The Application Server clips the drawing that a view does to a region that's never any larger than the visible region. On occasion, it may be smaller. For the sake of efficiency, while a view is being automatically updated, the clipping region excludes portions of the visible region that don't need to be redrawn:
An application can also limit the clipping region for a view by passing a BRegion object to ConstrainClippingRegion(). The clipping region won't include any areas that aren't in the region passed. The Application Server calculates the clipping region as it normally would, but intersects it with the specified region.
You can obtain the current clipping region for a view by calling GetClippingRegion(). (See also the BRegion class description.)
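The intersection that ConstrainClippingRegion() implies can be illustrated with rectangles. A real BRegion holds arbitrary collections of rectangles; the plain C++ sketch below intersects just one pair, which is enough to show the operation.

```cpp
#include <cassert>

struct Rect {
    float left, top, right, bottom;
    bool IsValid() const { return left <= right && top <= bottom; }
};

// Intersect two rectangles: take the larger of the left/top edges and
// the smaller of the right/bottom edges. If the rectangles don't
// overlap, the result comes out invalid -- an empty clipping area.
Rect Intersect(const Rect& a, const Rect& b) {
    Rect r;
    r.left   = a.left   > b.left   ? a.left   : b.left;
    r.top    = a.top    > b.top    ? a.top    : b.top;
    r.right  = a.right  < b.right  ? a.right  : b.right;
    r.bottom = a.bottom < b.bottom ? a.bottom : b.bottom;
    return r;
}
```

Intersecting a visible area of (0, 0, 100, 100) with a constraint of (50, 50, 150, 150) leaves (50, 50, 100, 100); a constraint that lies entirely elsewhere leaves nothing to draw into.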
Every view has a basic, underlying color. It's the color that fills the view rectangle before the BView does any drawing. The user may catch a glimpse of this color when the view is first shown on-screen, when it's resized larger, and when it's erased in preparation for an update. It will also be seen wherever the BView fails to draw in the visible region.
In a sense, the view color is the canvas on which the BView draws. It doesn't enter into any of the object's drawing operations except to provide a background. Although it's one of the BView's graphics parameters, it's not one that any drawing functions refer to.
By default, the view color is white. You can assign a different color to a view by calling BView's SetViewColor() function. If you set the color to B_TRANSPARENT_32_BIT, the Application Server won't erase the view's clipping region before an update. This is appropriate only if the view erases itself by touching every pixel in the clipping region when it draws.
Views draw through a set of primitive functions such as:
The way these functions work depends not only on the values that they're passed--the particular string, bitmap, arc, or ellipse that's to be drawn--but on previously set values in the BView's graphics environment.
Each BView object maintains its own graphics environment for drawing. The view color, coordinate system, and clipping region are fundamental parts of that environment, but not the only parts. It also includes a number of parameters that can be set and reset at will to affect the next image drawn. These parameters are:
(The high and low colors roughly match what other systems call the fore and back, or foreground and background, colors. However, neither color truly represents the color of the foreground or background. The terminology "high" and "low" is meant to keep the sense of two opposing colors and to match how they're defined in a pattern. A pattern bit is turned on for the high color and turned off for the low color. See the SetHighColor() and SetLowColor() functions and the Patterns section below.)
By default, a BView's graphics parameters are set to the following values:
Font | Kate (a 9-point bitmap font, no rotation, 90 degree shear) |
Symbol Set | Macintosh |
Pen position | (0.0, 0.0) |
Pen size | 1.0 coordinate units |
High color | Black (red, green, and blue components all equal to 0) |
Low color | White (red, green, and blue components all equal to 255) |
Drawing mode | Copy mode (B_OP_COPY) |
View color | White (red, green, and blue components all equal to 255) |
Clipping region | The visible region of the view |
Coordinate system | Origin at the left top corner of the bounds rectangle |
However, as the next section, Views and the Server, explains, these values take effect only when the BView is assigned to a window.
The pen is a fiction that encompasses two properties of a view's graphics environment: the current drawing location and the thickness of stroked lines.
The pen location determines where the next image will be drawn--but only if another location isn't explicitly passed to the drawing function. Some drawing functions alter the pen location--as if the pen actually moves as it does the drawing--but usually it's set by calling MovePenBy() or MovePenTo().
The pen that draws lines (through the various Stroke...() functions) has a malleable tip that can be made broader or narrower by calling the SetPenSize() function. The larger the pen size, the thicker the line that it draws.
The pen size is expressed in coordinate units, which must be translated to a particular number of pixels for the display device. This is done by scaling the pen size to a device-specific value and rounding to the closest integer. For example, pen sizes of 2.6 and 3.3 would both translate to 3 pixels on-screen, but to 11 and 14 pixels respectively on a 300 dpi printer.
The size is never rounded to 0; no matter how small the pen may be, the line never disappears. If the pen size is set to 0.0, the line will be as thin as possible--it will be drawn using the fewest possible pixels on the display device. (In other words, it will be rounded to 1 for all devices.)
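The translation of a pen size to a tip width in device pixels follows from the two rules just given: scale by the device resolution, round to the nearest integer, and never let the result drop to zero. The helper name below is hypothetical.

```cpp
#include <cassert>
#include <cmath>

// Translate a pen size in coordinate units to a tip width in device
// pixels: scale (72 dpi on-screen, so the screen case is a plain
// round), round to the nearest integer, and clamp so the line never
// disappears entirely.
int PenSizeToPixels(float penSize, float dpi) {
    int pixels = (int)std::lround(penSize * dpi / 72.0f);
    return pixels < 1 ? 1 : pixels;
}
```

This reproduces the figures quoted in this chapter: 2.6 and 3.3 both give 3 pixels on-screen; the 0.4- and 1.4-unit lines mentioned earlier each get a single pixel column on-screen but two and six columns at 300 dpi; and a 0.0 pen size always yields a one-pixel hairline.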
If the pen size translates to a tip that's broader than one pixel, the line is drawn with the tip centered on the path of the line. Roughly the same number of pixels are colored on both sides of the path.
A later section, Picking Pixels to Stroke and Fill , illustrates how pens of different sizes choose the pixels to be colored.
The high and low colors are specified as rgb_color values--full 32-bit values with separate red, green, and blue color components, plus an alpha component for transparency. Although there may sometimes be limitations on the colors that can be rendered on-screen, there are no restrictions on the colors that can be specified.
The way colors are specified for a bitmap depends on the color space in which they're interpreted. The color space determines the depth of the bitmap data (how many bits of information are stored for each pixel) and its interpretation (whether the data represents shades of gray or true colors, whether it's segmented into color components, what the components are, how they're arranged, and so on). Five possible color spaces are recognized:
B_MONOCHROME_1_BIT | One bit of data per pixel, where 1 is black and 0 is white. |
B_GRAYSCALE_8_BIT | Eight bits of data per pixel, where a value of 255 is black and 0 is white. |
B_COLOR_8_BIT | Eight bits of data per pixel, interpreted as an index into a list of 256 colors. The list is part of the system color map, and is the same for all applications. |
B_RGB_16_BIT | < This color space is currently undefined. > |
B_RGB_32_BIT | Four components of data per pixel--red, green, blue, and alpha--with eight bits per component. A component value of 255 yields the maximum amount of red, green, or blue, and a value of 0 indicates the absence of that color. < The alpha component is currently ignored. It will specify the coverage of the color--how transparent or opaque it is. > |
The components in the B_RGB_32_BIT color space are meshed rather than separated into distinct planes; all four components are specified for the first pixel before the four components for the second pixel, and so on. Unlike an rgb_color, the color components are arranged in reverse order--blue, green, red--followed by alpha. This is the natural order for many display devices.
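The meshed, reversed component order can be shown with a small packing routine. The function name is a hypothetical illustration, not part of the Kit.

```cpp
#include <cassert>
#include <cstdint>

// Pack one pixel in the B_RGB_32_BIT layout described above: the four
// components are stored together per pixel, ordered blue, green, red,
// alpha -- the reverse of the red, green, blue order of an rgb_color.
void PackBGRA(uint8_t red, uint8_t green, uint8_t blue, uint8_t alpha,
              uint8_t out[4]) {
    out[0] = blue;
    out[1] = green;
    out[2] = red;
    out[3] = alpha;
}
```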
The screen can be configured to display colors in either the B_COLOR_8_BIT color space or the B_RGB_32_BIT color space. When it's in the B_COLOR_8_BIT color space, specified rgb_colors are displayed as the closest 8-bit color in the color list. (See the BBitmap class and the system_colors() global function.)
Functions that stroke a line or fill a closed shape don't draw directly in either the high or the low color. Rather they take a pattern, an arrangement of one or both colors that's repeated over the entire surface being drawn.
By combining the low color with the high color, patterns can produce dithered colors that lie somewhere between two hues in the B_COLOR_8_BIT color space. Patterns also permit drawing with less than the solid high color (for intermittent or broken lines, for example) and can take advantage of drawing modes that treat the low color as if it were transparent, as discussed below.
A pattern is defined as an 8-pixel by 8-pixel square. The pattern type is 8 bytes long, with one byte per row and one bit per pixel. Rows are specified from top to bottom and pixels from left to right. Bits marked 1 designate the high color; those marked 0 designate the low color. For example, a pattern of wide diagonal stripes could be defined as follows:
pattern stripes = { 0xc7, 0x8f, 0x1f, 0x3e, 0x7c, 0xf8, 0xf1, 0xe3 };
Patterns repeat themselves across the screen, like tiles that are laid side by side. The pattern defined above looks like this:
The dotted lines in this illustration show the separation of the screen into pixels. The thicker black line outlines one 8-by-8 square that the pattern defines.
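The byte layout can be verified by expanding each row of the pattern into characters: '#' where a bit is 1 (the high color), '.' where it's 0 (the low color). The helper below is a plain C++ illustration; within a row byte, the leftmost pixel is the most significant bit.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Expand one row byte of an 8x8 pattern: rows run top to bottom, one
// byte each; the most significant bit is the leftmost pixel. A 1 bit
// designates the high color ('#'), a 0 bit the low color ('.').
std::string PatternRow(uint8_t rowBits) {
    std::string row;
    for (int bit = 7; bit >= 0; bit--)
        row += ((rowBits >> bit) & 1) ? '#' : '.';
    return row;
}
```

Expanding the stripes pattern defined above (0xc7, 0x8f, 0x1f, ...) row by row produces "##...###", "#...####", "...#####", and so on--the band of low-color pixels marching leftward one pixel per row, forming the diagonal stripes.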
The outline of the shape being filled or the width of the line being stroked determines where the pattern is revealed. It's as if the screen was covered with the pattern just below the surface, and stroking or filling allowed some of it to show through. For example, stroking a one-pixel wide horizontal path in the pattern illustrated above would result in a dotted line, with the dashes (in the high color) slightly longer than the spaces between (in the low color):
When stroking a line or filling a shape, the pattern serves as the source image for the current drawing mode, as explained under Drawing Modes below. The nature of the mode determines how the pattern interacts with the destination image, the image already in place.
The Interface Kit defines three patterns: B_SOLID_HIGH, B_SOLID_LOW, and B_MIXED_COLORS.
B_SOLID_HIGH is the default pattern for all drawing functions. Applications can define as many other patterns as they need.
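The three stock patterns could be written out as byte arrays like the following. These values are illustrative--they match the patterns' documented behavior (B_SOLID_HIGH is all high color, B_SOLID_LOW all low color, B_MIXED_COLORS an alternating checkerboard that dithers the two), but the authoritative definitions live in the Kit's headers:

```cpp
#include <cstdint>

// Illustrative stand-ins for the Kit's three stock patterns.
const uint8_t solidHigh[8]   = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
const uint8_t solidLow[8]    = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
// Alternating pixels, offset on alternating rows: a checkerboard.
const uint8_t mixedColors[8] = { 0xaa, 0x55, 0xaa, 0x55, 0xaa, 0x55, 0xaa, 0x55 };
```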
When a BView draws, it in effect transfers an image to a target location somewhere in the view rectangle. The drawing mode determines how the image being transferred interacts with the image already in place at that location. The image being transferred is known as the source image; it might be a bitmap or a pattern of some kind. The image already in place is known as the destination image.
In the simplest kind of drawing, the source image is painted on top of the destination; the source replaces the destination. However, there are other possibilities. There are nine different drawing modes--nine distinct ways of combining the source and destination images. The modes are designated by drawing_mode constants that can be passed to SetDrawingMode():
B_OP_COPY    B_OP_MIN      B_OP_ADD
B_OP_OVER    B_OP_MAX      B_OP_SUBTRACT
B_OP_ERASE   B_OP_INVERT   B_OP_BLEND
B_OP_COPY is the default mode and the simplest. It transfers the source image to the destination, replacing whatever was there before. The destination is ignored.
In the other modes, however, some of the destination might be preserved, or the source and destination might be combined to form a result that's different from either of them. For these modes, it's convenient to think of the source image as an image that exists somewhere independent of the destination location, even though it's not actually visible. It's the image that would be rendered at the destination in B_OP_COPY mode.
The modes work for all BView drawing functions--including those that stroke lines and fill shapes, those that draw characters, and those that image bitmaps. The way they work depends foremost on the nature of the source image--whether it's a pattern or a bitmap. For the Fill...() and Stroke...() functions, the source image is a pattern that has the same shape as the area being filled or the area the pen touches as it strokes a line. For DrawBitmap(), the source image is a rectangular bitmap.
The way the drawing modes work also depends on the color space of the source image and the color space of the destination. The following discussion concentrates on drawing where the source and destination both contain colors. This is the most common case, and also the one that's most general.
When applied to colors, the nine drawing modes fall naturally into four groups:
The following paragraphs describe each of these groups in turn.
In B_OP_COPY mode, the source image replaces the destination. This is the default drawing mode and the one most commonly used. Because this mode doesn't have to test for particular color values in the source image, look at the colors in the destination, or compute colors in the result, it's also the fastest of the modes.
If the source image contains transparent pixels, their transparency will be retained in the result; the transparent value is copied just like any other color. However, the appearance of a transparent pixel when shown on-screen is indeterminate. If a source image has transparent portions, it's best to transfer it to the screen in B_OP_OVER or another mode. In all modes other than B_OP_COPY, a transparent pixel in a source bitmap preserves the color of the corresponding destination pixel.
Three drawing modes--B_OP_OVER, B_OP_ERASE, and B_OP_INVERT--are designed specifically to make use of transparency in the source image; they're able to preserve some of the destination image. In these modes (and only these modes) the low color in a source pattern acts just like transparency in a source bitmap.
By masking out the unwanted parts of a rectangular bitmap with transparent pixels, this mode can place an irregularly shaped source image on top of a background image. Transparency in the source foreground lets the destination background show through. The versatility of B_OP_OVER makes it the second most commonly used mode, after B_OP_COPY .
Although B_OP_ERASE can be used for selective erasing, it's simpler to erase by filling an area with the B_SOLID_LOW pattern in B_OP_COPY mode.
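The per-pixel decisions that these three transparency-aware modes make can be sketched as follows. This is an editor's illustration, not Server code; the Pixel type and the TRANSPARENT placeholder value are invented for the example:

```cpp
#include <cstdint>

// A hypothetical pixel type; TRANSPARENT stands in for the Kit's
// transparent-pixel value (the placeholder below is not the Kit's).
typedef uint32_t Pixel;
const Pixel TRANSPARENT = 0xffffffff;

// B_OP_OVER: opaque source pixels replace the destination;
// transparent ones let the destination show through.
Pixel opOver(Pixel src, Pixel dst)
{
    return (src == TRANSPARENT) ? dst : src;
}

// B_OP_ERASE: where the source is opaque, the destination is
// replaced by the current low color.
Pixel opErase(Pixel src, Pixel dst, Pixel low)
{
    return (src == TRANSPARENT) ? dst : low;
}

// B_OP_INVERT: where the source is opaque, the destination color
// is inverted; the source's own color is never used.
Pixel opInvert(Pixel src, Pixel dst)
{
    return (src == TRANSPARENT) ? dst : ~dst;
}
```

In all three functions, a transparent source pixel preserves the destination--which is exactly the property the text above describes.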
These three modes also work for monochrome images. If the source image is monochrome, the distinction between source bitmaps and source patterns breaks down. Two rules apply:
Three drawing modes--B_OP_ADD, B_OP_SUBTRACT, and B_OP_BLEND--combine the source and destination images, pixel by pixel, and color component by color component. As in most of the other modes, transparency in a source bitmap preserves the destination image in the result. Elsewhere, the result is a combination of the source and destination. The high and low colors of a source pattern aren't treated in any special way; they're handled just like other colors.
In B_OP_ADD mode, for example, adding a uniform gray to each pixel in the destination brightens the whole destination image by a constant amount.
In B_OP_SUBTRACT mode, by contrast, subtracting a uniform amount from the red component of each pixel in the destination makes the whole image less red.
These modes work only for color images, not for monochrome ones. If the source or destination is specified in the B_COLOR_8_BIT color space, the color will be expanded to a full B_RGB_32_BIT value to compute the result; the result is then contracted to the closest color in the B_COLOR_8_BIT color space.
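The component arithmetic these three modes perform can be sketched as below. This is an illustrative reading of the descriptions above, assuming each result is clamped to the 0-255 component range; transparency handling is not modeled here:

```cpp
#include <algorithm>

// One color component (red, green, or blue), in the range 0-255.
typedef int Comp;

// B_OP_ADD: source and destination components are summed,
// clamping at white (255).
Comp opAdd(Comp src, Comp dst)      { return std::min(src + dst, 255); }

// B_OP_SUBTRACT: the source component is subtracted from the
// destination, clamping at black (0).
Comp opSubtract(Comp src, Comp dst) { return std::max(dst - src, 0); }

// B_OP_BLEND: the result is the average of the two components.
Comp opBlend(Comp src, Comp dst)    { return (src + dst) / 2; }
```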
Two drawing modes--B_OP_MAX and B_OP_MIN--compare each pixel in the source image to the corresponding pixel in the destination image and select one to keep in the result. If the source pixel is transparent, both modes select the destination pixel. Otherwise, B_OP_MIN selects the darker of the two colors and B_OP_MAX selects the brighter of the two. If the source image is a uniform shade of gray, for example, B_OP_MAX would substitute that shade for every pixel in the destination image that was darker than the gray.
Like the blending modes, B_OP_MIN and B_OP_MAX work only for color images.
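The selection these two modes make can be sketched as follows. The brightness measure here is a simple illustrative assumption (the Server's actual measure may differ), and transparent source pixels, which always keep the destination, are not modeled:

```cpp
struct Color { int red, green, blue; };

// A simple brightness measure used only for comparison;
// an editor's assumption, not the Server's algorithm.
int brightness(Color c) { return c.red + c.green + c.blue; }

// B_OP_MIN keeps the darker of the two pixels.
Color opMin(Color src, Color dst)
{
    return (brightness(src) < brightness(dst)) ? src : dst;
}

// B_OP_MAX keeps the brighter of the two pixels.
Color opMax(Color src, Color dst)
{
    return (brightness(src) > brightness(dst)) ? src : dst;
}
```

With a uniform gray source, opMax() substitutes the gray for every darker destination pixel--the example given in the text above.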
Windows lead a dual life--as on-screen entities provided by the Application Server and as BWindow objects in the application. BViews have a similar dual existence--each BView object has a shadow counterpart in the Server. The Server knows the view's location, its place in the window's hierarchy, its visible area, and the current state of its graphics parameters. Because it has this information, the Server can more efficiently associate a user action with a particular view and interpret the BView's drawing instructions.
BWindows become known to the Application Server when they're constructed; creating a BWindow object causes the Server to produce the window that the user will eventually see on-screen. A BView, on the other hand, has no effect on the Server when it's constructed. It becomes known to the Server only when it's attached to a BWindow. The Server must look through the application's windows to see what views it has.
A BView that's not attached to a window therefore lacks a counterpart in the Server. This restricts what some functions can do. Four groups of functions are affected:
Nevertheless, it's possible to assign a value to a graphics parameter before the BView is attached to a window. The value is simply cached until the view becomes part of a window's view hierarchy. It's then set as the current value for the parameter. Values set while the BView belongs to a window change the current value, but not the cached value. Therefore, if the BView is removed from the view hierarchy and reinstated as part of another hierarchy, the last cached value will be reestablished as the current value.
Functions that return graphics parameters report the current value while the BView is attached to a window, and the cached value when it's unattached.
Because of these restrictions, you may find it difficult to complete the initialization of a BView at the time it's constructed. Instead, you may need to wait until the BView receives an AttachedToWindow() notification informing it that it has been added to a window's view hierarchy. This function is called for each view that's added to a window, beginning with the root view being attached, followed by each of its children, and so on down the hierarchy. After all views have been notified with an AttachedToWindow() function call, they each get an AllAttached() notification, but in the reverse order. A parent view that must adjust itself to calculations made by a child view when it's attached to a window can wait until AllAttached() to do the work.
These two function calls are matched by another pair--DetachedFromWindow() and AllDetached() --which notify BViews that they're about to be removed from the window.
The Application Server sends a message to a BWindow whenever any of the views within the window need to be updated. The BWindow then calls the Draw() function of each out-of-date BView so that it can redraw the contents of its on-screen display.
Update messages can arrive at any time. A BWindow receives one whenever:
Update messages take precedence over other kinds of messages. To keep the on-screen display as closely synchronized with event handling as possible, the window acts on update messages as soon as they arrive. They don't need to wait their turn in the message queue.
(Update messages do their work quietly and behind the scenes. You won't find them in the BWindow's message queue, they aren't handled by BWindow's DispatchMessage() function, and they aren't returned by BLooper's CurrentMessage().)
When a user action or a BView function alters a view in a window-- for example, when a view is resized or its contents are scrolled--the Application Server knows about it. It makes sure that an update message is sent to the window so the view can be redrawn.
However, if code that's specific to your application alters a view, you'll need to inform the Server that the view needs updating. This is done by calling the Invalidate() function. For example, if you write a function that changes the number of elements a view displays, you might invalidate the view after making the change, as follows:
void MyView::SetNumElements(long count)
{
    if ( numElements == count )
        return;
    numElements = count;
    Invalidate();
}
Invalidate() ensures that the view's Draw() function--which presumably looks at the new value of the numElements data member--will be called automatically.
At times, the update mechanism may be too slow for your application. Update messages arrive just like other messages sent to a window thread, including the interface messages that report events. Although they take precedence over other messages, update messages must wait their turn. The window thread can respond to only one message at a time; it will get the update message only after it finishes with the current one.
Therefore, if your application alters a view and calls Invalidate() while responding to an interface message, the view won't be updated until the response is finished and the window thread is free to turn to the next message. Usually, this is soon enough. But if it's not, if the response to the interface message includes some time-consuming operations, the application can request an immediate update by calling BWindow's UpdateIfNeeded() function.
Just before sending an update message, the Application Server prepares the clipping region of each BView that is about to draw by erasing it to the view background color. Note that only the clipping region is erased, not the entire view, and perhaps not the entire area where the BView will, in fact, draw.
The Server forgoes this step only if the BView's background color is set to the magical B_TRANSPARENT_32_BIT color.
While drawing, a BView may set and reset its graphics parameters any number of times --for example, the pen position and high color might be repeatedly reset so that whatever is drawn next is in the right place and has the right color. These settings are temporary. When the update is over, all graphics parameters are reset to their initial values.
If, for example, Draw() sets the high color to a shade of light blue, as shown below,
SetHighColor(152, 203, 255);
it doesn't mean that the high color will be blue when Draw() is called next. If this line of code is executed during an update, light blue would remain the high color only until the update ends or SetHighColor() is called again, whichever comes first. When the update ends, the previous graphics state, including the previous high color, is restored.
Although you can change most graphics parameters during an update-- move the pen around, reset the font, change the high color, and so on--the coordinate system can't be touched; a view can't be scrolled while it's being updated. Since scrolling causes a view to be updated, scrolling during an update would, in effect, be an attempt to nest one update in another, something that can't logically be done (since updates happen sequentially through messages). If the view's coordinate system were to change, it would alter the current clipping region and confuse the update mechanism.
Graphics parameters that are set outside the context of an update are not limited; they remain in effect until they're explicitly changed. For example, if application code calls Draw(), perhaps in response to an interface message, the parameter values that Draw() last sets would persist even after the function returns. They would become the default values for the view and would be assumed the next time Draw() is called.
Default graphics parameters are typically set as part of initializing the BView once it's attached to a window--in an AttachedToWindow() function. If you want a Draw() function to assume the values set by AttachedToWindow(), it's important to restore those values after any drawing the BView does that's not the result of an update. For example, if a BView invokes SetHighColor() while drawing in response to an interface message, it will need to restore the default high color when done.
If Draw() is called outside of an update, it can't assume that the clipping region will have been erased to the view color, nor can it assume that default graphics parameters will be restored when it's finished.
This section discusses how the various BView Stroke...() and Fill...() functions pick specific pixels to color. Pixels are chosen after the pen size and all coordinate values have been translated to device-specific units. Device-specific values measure distances by counting pixels; one unit equals one pixel on the device.
A device-specific value can be derived from a coordinate value using a formula that takes the size of a coordinate unit and the resolution of the device into account. For example:
device_value = coordinate_value * (dpi / 72)
dpi is the resolution of the device in dots (pixels) per inch, 72 is the number of coordinate units in an inch, and device_value is rounded to the closest integer.
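In C++ the formula amounts to the following (the function name is invented for the example):

```cpp
#include <cmath>

// Converts a coordinate value to a device-specific pixel value.
// One coordinate unit is 1/72 inch, so scale by dpi/72 and round
// to the nearest integer.
long deviceValue(float coordinateValue, float dpi)
{
    return (long)floor(coordinateValue * (dpi / 72) + 0.5);
}
```

On the screen, where one coordinate unit corresponds to one pixel, the conversion is an identity; on a 300-dpi printer, a 10-unit distance becomes 42 pixels.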
To describe where lines and shapes fall on the pixel grid, this section mostly talks about pixel units rather than coordinate units. The accompanying illustrations magnify the grid so that pixel boundaries are clear. As a consequence, they can show only very short lines and small shapes. By blowing up the image, they exaggerate the phenomena they illustrate.
The thinnest possible line is drawn when the pen size translates to 1 pixel on the device. Setting the size to 0.0 coordinate units guarantees a one-pixel pen on all devices.
A one-pixel pen follows the path of the line it strokes and makes the line exactly one pixel
thick at all points. If the line is perfectly horizontal or vertical, it touches just one row or
one column of pixels, as illustrated below. (The grid of broken lines shows the separation
of the display surface into pixels.)
Only pixels that the line path actually passes through are colored to display the line. If a path begins or ends on a pixel boundary, as it does for examples (a) and (b) above, the pixels at the boundary aren't colored unless the path crosses into the pixel. The pen touches the fewest possible number of pixels.
A line path that doesn't enter any pixels, but lies entirely on the boundaries between pixels, colors the pixel row beneath it or the pixel column to its right, as illustrated by (f) and (g) above. A path that reduces to a single point lying on the corner of four pixels, as does (h) above, colors the pixel at its lower right. < However, currently, it's indeterminate which column or row of adjacent pixels would be used to display vertical and horizontal lines like (f) and (g) above. Point (h) would not be visible. >
One-pixel lines that aren't exactly vertical or horizontal touch just one pixel per row or
one per column. If the line is more vertical than horizontal, only one pixel in each row is
used to color the line. If the line is more horizontal than vertical, only one pixel in each
column is used. Some illustrations of slanted one-pixel thick lines are given below:
Although a one-pixel pen touches only pixels that lie on the path it strokes, it won't touch
every pixel that the path crosses if that would mean making the line thicker than specified.
When the path cuts through two pixels in a column or row, but only one of those pixels can
be colored, the one that contains more of the path (the one that contains the midpoint of
the segment cut by the column or row) is chosen. This is illustrated in the close-up below,
which shows where a mostly vertical line crosses one row of pixels:
However, before a choice is made as to which pixel in a row or column to color, the line
path is normalized for the device. For example, if a line is defined by two endpoints, it's
first determined which pixels correspond to those endpoints. The line path is then treated
as if it connected the centers of those pixels. This may alter which pixels get colored, as is
illustrated below. In this illustration, the solid black line is the line path as originally
specified and the broken line is its normalized version:
This normalization is nothing more than the natural consequence of the rounding that occurs when coordinate values are translated to device-specific pixel values.
Although all the diagrams above show straight lines, the principles they illustrate apply equally to curved line paths. A curved path can be treated as if it were made up of a large number of short straight segments.
The following illustration shows how some rectangles, represented by the solid black line,
would be filled with a solid color.
A rectangle includes every pixel that it encloses and every pixel that its sides pass through. However, as rectangle (q) illustrates, it doesn't include pixels that its sides merely touch at the boundary.
If the pixel grid in this illustration represents the screen, rectangle (q) would have left, top, right, and bottom coordinates with fractional values of .5. Rectangle (n), on the other hand, would have coordinates without any fractional parts. Nonfractional coordinates lie at the center of screen pixels.
Rectangle (n), in fact, is the normalized version of all four of the illustrated rectangles. It shows how the sides of the four rectangles would be translated to pixel values. Note that for a rectangle like (q), with edges that fall on pixel boundaries, normalization means rounding the left and top sides upward and rounding the right and bottom sides downward. This follows from the principle that the fewest possible number of pixels should be colored.
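One way to express this normalization in code is sketched below. The formulas are an editor's reading of the rules just stated--nonfractional coordinates lie at pixel centers, and sides on pixel boundaries round left/top upward and right/bottom downward--not the Server's actual algorithm:

```cpp
#include <cmath>

// Normalizes one rectangle side to a pixel index, assuming screen
// resolution (one coordinate unit = one pixel). A left or top side
// that falls on a pixel boundary rounds upward; a right or bottom
// side rounds downward.
long leftOrTopPixel(float coord)     { return (long)floor(coord + 0.5); }
long rightOrBottomPixel(float coord) { return (long)ceil(coord - 0.5); }

// Number of pixel columns a filled rectangle covers.
long pixelWidth(float left, float right)
{
    return rightOrBottomPixel(right) - leftOrTopPixel(left) + 1;
}
```

Under these formulas, sides at 0.5 and 6.5 (like rectangle (q)) and sides at 1.0 and 6.0 (like rectangle (n)) both yield a width of six pixels.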
Although the four rectangles above differ in size and shape, when filled they all cover a
6 × 4 pixel area. You can't predict this area from the dimensions of the rectangle.
Because the coordinate space is continuous and x and y values can be located anywhere,
rectangles with different dimensions might have the same rendered size, as shown above,
and rectangles with the same dimensions might have different rendered sizes, as shown
below:
If a one-pixel pen strokes a rectangular path, it touches only pixels that would be included
if the rectangle were filled. The illustration below shows the same rectangles that were
presented above, but strokes them rather than fills them:
Each of the rectangles still covers a 6 × 4 pixel area. Note that even though the path of rectangle (q) lies entirely on pixel boundaries, pixels below it and to its right are not touched by the pen. The pen touches only pixels that lie within the rectangle.
If a rectangle collapses to a straight line or to a single point, it no longer contains any area. Stroking or filling such a rectangle is equivalent to stroking the line path with a one-pixel pen, as was discussed in the previous section.
The figure below shows a polygon as it would be stroked by a one-pixel pen and as it
would be filled:
The same rules apply when stroking each segment of a polygon as would apply if that segment were an independent line. Therefore, the pen may not touch every pixel the segment passes through.
When the polygon is filled, no additional pixels around its border are colored. As is the case for a rectangle, the displayed shape of a filled polygon is identical to the shape of the polygon when stroked with a one-pixel pen. The pen doesn't touch any pixels when stroking the polygon that aren't colored when the polygon is filled. Conversely, filling doesn't color any pixels at the border of the polygon that aren't touched by a one-pixel pen.
A pen that's thicker than one pixel touches the same pixels that a one-pixel pen does, but it
adds extra columns and rows adjacent to the line path. A thick pen tip is, in effect, a linear
brush that's held perpendicular to the line path and kept centered on the line. The
illustration below shows two short lines, each five pixels thick:
The thickness of a vertical or horizontal line can be measured in an exact number of pixels. When the line is slanted, as it is for (t) above, the stroking algorithm tries to make the line visually approximate the thickness of a vertical or horizontal line. In this way, lines retain their shape even when rotated.
When a rectangle is stroked with a thick pen, the corners of the rectangle are filled in, as
shown in the example below:
The BWindow and BView classes together define a structure for responding to user actions on the keyboard and mouse. These actions generate interface messages that are delivered to BWindow objects. The BWindow distributes responsibility for the messages it receives to other objects, typically BViews.
This section describes the messages that report user actions, and the way that BWindow and BView objects are structured to respond to them.
Twenty interface messages are currently defined. Two of them command the window to do something in particular:
All other interface messages report events--something that happened, rather than something that the application must do. In most cases, the message merely reports what the user did on the keyboard or mouse. However, in some cases, the event may reflect the way the Application Server interpreted or handled a user action. The Server might respond directly to the user and pass along a message that indicates what it did--moved a window or changed a value, for example. In a few cases, the event may even reflect what the application thinks the user intended--that is, an application might interpret one or more generic user actions as a more specific event.
The following five messages report atomic user actions on the keyboard and mouse:
The five messages above are all directed at particular views--the view where the cursor is located or where typed input appears. Three others also concern views:
A few messages concern events that affect the window itself:
A few messages report changes to the on-screen environment for a window:
Two messages are produced by the save panel:
Finally, there's one message that doesn't derive from a user action:
An application doesn't have to wait for a message to discover what the user is doing on the keyboard and mouse. Two BView functions, GetKeys() and GetMouse(), can provide an immediate check on the state of these devices.
Interface messages are generated and delivered to the application as the user acts. The Application Server determines which window an action affects and notifies the appropriate window thread. Messages for keyboard events are delivered to the current active window; messages announcing mouse events are sent to the window where the cursor is located.
However, the message is just an intermediary. As soon as it arrives, the BWindow dispatches it to initiate action within the window thread. Typically, one of the BViews associated with the window is asked to respond to the message--usually the BView that drew the image that elicited the user action. But some messages are handled by the BWindow itself.
Interface messages are dispatched by calling a virtual function that's matched to the message. If the message delivers an instruction, the function is named for the action that should be taken. For example, a zoom instruction is dispatched by calling the Zoom() function. If the message reports an event, the function is named for the event. For example, the BView where a mouse-down event occurs is notified with a MouseDown() function call. When the user clicks the close box of a window, generating a quit-requested event, the BWindow's QuitRequested() function is called.
The chart below lists the virtual functions that are called to initiate the application's response to interface messages, and the base classes where the functions are declared. Each application can implement these message-specific functions in a way that's appropriate to its purposes.
Message type | Virtual function | Class |
---|---|---|
B_ZOOM | Zoom() | BWindow |
B_MINIMIZE | Minimize() | BWindow |
B_KEY_DOWN | KeyDown() | BView |
B_KEY_UP | none | |
B_MOUSE_DOWN | MouseDown() | BView |
B_MOUSE_UP | none | |
B_MOUSE_MOVED | MouseMoved() | BView |
B_VIEW_MOVED | FrameMoved() | BView |
B_VIEW_RESIZED | FrameResized() | BView |
B_VALUE_CHANGED | ValueChanged() | BScrollBar |
B_WINDOW_ACTIVATED | WindowActivated() | BWindow and BView |
B_QUIT_REQUESTED | QuitRequested() | BLooper |
B_WINDOW_MOVED | FrameMoved() | BWindow |
B_WINDOW_RESIZED | FrameResized() | BWindow |
B_SCREEN_CHANGED | ScreenChanged() | BWindow |
B_WORKSPACE_ACTIVATED | WorkspaceActivated() | BWindow |
B_WORKSPACES_CHANGED | WorkspacesChanged() | BWindow |
B_SAVE_REQUESTED | SaveRequested() | BWindow |
B_PANEL_CLOSED | SavePanelClosed() | BWindow |
B_PULSE | Pulse() | BView |
< B_KEY_UP messages are currently not produced. > B_MOUSE_UP messages are produced, but they aren't dispatched by calling a virtual function. A BView can determine when a mouse button goes up by calling GetMouse() from within its MouseDown() function. As it reports information about the location of the cursor and the state of the mouse buttons, GetMouse() removes mouse messages from the BWindow's message queue, so the same information won't be reported twice.
A BWindow reinterprets a B_QUIT_REQUESTED message, originally defined for the BLooper class in the Application Kit, to mean a user request to close the window. However, it doesn't redeclare the QuitRequested() hook function that it inherits from BLooper.
Notice, from the chart above, that the BWindow class declares the functions that handle instructions and events directed at the window itself. FrameMoved() is called when the user moves the window, FrameResized() when the user resizes it, WindowActivated() when it becomes, or ceases to be, the active window, Zoom() when it should zoom larger, and so on.
Although the BWindow handles some interface messages, the most common ones --those reporting direct user actions on the keyboard and mouse--are handled by BViews. When the BWindow receives a keyboard or mouse message, it must decide which view is responsible.
This decision is relatively easy for messages reporting mouse events. The cursor points to the affected view. For example, when the user presses a mouse button, the BWindow calls the MouseDown() virtual function of the view under the cursor. When the user moves the mouse, it calls the MouseMoved() function of each view the cursor travels through.
However, there's no cursor attached to the keyboard, so the BWindow object must keep track of the view that's responsible for messages reporting key-down events. That view is known as the focus view.
The focus view is whatever view happens to be displaying the current selection (possibly an insertion point) within the window, or whatever check box, button, or other gadget is currently marked to show that it can be operated from the keyboard.
The focus view is expected to respond to the user's keyboard actions when the window is the active window. When the user presses a key on the keyboard, the BWindow calls the focus view's KeyDown() function. If the focus view displays editable data, it's also expected to handle commands that target the current selection, such as commands to cut, copy, or paste data.
The focus typically doesn't stay on one view all the time; it shifts from view to view. It may change as the user changes the current selection in the window--from text field to text field, for example. Or it changes when the user navigates from one view to another by pressing the Tab key. Only one view in the window can be in focus at a time.
Views put themselves in focus when they're selected by a user action of some kind. For example, when a BView's MouseDown() function is called, notifying it that the user has selected the view, it can grab the focus by calling MakeFocus(). When a BView makes itself the focus view, the previous focus view is notified that it has lost that status.
A view should become the focus view if:
A view should highlight the current selection only while it's in focus.
BViews make themselves the focus view (with the MakeFocus() function), but BWindows report which view is currently in focus (with the CurrentFocus() function).
The focus view gets most keyboard messages, but not all. Three kinds of B_KEY_DOWN messages are conscripted for special tasks:
In all other cases, the BWindow assigns the message to the current focus view.
The BMessage objects that convey interface messages typically contain various kinds of data describing the events they report or clarifying the instructions they give. In most cases, the message contains more information than is passed to the function that starts the application's response. For example, a MouseDown() function is passed the point where the cursor was located when the user pressed the mouse button. But a B_MOUSE_DOWN BMessage also includes information about when the event occurred, what modifier keys the user was holding down at the time, which mouse button was pressed, whether the event counts as a solitary mouse-down, the second event of a double-click, or the third of a triple-click, and so on.
A MouseDown() function can get this information by taking it directly from the BMessage. The BMessage that the window thread is currently responding to can be obtained by calling the CurrentMessage() function, which the BWindow inherits from BLooper. For example, a MouseDown() function might check whether the event is a single-click or the second of a double-click as follows:
void MyView::MouseDown(BPoint point)
{
    long num = Window()->CurrentMessage()->FindLong("clicks");
    if ( num == 1 ) {
        . . .
    }
    else if ( num == 2 ) {
        . . .
    }
    . . .
}
The Message Protocols appendix lists the contents of all interface messages.
Most information about what the user is doing on the keyboard comes to applications by way of messages reporting key-down events. The application can usually determine what the user's intent was in pressing a key by looking at the character recorded in the message. But, as discussed under B_KEY_DOWN of the Message Protocols appendix, the message carries other keyboard information in addition to the character --the key the user pressed, the modifier states that were in effect at the time, and the current state of all keys on the keyboard.
Some of this information can be obtained in the absence of key-down messages:
This section discusses in detail the kinds of information that you can get about the keyboard through interface messages and these functions.
To talk about the keys on the keyboard, it's necessary first to have a standard way of identifying them. For this purpose, each key is arbitrarily assigned a numerical code.
The illustrations on the next two pages show the key identifiers for a typical keyboard. The codes for the main keyboard are shown on page 49. This diagram shows a standard 101-key keyboard and an alternate version of the bottom row of keys--one that adds a Menu key and left and right Command keys.
The codes for the numerical keypad and for the keys between it and the main keyboard are shown on page 50.
Different keyboards locate keys in slightly different positions. The function keys may be to the left of the main keyboard, for example, rather than along the top. The backslash key (0x33) shows up in various places--sometimes above the Enter key, sometimes next to Shift, and sometimes in the top row (as shown here). No matter where these keys are located, they have the codes indicated in the illustrations.
The BMessage that reports a key-down event contains an entry named "key" for the code of the key that was pressed.
Keys on the keyboard can be distinguished by the way they behave and by the kinds of information they provide. A principal distinction is between character keys and modifier keys:
If a key doesn't fall into one of these categories or the other, there's nothing for it to do; it has no role to play in the interface. For most keys, the categories are mutually exclusive. Modifier keys are typically not mapped to characters, and character keys don't set modifier states. However, the Scroll Lock key is an exception. It both sets a modifier state and generates a character.
Keys can be distinguished on two other grounds as well:
All keys are repeating keys except for Pause, Break, and the three that set locks (Caps Lock, Num Lock, and Scroll Lock). Even modifier keys like Shift and Control would repeat if they were mapped to characters (but, since they're not, they don't produce any key-down events at all).
Dead keys are dead only when the Option key is held down. They're most appropriate for situations where the user can imagine a character being composed of two distinguishable parts--such as 'a' and 'e' combining to form 'æ'.
The system permits up to five dead keys. By default, they're reserved for combining diacritical marks with other characters. The diacritical marks are the acute and grave accents, dieresis, circumflex, and tilde.
There's a system key map that determines the role that each key plays--whether it's a character key or a modifier key, which modifier states it sets, which characters it produces, whether it's dead or not, how it combines with other keys, and so on. The map is shared by all applications.
Users can modify the key map with the Keyboard utility. Applications can look at it (and perhaps modify it) by calling the system_key_map() global function. See that function on page 327 for details on the structure of the map. The discussion here assumes the default key map that comes with the computer.
The role of a modifier key is to set a temporary, modal state. There are eight modifier states--eight different kinds of modifier key--defined functionally. Three of them affect the character that's reported in a key-down event:
Two modifier keys permit users to give the application instructions from the keyboard:
Three modifiers toggle in and out of locked states:
There are two things to note about these eight modifier states. First, since applications can read the modifiers directly from the messages that report key-down events and obtain them at other times by calling the modifiers() and GetKeys() functions, they are free to interpret the modifier states in any way they desire. They're not tied to the narrow interpretation of, say, the Control key given above. Control, Option, and Shift, for example, often modify the meaning of a mouse event or are used to set other temporary modes of behavior.
Second, the set of modifier states listed above doesn't quite match the keys that are marked on a typical keyboard. A standard 101-key keyboard has left and right "Alt(ernate)" keys, but lacks those labeled "Command," "Option," or "Menu."
The key map must, therefore, bend the standard keyboard to the required modifier states. The default key map does this in three ways:
The illustration below shows the modifier keys on the main keyboard, with labels that match their functional roles. Users can, of course, remap these keys with the Keyboard utility. Applications can remap them by calling set_modifier_key() or system_key_map().
Current modifier states are reported in a mask that can be tested against these constants:
B_SHIFT_KEY      B_COMMAND_KEY   B_CAPS_LOCK
B_CONTROL_KEY    B_MENU_KEY      B_NUM_LOCK
B_OPTION_KEY     B_SCROLL_LOCK
The ..._KEY modifiers are set if the user is holding the key down. The ..._LOCK modifiers are set only if the lock is on--regardless of whether the key that sets the lock happens to be up or down at the time.
If it's important to know which physical key the user is holding down, the one on the right or the one on the left, the mask can be more specifically tested against these constants:
B_LEFT_SHIFT_KEY     B_RIGHT_SHIFT_KEY
B_LEFT_CONTROL_KEY   B_RIGHT_CONTROL_KEY
B_LEFT_OPTION_KEY    B_RIGHT_OPTION_KEY
B_LEFT_COMMAND_KEY   B_RIGHT_COMMAND_KEY
If no keyboard locks are on and the user isn't holding a modifier key down, the modifiers mask will be 0.
The modifiers mask is returned by the modifiers() function and, along with other keyboard information, by BView's GetKeys(). It's also included as a "modifiers" entry in every BMessage that reports a keyboard or mouse event.
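As a minimal, self-contained sketch of testing such a mask (the bit values below are stand-ins for illustration; real code would use the constants defined in the Interface Kit headers):

```cpp
#include <cassert>

// Stand-in bit values for illustration only; an application would use
// the B_SHIFT_KEY, B_LEFT_SHIFT_KEY, and B_RIGHT_SHIFT_KEY constants
// defined by the Interface Kit, not these assumed values.
const unsigned long kShiftKey      = 0x0001;
const unsigned long kLeftShiftKey  = 0x0002;
const unsigned long kRightShiftKey = 0x0004;

// True if the left Shift key--and only the left one--is down.
// A mask reporting a left Shift press carries both the generic
// Shift bit and the left-specific bit.
bool OnlyLeftShiftDown(unsigned long modifiers)
{
    return (modifiers & kShiftKey)
        && (modifiers & kLeftShiftKey)
        && !(modifiers & kRightShiftKey);
}
```

The same pattern applies to any of the left/right-specific constants: test the generic bit first if you don't care which side was pressed, and the side-specific bit only when the distinction matters.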
Most keys are mapped to more than one character. The precise character that the key produces depends on which modifier keys are being held down and which lock states the keyboard is in at the time the key is pressed.
The mapping follows some fixed rules, including these:
The default key map also follows the conventional rules for Caps Lock and Control:
However, if the lock doesn't affect the character, Shift plus the lock is the same as Shift alone. For example, Caps Lock-7 produces '7' (the lock is ignored) and Shift-7 produces '&' (Shift has an effect), so Shift-Caps Lock-7 also produces '&' (only Shift has an effect).
When Control is used with a key that doesn't produce an alphabetic character, the character that's reported is the same as if no modifiers were on. For example, Control-7 produces a '7'.
The Interface Kit defines constants for characters that aren't normally represented by a visible symbol. This includes the usual space and backspace characters, but most invisible characters are produced by the function keys and the navigation keys located between the main keyboard and the numeric keypad. The character values associated with these keys are more or less arbitrary, so you should always use the constant in your code rather than the actual character value. Many of these characters are also produced by alphabetic keys when a Control key is held down.
The table below lists all the character constants defined in the Kit and the keys they're associated with.
Key label | Key code | Character reported |
---|---|---|
Backspace | 0x1e | B_BACKSPACE |
Tab | 0x26 | B_TAB |
Enter | 0x47 | B_ENTER |
(space bar) | 0x5e | B_SPACE |
Escape | 0x01 | B_ESCAPE |
F1 - F12 | 0x02 through 0x0d | B_FUNCTION_KEY |
Print Screen | 0x0e | B_FUNCTION_KEY |
Scroll Lock | 0x0f | B_FUNCTION_KEY |
Pause | 0x10 | B_FUNCTION_KEY |
System Request | 0x7e | 0xc8 |
Break | 0x7f | 0xca |
Insert | 0x1f | B_INSERT |
Home | 0x20 | B_HOME |
Page Up | 0x21 | B_PAGE_UP |
Delete | 0x34 | B_DELETE |
End | 0x35 | B_END |
Page Down | 0x36 | B_PAGE_DOWN |
(up arrow) | 0x57 | B_UP_ARROW |
(left arrow) | 0x61 | B_LEFT_ARROW |
(down arrow) | 0x62 | B_DOWN_ARROW |
(right arrow) | 0x63 | B_RIGHT_ARROW |
Several keys are mapped to the B_FUNCTION_KEY character. An application can determine which function key was pressed to produce the character by testing the key code against these constants:
B_F1_KEY   B_F6_KEY    B_F11_KEY
B_F2_KEY   B_F7_KEY    B_F12_KEY
B_F3_KEY   B_F8_KEY    B_PRINT_KEY (the "Print Screen" key)
B_F4_KEY   B_F9_KEY    B_SCROLL_KEY (the "Scroll Lock" key)
B_F5_KEY   B_F10_KEY   B_PAUSE_KEY
Note that key 0x30 (P) is also mapped to B_FUNCTION_KEY when the Control key is held down.
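Because F1 through F12 occupy the consecutive key codes 0x02 through 0x0d (see the table above), the function-key number can also be recovered from the key code by simple arithmetic. The helper below is illustrative only--it is not a function defined by the Interface Kit:

```cpp
#include <cassert>

// Returns the function-key number (1 through 12) for key codes
// 0x02 through 0x0d, or 0 for any other key code. Relies on the
// F keys having consecutive codes, as shown in the table above.
// Illustrative helper only--not part of the Interface Kit.
int FunctionKeyNumber(int keyCode)
{
    if (keyCode >= 0x02 && keyCode <= 0x0d)
        return keyCode - 0x01;
    return 0;
}
```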
You can look at the state of all keys on the keyboard at a given moment in time. This information is captured and reported in two ways:
In both cases, the bitfield is an array of 16 bytes,
uchar states[16];
with one bit standing for each key on the keyboard. Bits are numbered from left to right, beginning with the first byte in the array, as illustrated below:
Bit numbers start with 0 and match key codes. For example, bit 0x3c corresponds to the A key, 0x3d to the S key, 0x3e to the D key, and so on. The first bit is 0x00, which doesn't correspond to any key. The first meaningful bit is 0x01, which corresponds to the Escape key.
When a key is down, the bit corresponding to its key code is set to 1. Otherwise, the bit is set to 0. However, for the three keys that toggle keyboard locks--Caps Lock (key 0x3b), Num Lock (key 0x22), and Scroll Lock (key 0x0f)--the bit is set to 1 if the lock is on and set to 0 if the lock is off, regardless of the state of the key itself.
To test the bitfield against a particular key, select the byte in the states array that contains the bit for that key's code, then compare it against a mask with the corresponding bit set. For example:
if ( states[keyCode>>3] & (1 << (7 - (keyCode%8))) ) . . .
Here, the key code is divided by 8 to obtain an index into the states array. This selects the byte (the uchar) in the array that contains the bit for that key. Then, the part of the key code that remains after dividing by 8 is used to calculate how far a bit needs to be shifted to the left so that it's in the same position as the bit corresponding to the key. This mask is compared to the states byte with the bitwise & operator.
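The same test can be packaged as a small helper function (an illustrative sketch, not a function defined by the Interface Kit):

```cpp
#include <cassert>

typedef unsigned char uchar;

// True if the key with the given code is down in the states bitfield.
// The byte is selected by keyCode / 8 (written as a right shift by 3);
// within that byte, bits are numbered from the left, so the bit's
// position from the low end is 7 - (keyCode % 8).
// Illustrative helper only--not part of the Interface Kit.
bool KeyIsDown(const uchar states[16], int keyCode)
{
    return (states[keyCode >> 3] & (1 << (7 - (keyCode % 8)))) != 0;
}
```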
The classes in the Interface Kit work together to define a program structure for drawing and responding to the user. The two classes at the core of the structure--BWindow and BView--have been discussed extensively above. Other Kit classes either derive from BWindow and BView or support the work of those that do. The Kit defines several different kinds of BViews that you can use in your application. But every application does some unique drawing and has some application-specific responses to messages, so it must also invent some BViews of its own.
To learn about the Interface Kit for the first time, it's recommended that you first read this introduction, then look at the BView and BWindow class descriptions, followed by the descriptions of other classes as they interest you. It also might be useful to look at supporting classes--like BPoint and BRect--early.
The class overview should help you determine which specific functions you need to turn to in order to get more information about a class. The class constructor is often a good place to start, as it contains general information on how instances of the class are initialized.
If you haven't already read about the BApplication object and the messaging classes in the Application Kit, be sure to do so. A program must have a BApplication object before it can use the Interface Kit.
A reference to the Interface Kit follows. The classes are presented in alphabetical order, beginning with BAlert.
The Be Book, HTML Edition, for Developer Release 8 of the Be Operating System.
Copyright © 1996 Be, Inc. All rights reserved.
Be, the Be logo, BeBox, BeOS, BeWare, and GeekPort are trademarks of Be, Inc.
Last modified September 6, 1996.