Choosing a Compiler: The Little Things

by Michael Barr, author of Programming Embedded Systems in C and C++
12/29/2003

Author's note: Let's face it — there's nothing sexy about the topic of cross compilers. If we were to draw an analogy between embedded software developers and carpenters, we might say that our cross compilers are most like their screwdrivers. We couldn't get the job done without one, but we spend very little time thinking about how they work or how they could make our work easier. But little differences between compilers can make a big difference in our success or failure on a given project.

Most of the time our choice of compiler is limited. It may be dictated to us by the hardware or system designer's choice of a processor or by our own choice of a real-time operating system or debugging tool. In such cases, we must put up with all of the annoyances of the particular compiler we're tied to. But what if you have more choice? What are the little things you should look for when comparing two or more cross compilers that will both work with your required hardware and software?

Just to be clear, let's understand that we're talking about C/C++ compilers that run on a PC, Mac, or Unix workstation and produce code for a specific target processor that is used in an embedded system. That's what we mean by a cross compiler. The target processor may be an 8-bit microcontroller, a 32-bit microprocessor, or even a DSP. Neither the host platform nor target processor matters for the purposes of this discussion.

As with life in general, the little details are easily overlooked, and yet they often matter most to our happiness in the long run. Little details make our use of a particular cross compiler easier and reduce our frustration with the project as a whole. Ideally, you should never have to think about your compiler. It should simply be a tool that you use to turn algorithms and rules for system behavior into executable programs.

It's obvious to most embedded systems designers that the efficiency and compactness of the code produced by a compiler are often critical to the success of a project. If that's the case for you, be sure to select the best compiler in that regard. But if more than one compiler satisfies those requirements, or if those issues are less important on your next project, you can make your decision based on little differences like those described below.

Inline Assembly

Though it has been more than 25 years since the introduction of the C programming language, it is still commonplace to use some amount of assembly language when developing software for embedded systems. On almost every project I've worked on there have been a few critical functions or algorithms that ran significantly faster when re-implemented by hand in assembly.

But interfacing assembly language routines with high-level language functions can be difficult. The programmer must study the parameter passing rules for function calls in the high-level language to learn what registers should be saved and restored on function entry and exit and how to return any result to the calling function. These are details that are much better handled by the compiler than the programmer. And there is often no advantage to implementing the entire function in assembly. Rather, all of the speedup can be achieved with just a few instructions of assembly placed strategically within the larger C/C++ function.

For example, early in my career I implemented a digital filtering algorithm in C and targeted it to a TI TMS320C30 DSP. The only cross compiler available to us at that time was unable to take advantage of a special processor instruction that performed exactly the mathematical operations I needed, and did so extremely fast. By replacing one of the for() loops in the filtering function with that one special 'C30 instruction, I was able to speed up the overall calculation by more than a factor of ten.

The compiler feature that made this possible was called inline assembly. This feature is not available in all cross compilers, and there is no standard for how it is implemented. The best implementations I have seen simply add a new asm keyword to the C language. Whatever follows on that line (or within the brace-enclosed block that follows) is assembled rather than compiled. Even better, you can still refer to variables and other symbols of your C/C++ program within the assembly language code. You need not know in advance which register or memory location the compiler will select as the container for the data you need.

Listing 1 contains a simple example of the use of inline assembly. In this example, assembly language is used to access an I/O port on an 80x86 processor. That processor family's in and out instructions cannot be invoked directly from C/C++. Assembly language is necessary at some level. Implementing it within the high-level language function is attractive because there is no programmer overhead involved in saving and restoring registers and because it is more efficient than calling a general-purpose I/O wrapper function (like inport() or outport() from <dos.h>).

#define LEDPORT  0xFF5E         /* LED Control Register (I/O space)   */

/**********************************************************************
 *
 * Function:    setLedMask()
 *
 * Description: Change the current state of a set of 8 LEDs.
 * 
 * Notes:       This function is 80x86-specific.
 *
 * Returns:     The previous state of the LEDs.
 *
 **********************************************************************/
unsigned char
setLedMask(unsigned char newMask)
{
    unsigned char  oldMask;
    
    asm {
        mov  dx, LEDPORT        /* Load the address of the register.  */
        in   al, dx             /* Read the current LED state.        */
        mov  oldMask, al        /* Save the old register contents.    */
        mov  al, newMask        /* Load the new register contents.    */
        out  dx, al             /* Modify the state of the LEDs.      */
    };

    return (oldMask);

}   /* setLedMask() */

Listing 1. Inline Assembly Example

Of course, the asm keyword is not a part of the ANSI C standard. And I'm sure that some people might argue that extending the language in a non-standard way like this is a bad idea. Don't get me wrong. I do think standards are a good thing, especially language standards. But let's face it. You're writing software for one particular embedded system, and if you need to use assembly language at all then your program will not be easily portable. In the embedded systems case, code portability is not as important as the ease of getting the program right the first time, even on a target processor you aren't that familiar with. Inline assembly makes the programmer's life easier, and it should be considered an important feature of your next cross compiler.

Interrupt Functions

Another desirable feature for a cross compiler is the interrupt type specifier. This non-standard keyword is a common addition to the C language for the PC platform. When used as part of a function declaration, it tells the compiler that that function is an interrupt service routine (ISR). The compiler can then generate the extra stack information and register saves and restores required for any ISR. (A good compiler will also prevent a function declared this way from being called by some other part of the program.)

It should be clear that the overhead associated with entering and exiting an ISR is no more or less in C/C++ than it is in assembly. Either way, the same set of opcodes must appear at the beginning and end of that block of code. It's within the body of the ISR that efficiency issues may arise. If the ISR is not particularly time-sensitive, the entire ISR could be written in C/C++. This would certainly make the implementation easier to write and understand. However, it is likely that the programmer will want to augment his high-level language ISR with inline assembly where it will improve performance.

The advantages of the interrupt keyword are similar to those of inline assembly. The programmer doesn't have to know as much about the ISR requirements of a particular processor. He need not know which additional registers must be saved and restored, nor which special instruction, if any, is used to return from an interrupt. All of this makes his program more likely to work on the first try.

I have also seen this feature implemented as a processor-specific #pragma. For example, the GNU compiler (GCC) recognizes #pragma interrupt to mean that the next function in the file should be treated as an interrupt service routine. Unfortunately, only a few target processors (Hitachi H8/300 and Tandem ST-2000) are supported by this feature at this time.

If I were in the business of writing and selling compilers myself, I think I'd take the interrupt feature one step further. It's not a big stretch for the compiler to understand the structure of a processor's interrupt vector table. (This is the processor's table of addresses of ISRs, indexed by interrupt type.) That being the case, it would be simple to add an interrupt type to the ISR marker (e.g., #pragma interrupt(0x1E)). This would make automatic generation of the interrupt vector table possible and eliminate the potential for programmer misunderstanding or error.

Of course, all of these features should be options. I'm not suggesting that every project or programmer would be well served by C-wrappers for entirely assembly language ISRs, or by a tool that generates the interrupt vector table automatically. But there are many situations in which the programmer's task would be made easier and the entire project finished more quickly as a result.
