Optimization III: Selecting and Configuring InterBase Server Hardware

2004 Borland Convention

by Craig Stuntz

http://blogs.teamb.com/craigstuntz/articles/IBOptimization3.aspx

This session is part of a three-part series on creating high-performance systems using InterBase as the database back end. Part one (Optimization I: Optimizing InterBase Applications) explains how to optimize applications; part two (Optimization II: Optimizing InterBase SQL and Metadata) details how to optimize InterBase SQL and metadata. "Optimization III: Selecting and Configuring InterBase Server Hardware" completes the series.

Introduction

This paper explains how to specify and configure hardware for maximum InterBase server performance. There are some general principles which apply to nearly all installations, but other decisions are dependent upon the nature of your particular application. Where it is not possible to give a single recommendation which works best for all users, the paper will explain how to test and determine the best fit for a given installation.

Before trying to address performance problems with hardware configuration, make sure that your application, its use of SQL, and the metadata of your database are already optimized. If this is not the case, no amount of hardware tweaking will deliver optimal performance. Parts I and II of this series cover these subjects in detail.

The scope of this paper will be limited to how hardware choices affect InterBase.  This paper is not intended to be a general guide to buying or understanding computer hardware.

Specifying Hardware

For those lucky souls who have been given a budget to acquire a new system on which to run InterBase server, here are some important items to consider when selecting a machine or machines.

How Many Servers are Needed?

Indeed, the first item to consider is how many servers are needed for a particular application. It is generally unwise to run a second resource-intensive application on the same server machine as the InterBase server.  For example, it is probably a bad idea from a performance point of view to run Windows Terminal Server and InterBase server on the same machine, as both of these can be CPU and hard disk intensive. In general, the InterBase server should have a machine for itself unless there is a very good reason to put another application on that machine.

On the other hand, there are times when it is wise to run a resource intensive application which uses InterBase and the InterBase server on the same machine. The basic issue is resource contention vs. network bandwidth. Resource contention means a speed limitation caused by two processes which both need to use a hardware resource such as the CPU or the hard disks a lot. Network bandwidth is a limitation caused by the finite number of bits which can be squeezed through a TCP/IP connection in a given amount of time.

In some cases, the speed penalty imposed by network bandwidth can be so great that it outweighs the speed penalty imposed by resource contention. This is sometimes true when running an application server or an ISAPI or CGI object which uses InterBase.

The speed penalty imposed by resource contention can be minimized by partitioning the server machine -- use an SMP server with multiple hard drive arrays and use operating system and installation parameters to give each process its "own" hardware within the single machine. This will be discussed in more detail later on.

The speed penalty imposed by network bandwidth can be minimized by installing a fatter pipe between the application server and the InterBase server, such as Gigabit Ethernet.

In extreme cases, even having a dedicated InterBase server machine and a highly optimized application may not deliver the required performance for an application. In these cases, it will be necessary to use multiple InterBase servers and some form of data replication to meet performance specifications.

CPU

Not surprisingly, faster CPUs tend to deliver better performance. However, there are a few important points to consider.

In my experience, hard disk performance tends to become a bottleneck before the CPU does. Since people occasionally tend to view system performance only in terms of CPU GHz, it is important to remember that CPU speed is only one part of the performance equation.

CPU selection boils down to two questions: Which CPU and how many?

SMP

SMP stands for Symmetric Multiprocessing, the ability of a single computer to use the power of multiple CPU chips concurrently.

Only InterBase 7 and higher can use multiple processors concurrently, and then only if CPU licenses are purchased. However, even older versions of InterBase may benefit from running on an SMP server if the InterBase server has to coexist with another resource-intensive application. In this case it is possible to set the processor affinity of each process so that each has a processor (or more) to itself. On the other hand, if a version of InterBase prior to version 7 is to have a machine to itself, then you should purchase a fast single-processor box rather than an SMP machine.

InterBase 7 can use multiple processors when it is asked to do multiple things concurrently; for example, if two users are running queries at the same time. SMP support does not help InterBase when there is only a single connection to the server.  In fact, it can help performance to disable multithreading (by setting MAX_THREADS = 1 in ibconfig) when only one connection will use the server for an extended period of time.

If you are running an older version of InterBase server (prior to version 7) on a machine by itself, then SMP will probably not help performance, and may even hurt it. If running an older version of InterBase on an SMP server seems to hurt performance, this can be fixed by setting the processor affinity to a single processor for the ibserver process.

Hyper-Threading

Hyper-threading is essentially SMP on a single chip. A single physical processor is partitioned into two different logical processors. For a detailed explanation of hyper-threading, read this article on arstechnica.com.

This document at ibm.com discusses Linux support for hyper-threading. This document (regrettably, in Microsoft Word format) from microsoft.com describes Windows server support for hyper-threading.

Since hyper-threading and InterBase 7.0 were introduced very close together chronologically, IB 7.0 has no explicit support for hyper-threading. InterBase 7.1, however, introduces support for hyper-threading on the Windows platform. This means that InterBase 7.0's performance will be improved by a hyper-threading processor, but only if an additional CPU license is purchased for the "second" processor (since InterBase 7.0 cannot distinguish between physical and logical processors). However, it is not Borland's intention to charge their customers for hyper-threading support, and the update to InterBase 7.1, with explicit support for hyper-threading CPUs and no additional license requirement, is free.

If you install InterBase server on a server with a hyper-threading CPU and an OS with explicit support for hyper-threading, such as Windows Server 2003, you should make sure that you are using InterBase 7.1 Service Pack 2 or a later version of InterBase, as the hyper-threading support was improved in this version of InterBase. You need to turn on InterBase's hyper-threading support by uncommenting the ENABLE_HYPERTHREADING=1 line in the ibconfig file. Read Borland's "How to Use InterBase with Multiple Processors and Multithreaded Applications" for details.
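As a sketch, the relevant entry in the ibconfig file looks roughly like this; check the comment block in the ibconfig file shipped with your version for the exact spelling and default:

```
# ibconfig -- uncomment to enable explicit hyper-threading support
# (InterBase 7.1 SP2 and later, Windows only)
ENABLE_HYPERTHREADING	1
```

Restart the InterBase server after editing ibconfig so the new setting takes effect.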

InterBase for Linux does not at this time include explicit support for hyper-threading. So just like InterBase 7.0 on Windows, InterBase on Linux can benefit from a hyper-threaded CPU only if an additional CPU license is purchased for the "second" logical processor or hyper-threading is disabled in the BIOS settings. Hopefully this support will be added in a future version of InterBase for Linux.

When running older versions of InterBase (prior to 7) on a hyper-threading processor, you should either disable hyper-threading in the BIOS settings (preferably) or use the process affinity mask to constrain InterBase to a single logical CPU, in order to prevent performance problems similar to those that older versions of InterBase server experience on an SMP system.

Hard Disk

Selecting a fast and reliable hard disk drive configuration is one of the most important decisions to make when configuring an InterBase server. You will get more performance bang for your buck from a fast hard drive than just about any other area of server configuration, with the possible exception of networking hardware.

Hard disk drives can transfer data very quickly when they are reading adjacent tracks, but they must interrupt this transfer every time they need to find data on another part of the disk. For this reason, the InterBase server is designed to read data in storage order whenever possible. This works well when the InterBase server does not have to fight with another process for use of the disk. For this reason, it is a good idea to store the InterBase database on a separate physical hard drive. In a simple configuration, put the operating system and its virtual memory swap file, along with all applications, on one drive and the InterBase database on a second drive. If you have more physical drives available, consider giving the virtual memory swap file and the InterBase database each a drive to themselves.

IDE vs. SCSI

A complete discussion of the pros and cons of IDE and SCSI drives is outside of the scope of this paper. See the mass storage section of arstechnica.com if you'd like to read more on this subject. For the purposes of this article, it is enough to say that there are no InterBase-specific reasons to choose one interface over the other. The faster the drive, the faster InterBase will perform, no matter which type it is.

RAID

RAID stands for Redundant Array of Inexpensive Disks. RAID controllers use multiple disk drives to increase speed, reliability, or both. Different RAID configurations provide different benefits. It is important to understand that some RAID configurations can actually hurt speed or reliability, so choose a system which meets your needs. This article on arstechnica.com is an excellent explanation of the costs and benefits of commonly available (and not-so-commonly available) RAID configurations.

Again, there's almost nothing InterBase-specific that you need to know when selecting a RAID configuration. If you choose a RAID configuration designed to increase performance, such as RAID 0 or RAID 5, you will find that InterBase performance is boosted accordingly.

RAID configurations designed to increase HDD system reliability, such as RAID 1 or RAID 5, eliminate any need to shadow an InterBase database. As far as I know, there is no benefit to a database shadow which an appropriate RAID configuration can't also deliver, and the performance cost of a shadow and the low dollar cost of HDDs make RAID the clear winner.

Memory

InterBase server by itself uses very little memory other than the database cache (which can be very large, depending upon settings) and the sort memory size. When specifying memory requirements for an InterBase server, the best thing to do is to test to determine the optimal database cache settings for your application, add in the sort memory size, and calculate the memory requirements accordingly. This is discussed in more detail below.

Windows NT, 2000, XP, and Server 2003 systems support a 4 GB address space, but each application is limited to 2 GB of that space by default, with the remainder reserved for the OS. It is possible to raise the per-application limit to 3 GB via a boot.ini switch, but applications must be built with specific support for this feature, and, as of version 7.5, InterBase does not have that support. So InterBase itself can use a maximum of 2 GB of physical RAM, but if you intend to have IB use this much memory you'll need to install enough additional memory for OS functionality.

Networking

Networking hardware is a frequently overlooked but important component of overall InterBase system performance. Network bandwidth is frequently the first bottleneck an InterBase application will hit, particularly if the application designer is less than careful about the amount of data the client requests.

All machines in a LAN should use at least 100 Mbps Ethernet. These cards are cheap enough and the benefit is sufficient that there is no reason to keep 10 Mbps hardware around. Keep in mind that even a single piece of 10 Mbps equipment can bring the entire network down to that speed, even if all of the other devices are 100 Mbps. Use good-quality CAT-5 Ethernet cable; do not attempt to use regular telephone cable.

In a few cases, Gigabit Ethernet should be considered. For most connections, this is overkill, but if an application server and the InterBase server are on separate machines and there are a large number of active clients, the bandwidth may be great enough to justify a Gigabit Ethernet connection.  Gigabit Ethernet adapters generally cost between $200-400 each, and cable is more expensive than regular CAT-5 Ethernet cable.
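A back-of-envelope calculation makes the bandwidth tradeoff concrete. The row count and row size below are made-up illustrative numbers, and the model ignores latency and protocol overhead:

```python
# Idealized wire-time for a large fetch at common Ethernet speeds.
# Row count and row size are invented for illustration only.

def transfer_seconds(row_count, bytes_per_row, link_mbps):
    """Transfer time ignoring latency, protocol overhead, and contention."""
    bits = row_count * bytes_per_row * 8
    return bits / (link_mbps * 1_000_000)

rows, row_size = 100_000, 500   # 100,000 rows of ~500 bytes each
for mbps in (10, 100, 1000):
    print(f"{mbps:>5} Mbps: {transfer_seconds(rows, row_size, mbps):5.1f} s")
# 10 Mbps takes 40 s, 100 Mbps takes 4 s, Gigabit takes 0.4 s
```

If an application-server-to-InterBase link routinely carries fetches of this size, the jump from 100 Mbps to Gigabit can be the difference between a noticeable pause and an instant response.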

Software networking settings will be covered in more detail below.

Configuring the InterBase Server

This section is not intended to be a complete guide to configuring an InterBase server but rather focuses on those areas of system configuration which affect server performance.

Operating System

Which operating system?

In short, whichever OS you are most comfortable with. Performance differences between the same version of InterBase running on different operating systems are so minor that they're outweighed by a skilled administrator's benefit of being able to properly configure the software and the machine.

An exception is Windows 95, 98, and ME. While InterBase will run on all of these, they are not appropriate OSs for supporting a database server. Windows NT, 2000, XP, and Windows Server 2003 are all acceptable choices, but keep in mind that only InterBase 7.0 and higher support XP, and only InterBase 7.1 supports Windows Server 2003.

In terms of stability, the single most important thing you can do no matter which OS you use is reduce the number of other programs running on the InterBase server. I recommend building the server machine from scratch (formatting all hard drives and reinstalling the operating system) when possible. This both reduces the chance of a complete system failure and increases your ability to configure the system in a way which benefits InterBase.

Windows System Restore

Windows System Restore is a "feature" of Windows ME and XP. You can learn more about it in this article on MSDN. For some unfathomable reason which has never been explained by Microsoft, GDB is one of the file extensions "protected" by System Restore. This means that every time a file with the extension GDB changes, Windows will back it up. Naturally, this happens rather frequently with a database file. Because of this, the default extension for an InterBase database changed from GDB to IB with InterBase 7. If you are running an older version of InterBase on Windows XP, you need to follow these steps to prevent System Restore from killing server performance.

DNS

When a client connects to an InterBase server using the TCP/IP protocol, the first thing the client needs to do is to use the DNS services to look up the address of the server. Depending upon how well the local network is tuned, this can sometimes take as much as a couple seconds to do. If the client only ever connects to a few InterBase servers and if those servers have static IP addresses, as is often the case, this lookup can be bypassed by hard-coding the IP address for the server on the client. This can significantly increase the speed of connect times by eliminating the DNS lookup.

On a Windows system, for example, add the server's IP address to the {Windows root directory}\system32\drivers\etc\hosts file.
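A minimal hosts entry might look like this; the address and host name below are examples, so substitute your server's actual static IP and name:

```
# {Windows root directory}\system32\drivers\etc\hosts  (on Linux: /etc/hosts)
192.168.1.50	ibserver
```

Clients can then connect using a path such as ibserver:C:\data\mydb.ib (path shown as an example) without incurring a DNS lookup.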

InterBase Configuration Parameters

InterBase is intended to be a near-zero maintenance database, and there's no need to spend hours tweaking configuration parameters. Simply installing the server and accepting the defaults will produce acceptable performance in many cases. There are a few settings the administrator can alter, and I'll discuss the more important options here. All of these settings can be altered in the ibconfig file, though a few of them can also be changed in other places.

Database Linger

Database linger is a feature new in InterBase 7.5 which allows the server to remain active after all users have disconnected. This means that the work of garbage collection is somewhat less likely to interfere with the work of interactive users, memory on the heap can be reclaimed, and it is no longer necessary for the server to flush and refresh the shared cache when the last user disconnects and the first user connects. In previous versions of InterBase, garbage collection and the like would cease when no users were logged in. I recommend activating Database Linger unless the server is shared with other applications and having disk/memory load from the IB server when it is not in use is unacceptable.

To enable database linger, type ALTER DATABASE SET LINGER INTERVAL <seconds> in IBConsole, where <seconds> is a large enough number to prevent the server from disconnecting the DB during the average interval when no user is connected to the DB.
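As a sketch, from an isql session (the file name, password, and the one-hour interval are examples):

```
CONNECT 'C:\data\mydb.ib' USER 'SYSDBA' PASSWORD 'masterkey';
ALTER DATABASE SET LINGER INTERVAL 3600;  /* one hour, as an example */
COMMIT;
```

Pick an interval longer than the typical gap between the last disconnect and the next connect, so the cache and garbage-collection state survive the quiet period.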

Processor Affinity and Hyper-Threading

If running InterBase 7 on an SMP machine where multiple clients use the InterBase server concurrently, maximum performance may be obtained by purchasing one or more CPU licenses so that the InterBase server can use all processors on the machine. InterBase server can use a single processor, plus one additional processor for each CPU license purchased.

If InterBase and another processor-intensive application such as an application server are sharing a single SMP machine, you may want to set the processor affinity for each process to prevent the operating system from switching each process back and forth between the physical processors. This is usually a good idea if the CPU load imposed by the IB server and the application server is relatively consistent. If the load from different processes varies quite a lot, then it may be more efficient to let the operating system handle processor scheduling (in other words, do not set processor affinity for any process). Only operational testing can determine which is truly the best configuration for your particular installation.

In InterBase 6.0 and earlier, the freeware IB_Affinity program can be used to bind InterBase to a CPU. In InterBase 6.5 and higher, the CPU_AFFINITY parameter in the ibconfig file can be used to specify which processors will be used to run InterBase Server. Since only InterBase 7 and higher can exploit multiple CPUs concurrently, it is generally a good idea to bind earlier versions of InterBase to a single CPU.
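In ibconfig, CPU_AFFINITY takes a bitmask of allowed processors; the values below are examples, and you should verify the exact semantics against the comments in your version's ibconfig file:

```
# ibconfig -- bind ibserver to specific CPUs (bitmask)
# 1 = CPU 0 only, 2 = CPU 1 only, 3 = CPUs 0 and 1
CPU_AFFINITY	1
```

For a pre-7 server sharing an SMP box with an application server, binding InterBase to one CPU and the other process to the remaining CPU(s) gives each its "own" hardware, as described above.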

If you are running InterBase on a server with one or more hyper-threading CPUs, make sure you are using InterBase 7.1 SP 2 or later, as hyper-threading support was enhanced in this release, and an operating system such as Windows Server 2003.

Forced Writes

Forced writes is a performance vs. database stability tradeoff. With forced writes on, InterBase will attempt to always keep the database file in a stable state on disk. With forced writes off, InterBase will write to the database file in the most efficient manner possible with respect to performance, even if that means that the database may be in an unstable state on disk for some time. As long as the server does not crash, everything is fine and performance is significantly increased. If the server does crash, you may find that the database file is corrupt.

In versions of InterBase prior to 7.5, forced writes vs. no forced writes was a "take it or leave it" proposition. InterBase 7.5, however, introduces two new features which give a "middle ground" between the speed of no forced writes and the data security of forced writes: group commit, which batches writes when forced writes is on, and a configurable database flush interval, which periodically flushes the file to disk when forced writes is off.

So in InterBase 7.5 there are four choices instead of two in the stability/performance tradeoff:

← More Data Stability | Better Performance →

1. Forced writes, no group commit
2. Forced writes, group commit
3. No forced writes, database flush interval
4. No forced writes, no database flush interval

The following conditions should all be true before forced writes is disabled:

That's a lot to check, because turning forced writes off essentially amounts to betting that your system is stable. The performance benefit is worth the work, however.

Database Cache (Buffers)

The InterBase database cache represents the majority of the memory used by the InterBase server. The amount of memory allocated by the InterBase server for the database cache depends upon the number of databases in use, the cache size, and the page size of each database in use. The database cache is shared by all users of a particular database, but each different database in use has its own cache.

Cache size can be specified per server, per database, per attachment, or any combination of these. If no setting is specified anywhere, the server's default will be used. The default varies depending upon the version of InterBase -- for InterBase 7 it's 2048 DB pages per DB in use. The interaction between these settings is discussed in this article by Ivan Prenosil, but to keep things simple I recommend choosing one method of setting the cache size and sticking with it. Unless you're regularly creating and deleting databases, I suggest setting the database cache at the database level using gfix -buffers or Database Properties in IBConsole. Setting the cache size at the database level prevents attachments from increasing the cache size (this is not a setting that you want clients to control!) and it does not affect the cache allocated for the security database, admin.ib. Note that when setting the database cache value at the database level the new setting will not take effect until all connections to the database are dropped.

While the default settings produce a useable server and avoid using so much memory as to interfere with other applications, increasing the cache size can often improve performance, and is recommended especially on a server machine which is dedicated to InterBase. Care should be taken not to set the cache size too high, as forcing the operating system to use virtual memory for the cache will hurt performance instead of improving it. In short, don't direct IB to use more memory than is physically present on the machine and available to IB (i.e., memory not in use by the OS and other concurrent processes).

Versions of InterBase prior to 6.5 also had a bug which could cause performance problems in some cases when the cache size was set too high. Popular mythology pegged this limit at 10000 buffers, but it's nonexistent in InterBase 6.5 and later. The best setting will depend upon the amount of memory installed on the server, what else is running on the server, the version of InterBase you're using, and what sort of queries you're running. The maximum number of page buffers was increased to 131070 in InterBase 7.5.

To determine the ideal database cache size, first calculate the practical upper limit by determining how much physical memory is available. Note that this will always be less than the amount of physical memory installed on the server. Use operating system tools to determine this value. Next, for each database in use, determine the page size. This information is available in the Database Properties dialog in IBConsole.

For example, if you plan to use a single database with a page size of 4096 on a server with 256 MB of free physical RAM, the absolute maximum database cache setting would be 65,536 pages (256 MB ÷ 4 KB). However, the security DB will get its own cache, and you probably don't want to use every available byte of memory for the database cache, as InterBase requires additional memory (e.g., the sort buffer) and the memory requirements of other programs and the OS itself change over time. So the practical upper limit in this case is probably in the neighborhood of 50,000-60,000 pages.
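The arithmetic above can be sketched as follows; the RAM and page-size figures are illustrative, so measure free memory on your own server:

```python
# Back-of-envelope sizing for the InterBase database cache.
# All figures are illustrative; measure free RAM on your own server.

def max_cache_pages(free_ram_bytes, page_size):
    """Hard ceiling on page buffers that fit in free physical RAM."""
    return free_ram_bytes // page_size

def cache_memory_bytes(page_buffers, page_size):
    """Memory consumed by a cache of page_buffers pages."""
    return page_buffers * page_size

free_ram = 256 * 1024 * 1024    # 256 MB of free physical memory
page_size = 4096                # page size of the database in use

ceiling = max_cache_pages(free_ram, page_size)
print(ceiling)                  # 65536 pages: the absolute maximum

# Leave headroom for the security database's cache, sort memory, and
# the OS itself; a practical starting point is well below the ceiling.
for buffers in (50000, 60000):
    mb = cache_memory_bytes(buffers, page_size) / (1024 * 1024)
    print(f"{buffers} buffers uses {mb:.0f} MB")
```

The same two functions can be rerun for each database on a server that hosts several, summing the results against available RAM.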

Once you know the upper limit, start testing. The best test is to use the most server-intensive portion of your real-world application. Another simple test is to back up and restore the database. Start with the default cache setting to get a good baseline. Test, then change the cache setting to the maximum value. Test again, then pick a value halfway between these two. Keep bisecting values until you have a reasonable cache-to-performance map for your application and your server.

Networking Settings

Network bandwidth, discussed above, is only one of the performance issues affecting communications between client and server. Other potential issues include DNS lookups and the number of packets sent back and forth.

Imagine an application needs to SELECT data from InterBase. This involves a number of network calls from the client to the server, and replies from the server to the client. The statement is prepared, parameters are bound, the statement is executed, multiple (perhaps hundreds of) rows are fetched, and the statement is unprepared. All of this back and forth between client and server imposes a certain amount of overhead.

In order to keep the number of packets sent back and forth to a minimum, InterBase will fill up network packets, which have a fixed size, with as many records as will fit, even if they have not been requested by the client yet. This way, they will already be in the InterBase client's memory when the application requests them, and no additional network calls will need to be made.

By changing the TCP_REMOTE_BUFFER size in the ibconfig file, it is possible to change the size of the TCP/IP packet. This allows the server to send more or less data across the network at a time. Increasing this setting may be a good idea if your application regularly fetches a large number of records and is saturating the available network bandwidth, but keep in mind that it can also make non-fetching operations slower. In most cases there will not be a need to change this setting.
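The setting is a byte count in ibconfig; the value below is only an example, and the valid range for your version is documented in the file's comments:

```
# ibconfig -- bytes per network buffer; larger values favor big fetches
TCP_REMOTE_BUFFER	16384
```

Benchmark your application's typical fetch pattern before and after changing this, since a larger buffer can slow small, chatty operations.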

Another cause of networking-related performance problems is failing network hardware. A broken router or NIC, or a spotty cable can cause many network packets to be lost. This means the network clients must continually resend information, hurting performance, or even making the whole system fail. InterBase 7.1 makes it much easier to diagnose which piece of hardware is causing problems by stamping error messages in the interbase.log file with the name of the client whose connection caused the error. The basic strategy for figuring out what is causing the problem is to replace one piece of network hardware at a time until the errors go away. Knowing which client's connection is causing the problem significantly reduces the number of items to check.

Shadowing

RAID has eliminated any real need for database shadows. Use shadows only when additional data file stability is needed, but is not important enough to justify the cost of a RAID controller/array, and when performance isn't important. When shadows are used, they should be on a separate physical hard drive to reduce the risk of hardware failure.

Temporary File Location

The TMP_DIRECTORY line in the ibconfig file tells the server which location to use when it is sorting a result set too big to fit in memory and needs to write a file to disk. The query will fail with a cryptic message to the end user if this location doesn't exist or runs out of space. By default, C:\TEMP is used. To override this, uncomment TMP_DIRECTORY and follow it with the preferred location, which must be enclosed in double quotes, and the maximum amount of space to use, in bytes. You can include more than one TMP_DIRECTORY entry in ibconfig if no single drive will have a sufficient amount of free space.
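A sketch of the entries (the paths and sizes are examples, and the exact argument order should be checked against the comments in your ibconfig):

```
# ibconfig -- sort-file locations, each with a cap in bytes
TMP_DIRECTORY	"D:\ib_sort"	500000000
TMP_DIRECTORY	"E:\ib_sort"	500000000
```

Listing two directories, as here, lets a sort spill onto a second drive when the first reaches its cap.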

Sort Memory

In InterBase 7.1 Service Pack 1 you can tell the IB server to use more RAM when sorting records in memory by changing the SORTMEM_BUFFER_SIZE value in ibconfig.

Sweep

The sweep interval controls when the server will sweep the database. The explanation for what this number means is more complicated than I can cover in this paper, but I do explain it in detail in my article "Understanding Transaction Lifetimes." Generally you'll want to leave this at the default setting, but since the sweep can interfere with foreground operations in older versions of InterBase you can set it to 0 and then manually sweep the database using gfix at night. The sweep's operation should not interfere with foreground operations in InterBase 7.1, however.

The opposite problem can happen, though. If you are doing heavy, nonstop OLTP it is possible to create so many old record versions that the garbage collector cannot keep up, because it is yielding to foreground threads which are creating more old record versions. In this case you can change the SWEEP_YIELD_TIME to 0 in ibconfig. This tells the sweep not to give up its timeslice to user threads. InterBase 6 and earlier do not have this feature.

Security

Security has nothing to do with performance and is therefore outside the scope of this paper. However, when configuring a new server it is important to realize that InterBase security depends upon correct use of operating system security. In particular, the directory containing InterBase database files should be readable and writeable only by the process which is running the InterBase server, and you should change the password of SYSDBA to be something other than the default.

Operational Testing and Adjustments

The title of this section is a bit of a misnomer. InterBase works reasonably well out of the box. As detailed above, some attention to specifying hardware and configuring the small number of options provided can significantly improve performance in medium-to-high-demand installations. But InterBase does not require a full-time DBA or continuous tuning to maintain this level of performance.

There are a few things you can do to keep things running smoothly, which I'll detail here. But real-time tweaking should never be required, and if you never do anything beyond backing up the DB files from time to time, that's OK.

Backup Regularly

InterBase's included backup applications, the command-line gbak and the backup function in IBConsole, can be used while users are active in the database. InterBase backup runs in a SNAPSHOT transaction and therefore backs up a version of the database representing all committed transactions at the instant the backup started.

No other backup tool can be used while the InterBase server is active!

If you need to back up your InterBase database using a non-InterBase backup tool, then you should first back up the database using InterBase's backup tool and, when that is complete, back up the backup file using your non-InterBase backup tool. In other words, the non-InterBase backup tool should be set to exclude .GDB and .IB files.
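A command sketch of that two-step procedure (paths, user name, and password are examples):

```
# 1. Safe online backup with gbak while users are still connected:
gbak -b -user SYSDBA -password masterkey C:\data\mydb.ib D:\backup\mydb.ibk

# 2. Point the file-level backup tool at D:\backup\mydb.ibk only,
#    excluding *.ib and *.gdb from its backup set.
```

The .ibk file is an ordinary, self-contained file once gbak finishes, so any file-level tool can copy it safely.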

Restore When Possible

Restoring an InterBase backup rebuilds the database from scratch. This has the nice side effect of rebuilding all indices, updating index statistics, and redistributing data across data pages in the database file, which can speed everyday operations. But restoring from a backup means that nobody may make changes to the original database from the time the backup begins until the time the restore is complete, so it can only be done during off hours. It's a good idea to use IBConsole or gfix to shut down the database while doing this. Shutting down a database prevents anyone other than SYSDBA or the DB owner from connecting, and thus reduces the chances that night-owl workers will lose work they accidentally do during the backup and restore cycle.

I recommend always restoring to a different filename than the source database; never overwrite the source. When the restore is complete, rename the source file and the restored file, so that new connections will reach the restored database. There are a couple of benefits to doing this. If anything goes wrong during the backup and restore, you'll still have the source database. Also, InterBase server does not prevent clients from connecting to the restored database during the restore, but the results that a client sees while connected to a partially-restored database can be unexpected, to say the least.
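A sketch of the cycle on Windows (file names, user, and password are examples):

```
# Shut the database down so only SYSDBA or the owner can connect:
gfix -shut -force 0 -user SYSDBA -password masterkey C:\data\mydb.ib

# Restore the backup to a NEW file; never overwrite the source:
gbak -c -user SYSDBA -password masterkey D:\backup\mydb.ibk C:\data\mydb_new.ib

# Only after gbak reports success, swap the file names:
ren C:\data\mydb.ib mydb_old.ib
ren C:\data\mydb_new.ib mydb.ib
```

If anything goes wrong mid-restore, mydb.ib is untouched and service can resume immediately.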

Check for Database Corruption

As I explain in my article Known Causes of Corruption in InterBase Databases, InterBase is a very stable database server and corruption is rare. When corruption does occur, then, it is important to determine the cause of the corruption, because it indicates that something is wrong with the installation and it is important to fix it. Database corruption should not be a routine occurrence.

It is a good idea to run Database Validation or command-line gfix from time to time so that if corruption does occur it can be caught early, before the damage to database is too severe to repair. If database corruption is detected, consult the linked article for possible causes and solutions.

Diagnose Runtime Performance Issues

InterBase 7.0 includes powerful features to analyze server use in real time. InterBase Performance Monitor (included in IBConsole in InterBase 7.1) is a GUI tool which makes using these features easy. If your users are reporting performance problems, point Performance Monitor at your InterBase server when the server seems slow.

You can check to see if particular SQL statements are bogging down the server by clicking the Statements tab and clicking the Quantum column in the grid to resort the grid by the amount of work the server must do to execute the statement. If you find statements which seem to be causing the server to do a lot of work, you can determine who is running the statements by clicking the Find Attachment button. You can also look at the SQL and analyze it in a tool such as InterBase PLANalyzer.  SQL optimization is discussed in detail in the companion paper, Optimization II: Optimizing InterBase SQL and Metadata.

Also, you can check for long-running transactions by selecting the Transactions tab and clicking the Elapsed time column to resort the grid by the length of the transaction. Transactions active for hours on end can slow down the server, although in InterBase 7.1 it is possible to keep transactions open indefinitely if certain transaction options are used. This is discussed in detail in the companion paper, Optimization I: Optimizing InterBase Applications.

Conclusion

It's hard to go terribly wrong when specifying an InterBase server, because InterBase works well on a wide variety of hardware. This article gives some suggestions for how to apportion your spending, but don't fret if someone hands you a server and tells you to make it work -- if it's a decent machine at all, InterBase will probably be happy.

Regarding server configuration, the advice in this paper should help you make your InterBase server run faster, but shouldn't be considered a prerequisite to each and every installation you do. Running the installer should be enough to give you a working server with reasonable performance. The rest is icing on the cake.