Target 32- and 64-bit Platforms Together with a Few Simple Datatype Changes

In the early 1990s, 64-bit systems were considered “a solution waiting for a problem.” In 2005, however, this technology is rapidly gaining a critical mass of users. Even if you’re aiming at 32-bit platforms for the time being, adding 64-bit compliance to your coding checklist guarantees a smooth migration to 64-bit platforms in the future. This solution shows you how to write dually-targeted code.

How do you write single-source code that can be deployed on both 32- and 64-bit environments without rewriting it?

Follow the 64-bit compliance coding guidelines.

When I’m 64
64-bit architectures offer several advantages:

  • Large file support: a large file is one that contains 2GB of data or more.
  • 64-bit addressing: a 64-bit process has a virtual address space of 16 exabytes; that’s about four billion times larger than the current upper limit of 4GB.
  • Breaking the 4GB RAM barrier: on 64-bit systems, the theoretical upper limit of physical RAM is 16 exabytes.
Author’s Note: The memory units used in 64-bit environments are:

  • 1 terabyte = 1,024 gigabytes
  • 1 petabyte = 1,024 terabytes = 1,048,576 gigabytes
  • 1 exabyte = 1,024 petabytes = 1,048,576 terabytes = 1,073,741,824 gigabytes

To benefit from these advantages (and others) you must recompile and relink the original source files using a 64-bit compiler and 64-bit libraries, respectively. The good news is that existing 32-bit executables will still run on a 64-bit kernel. The converse isn’t true; 64-bit binaries will only run on a 64-bit kernel.

ILP32 Versus LP64
The problems associated with writing 64-bit compliant programs stem from the different datatype models of 32-bit and 64-bit systems. Under the 32-bit datatype model, known as ILP32, int, long, and pointers are all 32 bits wide. Under the 64-bit datatype model, called LP64, int still occupies 32 bits, whereas long and pointers occupy 64 bits. The following table summarizes the differences between the two models with respect to the number of bits in each fundamental type:

C/C++ native type                  ILP32                      LP64
char                               8                          8
bool                               usually the same as char   usually the same as char
short                              16                         16
int                                32                         32
long                               32                         64
long long                          64                         64
pointers (of data and functions)   32                         64
float                              32                         32
double                             64                         64
long double                        implementation-dependent   implementation-dependent

Platform-independent Datatype Widths
When writing dually-targeted code, it is important to clearly define which objects should have a fixed width, regardless of the target architecture. Fixed-width datatypes may be required in the following cases:

  • Reading data from a network connection
  • On-disk data that was written by a 32-bit application
  • Interfacing with certain binary standards

For such fixed-width datatypes, use the standard typedefs defined in <stdint.h>:

int8_t    int16_t    int32_t    int64_t     // signed
uint8_t   uint16_t   uint32_t   uint64_t    // unsigned

Strictly speaking, <stdint.h> is a C99 header. However, most C++ compilers nowadays support it.
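For example, a record that must have the same on-disk layout on both platforms can be declared entirely with fixed-width typedefs. The FileHeader struct and its fields below are illustrative, not from any particular format:

```cpp
#include <cstdint>

// This layout is identical under ILP32 and LP64 because every
// member has an explicit, platform-independent width.
struct FileHeader
{
    uint32_t magic;       // format signature
    uint32_t version;     // file-format version
    int64_t  payload_sz;  // payload size in bytes; 64-bit for large files
};

static_assert(sizeof(uint32_t) == 4, "uint32_t must be 4 bytes");
static_assert(sizeof(int64_t)  == 8, "int64_t must be 8 bytes");
```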

Automatic Mapping of Platform-dependent Datatypes
C and C++ define standard typedefs such as size_t, ptrdiff_t, fpos_t, time_t, etc., that abstract the underlying type of an object, thereby enabling the compiler to map them to the suitable native datatype automatically. Always use these typedefs in dually-targeted code. Take for example size_t: in ILP32 it’s defined as a 32-bit unsigned integer, whereas in LP64 it’s a 64-bit unsigned integer.
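A sketch of preferring size_t over unsigned int for object sizes, so the compiler picks the right width on each platform (buffer_length is an illustrative helper name):

```cpp
#include <cstddef>
#include <cstring>

// size_t adapts automatically: 32 bits wide under ILP32,
// 64 bits wide under LP64. An unsigned int would not.
std::size_t buffer_length(const char* s)
{
    return std::strlen(s);  // strlen itself returns size_t
}
```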

High-level libraries such as <iostream> usually offer a clean migration path because they use abstraction layers. For instance, the tellp() function returns a std::streampos object rather than an int. Likewise, fstream::write() is declared as follows:

typedef size_t streamsize;
basic_ostream& write(const char_type* s, streamsize n);

That said, you can still use fundamental types such as int and long as loop counters, array indexes, file descriptors, etc.

Pointer Issues
Several libraries cast pointers to integral types and vice versa. Examples of this include the standard signal() function:

typedef void (*handler)(int);
handler signal(int signum, handler h);

Here, handler may either be a pointer to a function or one of the two integral constants SIG_DFL and SIG_IGN. The POSIX library uses this idiom too in its dlsym() function. The problem is that a pointer’s size is platform-dependent; you certainly don’t want to store a 64-bit pointer in an int. The recommended approach is to use the intptr_t and uintptr_t typedefs as integral datatypes that can safely hold a pointer.

ABI Issues
An Application Binary Interface (ABI) specifies the binary representation of a programming language’s entities, including the name mangling scheme, memory layout, and the default alignment. Consider the following struct:

struct Record
{
 long idx;
 bool cached;
};

On a typical 32-bit system Record occupies eight bytes. However, on 64-bit systems its size increases to 12 or 16 bytes. Therefore, never assume that the size of a data structure is invariant. Even if it consists of fixed-width data members, the target platform’s alignment scheme may still affect its total size. Even abstract classes are subject to ABI issues:

class NetInterface
{
public:
 virtual int Connect(enum conn_type)=0;
 //..additional members
 virtual ~NetInterface()=0;
};

NetInterface contains no overt data members. Yet, like all polymorphic classes, it contains an implicit vptr whose size is platform-dependent. Similarly, the member functions’ mangled names are ABI-dependent. The upside is that accidentally linking 64-bit libraries with 32-bit object files (or vice versa) fails noisily.

I/O Formatting
The printf() family of functions uses format flags that control the output’s justification, notation, and width. Always use the “%p” flag for pointers, the l length modifier for long arguments, and lu for their unsigned counterparts. Check also that enough room is allowed for the output: ensure that char buffers are large enough to accommodate up to 20 characters for an unsigned long and 18 characters for a pointer printed in hexadecimal.
