Tuesday, August 22, 2017

GCC low-level IR and basic code generation

This is part three of a series “Writing a GCC back end”.

Compilers are usually described as having three parts – a front end that handles the source language, a middle end that optimizes the code using a target-independent representation, and a back end doing the target-specific code generation. GCC is no different in this regard – the front end generates a target-independent representation (GIMPLE) that is used when optimizing the code, and the result is passed to the back end, which converts it to its own representation (RTL) and generates the code for the program.

The back end IR

The back end’s internal representation of the code consists of a linked list of objects called insns. An insn corresponds roughly to an assembly instruction, but there are also insns representing labels, dispatch tables for switch statements, etc. Each insn is constructed as a tree of expressions, and is usually written in a Lisp-like syntax. For example,
(reg:m n)
is an expression representing a register access, and
(plus:m x y)
represents adding the expressions x and y. An insn adding two registers may combine these as
(set (reg:m r0) (plus:m (reg:m r1) (reg:m r2)))
The m in the expressions denotes a machine mode that defines the size and representation of the data object or operation in the expression. There are lots of machine modes, but the most common are
  • QI – “Quarter-Integer” mode represents a single byte treated as an integer.
  • HI – “Half-Integer” mode represents a two-byte integer.
  • SI – “Single Integer” mode represents a four-byte integer.
  • DI – “Double Integer” mode represents an eight-byte integer.
  • CC – “Condition Code” mode represents the value of a condition code (used to represent the result of a comparison operation).
GCC supports architectures where a byte is not 8 bits, but this blog series will assume 8-bit bytes (mostly as I often find it clearer to talk about a 32-bit value in examples, instead of a more abstract SImode value).
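As a rough illustration (a sketch assuming a typical 32-bit target with 8-bit bytes), these integer modes correspond to the usual C types:
/* Typical mapping between C types and machine modes on a 32-bit target
   (the exact mapping is defined by the target). */
char       c;   /* QImode - one byte    */
short      s;   /* HImode - two bytes   */
int        i;   /* SImode - four bytes  */
long long  ll;  /* DImode - eight bytes */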

Overview of the back end operation

The back end runs a number of passes over the IR, and GCC will output the resulting RTL for each pass when -fdump-rtl-all is passed to the compiler.

The back end starts by converting GIMPLE to RTL, and a small GIMPLE function such as
foo (int a)
{
  int _2;

  <bb 2> [100.00%]:
  _2 = a_1(D) * 42;
  return _2;
}
is expanded to
(note 1 0 4 NOTE_INSN_DELETED)
(note 4 1 2 2 [bb 2] NOTE_INSN_BASIC_BLOCK)
(insn 2 4 3 2 (set (reg/v:SI 73 [ a ])
        (reg:SI 10 a0 [ a ])) "foo.c":2 -1
     (nil))
(note 3 2 6 2 NOTE_INSN_FUNCTION_BEG)
(insn 6 3 7 2 (set (reg:SI 75)
        (const_int 42 [0x2a])) "foo.c":3 -1
     (nil))
(insn 7 6 8 2 (set (reg:SI 74)
        (mult:SI (reg/v:SI 73 [ a ])
            (reg:SI 75))) "foo.c":3 -1
     (nil))
(insn 8 7 12 2 (set (reg:SI 72 [ <retval> ])
        (reg:SI 74)) "foo.c":3 -1
     (nil))
(insn 12 8 13 2 (set (reg/i:SI 10 a0)
        (reg:SI 72 [ <retval> ])) "foo.c":4 -1
     (nil))
(insn 13 12 0 2 (use (reg/i:SI 10 a0)) "foo.c":4 -1
     (nil))
The generated RTL corresponds mostly to real instructions even at this early stage in the back end, but the generated code is inefficient and the registers are still virtual.
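For reference, the GIMPLE above comes from a C function of roughly this form (reconstructed from the dump):
int foo(int a)
{
  return a * 42;
}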

The next step is to run optimization passes on the RTL. These are the same kinds of optimization passes that have already been run on GIMPLE (constant folding, dead code elimination, simple loop optimizations, etc.), but they can do a better job with knowledge of the target architecture. For example, loop optimizations may transform loops to take better advantage of loop instructions and addressing modes, and dead code elimination may see that the operations working on the upper part of
foo (long long int a, long long int b)
{
  long long int _1;
  long long int _4;

  <bb 2> [100.00%]:
  _1 = a_2(D) + b_3(D);
  _4 = _1 & 255;
  return _4;
}
on a 32-bit architecture are not needed (as the upper half of the returned value is always 0) and can thus be eliminated.
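For reference, this GIMPLE corresponds to a C function of roughly the form:
long long foo(long long a, long long b)
{
  return (a + b) & 255;
}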

After this, instructions are combined and split into better instruction sequences by peephole optimizations, registers are allocated, and the instructions are scheduled. The resulting RTL dump contains all the information about which instructions have been selected and which registers have been allocated
(note 1 0 4 NOTE_INSN_DELETED)
(note 4 1 17 [bb 2] NOTE_INSN_BASIC_BLOCK)
(note 17 4 2 NOTE_INSN_PROLOGUE_END)
(note 2 17 3 NOTE_INSN_DELETED)
(note 3 2 7 NOTE_INSN_FUNCTION_BEG)
(note 7 3 15 NOTE_INSN_DELETED)
(insn 15 7 12 (set (reg:SI 15 a5 [75])
        (const_int 42 [0x2a])) "foo.c":4 132 {*movsi_internal}
     (expr_list:REG_EQUIV (const_int 42 [0x2a])
        (nil)))
(insn 12 15 13 (set (reg/i:SI 10 a0)
        (mult:SI (reg:SI 10 a0 [ a ])
            (reg:SI 15 a5 [75]))) "foo.c":4 15 {mulsi3}
     (expr_list:REG_DEAD (reg:SI 15 a5 [75])
        (nil)))
(insn 13 12 21 (use (reg/i:SI 10 a0)) "foo.c":4 -1
     (nil))
(note 21 13 19 NOTE_INSN_EPILOGUE_BEG)
(jump_insn 19 21 20 (simple_return) "foo.c":4 211 {simple_return}
     (nil)
 -> simple_return)
(barrier 20 19 16)
(note 16 20 0 NOTE_INSN_DELETED)

Instruction patterns

The target architecture’s instructions are described in machine.md using instruction patterns. A simple instruction pattern looks like
(define_insn "mulsi3"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (mult:SI (match_operand:SI 1 "register_operand" "r")
                 (match_operand:SI 2 "register_operand" "r")))]
  ""
  "mul\t%0,%1,%2")
which defines an insn named mulsi3 that generates a mul instruction for a 32-bit multiplication.

The first operand in the instruction pattern is a name that is used in debug dumps and when writing C++ code that generates RTL. For example,
emit_insn (gen_mulsi3 (dst, src1, src2));
generates a mulsi3 insn. The back end does not in general need to generate RTL, but the rest of GCC does, and the name tells GCC that it can use the pattern to accomplish a certain task. It is therefore important that the named patterns implement the functionality that GCC expects, but names starting with * are ignored and thus safe to use for non-standard instruction patterns. The back end should only implement the named patterns that make sense for the architecture – GCC will do its best to emit code for missing patterns using other strategies. For example, a 32-bit multiplication will be generated as a call to __mulsi3 in libgcc if the target does not have a mulsi3 instruction pattern.
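As a concrete sketch, a function doing a 32-bit multiplication only needs this one pattern – or, in its absence, the libgcc fallback:
/* With a mulsi3 instruction pattern this compiles to a single multiply
   instruction; without one, GCC emits a call to __mulsi3 in libgcc. */
int mul(int a, int b)
{
  return a * b;
}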

The next part of the instruction pattern is the RTL template that describes the semantics of the instruction that is generated by the insn
[(set (match_operand:SI 0 "register_operand" "=r")
      (mult:SI (match_operand:SI 1 "register_operand" "r")
               (match_operand:SI 2 "register_operand" "r")))]
This says that the instruction multiplies two registers and places the result in a register. This example is taken from RISC-V, which has a multiplication instruction without side effects, but some architectures (such as x86_64) set condition flags as part of the operation, and that needs to be expressed in the RTL template, as in
[(parallel [(set (match_operand:SI 0 "register_operand" "=r")
                 (mult:SI (match_operand:SI 1 "register_operand" "r")
                          (match_operand:SI 2 "register_operand" "r")))
            (clobber (reg:CC FLAGS_REG))])]
where the parallel expresses that the two operations are done as a unit.

The instruction’s operands are specified with expressions of the form
(match_operand:SI 1 "register_operand" "r")
consisting of four parts:
  • match_operand: followed by the machine mode of the operand.
  • The operand number, used as an identifier when referring to the operand.
  • A predicate telling what kind of operands are valid ("register_operand" means that it must be a register).
  • A constraint string describing the details of the operand ("r" means it must be a general register). These are the same constraints as are used in inline assembly (but the instruction patterns support additional constraints that are not allowed in inline assembly).
The predicate and constraint string contain similar information, but they are used in different ways:
  • The predicate is used when generating the RTL. As an example, when generating RTL for a GIMPLE function
    foo (int a)
    {
      int _2;
    
      <bb 2> [100.00%]:
      _2 = a_1(D) * 42;
      return _2;
    }
    
    the GIMPLE to RTL converter will generate the multiplication as a mulsi3 insn. The predicates are checked for the result _2 and the operands a_1(D) and 42, and it is determined that 42 is not valid, as only registers are allowed. GCC will therefore insert a movsi insn that moves the constant into a register.
  • The constraints are used when doing the register allocation and the final instruction selection. Many architectures have restrictions on the operands, such as m68k, which has 16 registers (a0–a7 and d0–d7) but allows only d0–d7 in a muls instruction. This is expressed by a constraint telling the register allocator that it must choose one of the registers d0–d7.

The string after the RTL template contains C++ code that disables the instruction pattern if it evaluates to false (an empty string is treated as true). This is used when there are different versions of the architecture, such as small CPUs that do not have multiplication instructions and more advanced cores that do. This is typically handled by letting machine.opt generate a global variable TARGET_MUL that can be set by options such as -mmul and -march, and by placing this variable in the condition string
(define_insn "mulsi3"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (mult:SI (match_operand:SI 1 "register_operand" "r")
                 (match_operand:SI 2 "register_operand" "r")))]
  "TARGET_MUL"
  "mul\t%0,%1,%2")
so that the instruction pattern is disabled (and the compiler thus generates a call to libgcc) when multiplication is not available.

The resulting instruction is emitted using the output template
"mul\t%0,%1,%2"
where %0, %1, ..., are substituted with the corresponding operands.

More about define_insn

Many architectures need more complex instruction handling than the RISC-V mul instruction described above, but define_insn is flexible enough to handle essentially all cases that occur for real CPUs.

Let’s say our target can multiply a register and an immediate integer and that this requires the first operand to be an even-numbered register, while multiplying two registers requires that the first operand is an odd-numbered register (this is not as strange as it may seem – some CPU designs use such tricks to save one bit in the instruction encoding). This is easily handled by defining a new predicate in predicates.md
(define_predicate "reg_or_int_operand"
  (ior (match_code "const_int")
       (match_operand 0 "register_operand")))
accepting a register or an integer, and new constraints in constraints.md that require an odd or an even register
(define_register_constraint "W" "ODD_REGS"
  "An odd-numbered register.")

(define_register_constraint "D" "EVEN_REGS"
  "An even-numbered register.")
where ODD_REGS and EVEN_REGS are register classes (see part four in this series). We can now use this in the instruction pattern
(define_insn "mulsi3"
  [(set (match_operand:SI 0 "register_operand" "=r,r")
        (mult:SI (match_operand:SI 1 "register_operand" "%W,D")
                 (match_operand:SI 2 "reg_or_int_operand" "r,i")))]
  ""
  "mul\t%0,%1,%2")
The constraint strings now list two alternatives – one for the register/register case and one for the register/integer case. A % character has also been added to tell the back end that the operation is commutative, so that the code generation may switch the order of the operands if the integer is the first operand. The commutativity also helps the register allocation for cases such as
_1 = _2 * 42;
_3 = _2 * _4;
where it may swap the operands on the second line to avoid inserting an extra move (as _2 would otherwise need to be in an even register on the first line and an odd register on the second line).

Sometimes the different alternatives need to generate different instructions, such as the instruction multiplying two registers being called mulr and the multiplication with an integer being called muli. This can be handled by starting the output template string with an @ character and listing the different alternatives in the same order as in the constraint strings
(define_insn "mulsi3"
  [(set (match_operand:SI 0 "register_operand" "=r,r")
        (mult:SI (match_operand:SI 1 "register_operand" "%W,D")
                 (match_operand:SI 2 "reg_or_int_operand" "r,i")))]
  ""
  "@
   mulr\t%0,%1,%2
   muli\t%0,%1,%2")
Finally, it is possible to write general C++ code that is run when outputting the instruction, so the previous example could have been written as
(define_insn "mulsi3"
  [(set (match_operand:SI 0 "register_operand" "=r,r")
        (mult:SI (match_operand:SI 1 "register_operand" "%W,D")
                  (match_operand:SI 2 "reg_or_int_operand" "r,i")))]
  ""
  {
    return which_alternative == 0 ? "mulr\t%0,%1,%2" : "muli\t%0,%1,%2";
  })
This is usually not that useful for define_insn but may reduce the number of instruction patterns when the instruction names depend on the configuration. For example, mulsi3 in the RISC-V back end must generate mulw in 64-bit mode and mul in 32-bit mode, which is implemented as
(define_insn "mulsi3"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (mult:SI (match_operand:SI 1 "register_operand" "r")
                 (match_operand:SI 2 "register_operand" "r")))]
  ""
  { return TARGET_64BIT ? "mulw\t%0,%1,%2" : "mul\t%0,%1,%2"; })
where TARGET_64BIT is a global variable defined in riscv.opt.

Further reading

This blog post has only scratched the surface of the RTL and machine description functionality, but everything is documented in “GNU Compiler Collection Internals”.

Sunday, August 13, 2017

Getting started with a GCC back end

This is part two of a series “Writing a GCC back end”.

Most CPU architectures have a common subset – they have instructions doing arithmetic and bit operations on a few general registers, an instruction that can write a register to memory, and an instruction that can read from memory and place the result in a register. It is therefore easy to make a compiler that can compile simple straight-line functions by taking an existing back end and restricting it to this common subset. This is enough to start running the test suite, and it is then straightforward to address one deficiency at a time (adding additional instructions, addressing modes, ABI, etc.).

My original thought was that the RISC-V back end would be a good choice as a starting point – the architecture is fully documented, and it is a new, actively maintained back end that does not use legacy APIs. But the RISC-V back end has lots of functionality (such as multiple ISA profiles, 32- and 64-bit modes, position-independent code, exception handling, and debug information), and the work of reducing it became unnecessarily complicated when I tried...

I now think it is better to start from one of the minimal back ends, such as the back end for the Moxie architecture. Moxie seems to be a good choice as there is also a blog series “How To Retarget the GNU Toolchain in 21 Patches” describing step-by-step how it was developed. The blog series is old, but GCC has a very stable API, so it is essentially the same now (I once updated a GCC backend from GCC 4.3 to GCC 4.9, which were released 6 years apart, and only a few lines needed to be modified...).

One thing missing from the Moxie blog series is how to build the compiler and how to configure and run the test-suite, but I blogged about that a while back in “Running the GCC test-suite for epiphany-sim”.

Sunday, August 6, 2017

The structure of a GCC back end

This is part one of a series “Writing a GCC back end”.

The GCC back end is configured in gcc/config.gcc, and the implementation is placed in directories machine under gcc/config and gcc/common/config, where “machine” is the name of the back end (for example, i386 for the x86 architecture).

The back end places some functionality in libgcc. For example, architectures that do not have an instruction for integer division will instead generate a call to a function __divsi3 in libgcc. libgcc is configured in libgcc/config.host and target-specific files are located in a directory machine under libgcc/config.

gcc/config.gcc

config.gcc is a shell script that parses the target string (e.g. x86_64-linux-gnu) and sets variables pointing out where to find the rest of the back end and how to compile it. The variables that can be set are documented at the top of the config.gcc file.

The only variable that must be set is cpu_type, which specifies machine. Most targets also set extra_objs, which specifies extra object files that should be linked into the compiler, tmake_file, which contains makefile fragments that compile those extra objects (or set makefile variables modifying the build), and tm_file, which adds header files containing target-specific information.

A typical configuration for a simple target (such as ft32-unknown-elf) looks something like
cpu_type=ft32
tm_file="dbxelf.h elfos.h newlib-stdint.h ${tm_file}"

gcc/config/machine

The main part of the back end is located in gcc/config/machine. It consists of eight different components, each implemented in a separate file:
  • machine.h is included all over the compiler and contains macros defining properties of the target, such as the size of integers and pointers, number of registers, data alignment rules, etc.
  • GCC implements a generic backend where machine.c can override most of the functionality. The backend is written in C,1 so the virtual functions are handled manually with function pointers in a structure, and machine.c overrides the defaults using code of the form
    #undef TARGET_FRAME_POINTER_REQUIRED
    #define TARGET_FRAME_POINTER_REQUIRED ft32_frame_pointer_required
    static bool
    ft32_frame_pointer_required (void)
    {
      return cfun->calls_alloca;
    }
    
  • machine-protos.h contains prototypes for the external functions defined in machine.c.
  • machine.opt adds target-specific command-line options to the compiler using a record format specifying the option name, properties, and a documentation string for the --help output. For example,
    msmall-data-limit=
    Target Joined Separate UInteger Var(g_switch_value) Init(8)
    -msmall-data-limit=N    Put global and static data smaller than <number> bytes into a special section.
    
    This adds a command-line option -msmall-data-limit with a default value of 8, whose value is generated as an unsigned variable named g_switch_value (a sketch of how such a variable is typically used follows after this list).
  • machine.md, predicates.md, and constraints.md contain the machine description consisting of rules for instruction selection and register allocation, pipeline description, and peephole optimizations. These will be covered in detail in parts 3–7 of this series.
  • machine-modes.def defines extra machine modes for use in the low-level IR (a “machine mode” in GCC terminology defines the size and representation of a data object – that is, it is a data type). This is typically used for condition codes and vectors.
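As a sketch of how a generated option variable such as g_switch_value is typically consumed in machine.c (the hook TARGET_IN_SMALL_DATA_P is real, but this particular use is illustrative rather than taken from a specific back end):
#undef  TARGET_IN_SMALL_DATA_P
#define TARGET_IN_SMALL_DATA_P machine_in_small_data_p

/* Hypothetical hook: place objects no larger than the -msmall-data-limit=
   value into the small-data section.  */
static bool
machine_in_small_data_p (const_tree decl)
{
  HOST_WIDE_INT size = int_size_in_bytes (TREE_TYPE (decl));
  return size > 0 && size <= (HOST_WIDE_INT) g_switch_value;
}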
The GCC configuration is very flexible and everything can be overridden, so some back ends look slightly different as they, for example, add several .opt files by setting extra_options in config.gcc.

gcc/common/config/machine

The gcc/common/config/machine directory contains a file machine-common.c that can add/remove optimization passes, change the defaults for --param values, etc.

Many back ends do not need to do anything here, and this file can be disabled by setting
target_has_targetm_common=no
in config.gcc.

libgcc/config.host

The libgcc config.host works in the same way as config.gcc, but with different variables.

The only variable that must be set is cpu_type, which specifies machine. Most targets also set extra_parts, which specifies extra object files to include in the library, and tmake_file, which contains makefile fragments that add extra functionality (such as soft-float support).

A typical configuration for a simple target looks something like
cpu_type=ft32
tmake_file="$tmake_file t-softfp"
extra_parts="$extra_parts crti.o crtn.o crtbegin.o crtend.o"

libgcc/config/machine

The libgcc/config/machine directory contains extra files that may be needed for the target architecture. Simple implementations typically only contain crti.S and crtn.S (crtbegin/crtend and the makefile support for building all of these have default implementations) and a file sfp-machine.h containing defaults for the soft-float implementation.


1. GCC is written in C++03 these days, but the structure has not been changed since it was written in C.

Friday, August 4, 2017

Writing a GCC back end

It is surprisingly easy to design a CPU (see for example Colin Riley’s blog series) and I was recently asked how hard it is to write a GCC back end for your new architecture. That too is easy — provided you have done it once before. But the first time is quite painful...

I plan to write some blog posts over the coming weeks that will try to ease the pain by showing what is involved in creating a “working” back end capable of compiling simple functions, give some pointers on how to proceed to make this production-ready, and in general provide the overview I would have liked before I started developing my back end (GCC has a good reference manual, “GNU Compiler Collection Internals”, describing everything you need to know, but it is a bit overwhelming when you start...)

The series will cover the following (I’ll update the list with links to the posts as they become available):
  1. The structure of a GCC back end
    • Which files you need to write/modify
  2. Getting started with a GCC back end
    • Pointers to resources describing how to set up the initial back end
  3. Low-level IR and basic code generation
    • How the low-level IR works
    • How the IR is lowered to instructions
    • How to write simple instruction patterns
  4. Target macros
    • Size/number of registers
    • Register classes and allocation order
    • ABI
    • ...
  5. More advanced instruction patterns
    • “All” functionality of the instruction patterns definitions
    • Examples of how to use this functionality
  6. Peephole optimizations, etc.
  7. Pipeline description
  8. Cost model
  9. ...

Monday, July 24, 2017

Phoronix SciMark benchmarking results

Phoronix recently published an article “Ryzen Compiler Performance: Clang 4/5 vs. GCC 6/7/8 Benchmarks”, and there are many results in that article that surprise me...

One such result is the one for SciMark, which shows that GCC generates much slower code than LLVM – there is a big difference in several tests, and the composite score is 25% lower. I do not have a Ryzen CPU to test on, but my testing on Broadwell shows very little difference between GCC and LLVM when SciMark is compiled with -O3 -march=x86-64 as in the article, and the Ryzen microarchitecture should not introduce that big a difference. And the reported numbers seem low...

The Phoronix testing also shows strange performance variations between different GCC versions that I don’t see in my testing – I see a performance increase for each newer version of the compiler.

The published test results are produced by running scripts available at OpenBenchmarking.org, and looking at the build script for SciMark shows that it is built as
cc $CFLAGS -o scimark2 -O *.c -lm
Note the -O – this overrides the optimization level set by $CFLAGS and explains at least some of the discrepancies in the test results.1 GCC maps -O to the -O1 optimization level, which is meant to be a good choice while developing – it optimizes the code, but focuses as much on fast compilation time and good debug information as on producing fast code. LLVM maps -O to -O2, which is a “real” optimization level that prioritizes performance, so it is not surprising that LLVM produces faster code in this case.

So the benchmarking result does not show what is intended, and both compilers can do better than what the test results show...


1. I get similar results as the article when I use -O, but my result for FFT is very different...

Thursday, July 20, 2017

A load/store performance corner case

I have recently seen a number of “is X faster than Y?” discussions where micro benchmarks are used to determine the truth. But performance measuring is hard and may depend on seemingly irrelevant details...

Consider for example this code calculating a histogram
int histogram[256];

void calculate_histogram(unsigned char *p, int len)
{
  memset(histogram, 0, sizeof(histogram));
  for (int i = 0; i < len; i++)
    histogram[p[i]]++;
}
The performance “should not” depend on the distribution of the values in the buffer p, but running this on a buffer with all bytes set to 0 and on a buffer with random values gives me the following results (using the Google benchmark library and this code)
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_cleared/4096       7226 ns       7226 ns      96737
BM_random/4096        2049 ns       2049 ns     343001
That is, running on random data is 3.5x faster than running on all-zero data! The reason for this is that loads and stores are slow, and the CPU tries to improve performance by executing later instructions out of order. But it cannot proceed with a load before the previous store to that address has been done,1 which slows down progress when all loop iterations read and write the same memory address histogram[0].

This is usually not much of a problem for normal programs, as they have more instructions that can be executed out of order, but it is easy to trigger this kind of CPU corner case when trying to measure the performance of small code fragments, which results in the benchmark measuring something other than intended. Do not trust benchmark results unless you can explain the performance and know how it applies to your use case...
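For reference, a measurement harness for this kind of experiment using the Google benchmark library might look roughly like the following sketch (an illustration, not the exact code linked above):
#include <benchmark/benchmark.h>
#include <cstdlib>
#include <vector>

void calculate_histogram(unsigned char *p, int len);  // defined above

static void BM_cleared(benchmark::State &state) {
  std::vector<unsigned char> buf(state.range(0), 0);  // all bytes zero
  for (auto _ : state)
    calculate_histogram(buf.data(), buf.size());
}
BENCHMARK(BM_cleared)->Arg(4096);

static void BM_random(benchmark::State &state) {
  std::vector<unsigned char> buf(state.range(0));
  for (auto &b : buf)
    b = std::rand() & 0xff;                           // random bytes
  for (auto _ : state)
    calculate_histogram(buf.data(), buf.size());
}
BENCHMARK(BM_random)->Arg(4096);

BENCHMARK_MAIN();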


1. The CPU does “store to load forwarding” that saves cycles by enabling the load to obtain the data directly from the store operation instead of through memory, but it still comes with a cost of a few cycles.

Tuesday, July 4, 2017

Strict aliasing in C90 vs. C99 – and how to read the C standard

I often see claims that the strict aliasing rules were introduced in C99, but that is not true – the relevant part of the standard is essentially the same for C90 and C99. Some compilers used the strict aliasing rules for optimization well before 1999, as was noted in this 1998 post to the GCC mailing list (which argues that enabling strict aliasing will not cause many problems, as most software has already fixed its strict aliasing bugs to work with those other compilers...)

C99 – 6.5 Expressions

The C standard does not talk about “strict aliasing rules”, but they follow from the text in “6.5 Expressions”:
An object shall have its stored value accessed only by an lvalue expression that has one of the following types:73
  • a type compatible with the effective type of the object,
  • a qualified version of a type compatible with the effective type of the object,
  • a type that is the signed or unsigned type corresponding to the effective type of the object,
  • a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object,
  • an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or
  • a character type.

73 The intent of this list is to specify those circumstances in which an object may or may not be aliased.
Note the footnote that says that the intention of these rules is to let the compiler determine that objects are not aliased (and thus be able to optimize more aggressively).
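As a concrete illustration (a sketch, not an example from the standard), the first function below violates these rules while the second is always allowed:
#include <string.h>

/* Violates the rules above: a float object is read through an lvalue of
   type unsigned int, so the compiler may assume the access does not
   alias any float object. */
unsigned int to_bits_bad(float *f)
{
  return *(unsigned int *)f;
}

/* Allowed: memcpy accesses the object through character types, which the
   last bullet point permits for any object. */
unsigned int to_bits_ok(float *f)
{
  unsigned int u;
  memcpy(&u, f, sizeof u);
  return u;
}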

C90 – 6.3 Expressions

The corresponding text in C90 is located in “6.3 Expressions”:
An object shall have its stored value accessed only by an lvalue that has one of the following types:36
  • the declared type of the object,
  • a qualified version of the declared type of the object,
  • a type that is the signed or unsigned type corresponding to the declared type of the object,
  • a type that is the signed or unsigned type corresponding to a qualified version of the declared type of the object,
  • an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or
  • a character type.

36 The intent of this list is to specify those circumstances in which an object may or may not be aliased.
It is similar to the text in C99, and it even has the footnote that says it is meant to be used to determine if an object may be aliased or not, so C90 allows optimizations using the strict aliasing rules.

But standards have bugs, and those can be patched by publishing technical corrigenda, so it is not enough to read the published standard to see what is/is not allowed. There are two technical corrigenda published for C90 (ISO/IEC 9899 TCOR1 and ISO/IEC 9899 TCOR2), and TCOR1 updates the first two bullet points. The corrected version of the standard says
An object shall have its stored value accessed only by an lvalue that has one of the following types:36
  • a type compatible with the declared type of the object,
  • a qualified version of a type compatible with the declared type of the object,
  • a type that is the signed or unsigned type corresponding to the declared type of the object,
  • a type that is the signed or unsigned type corresponding to a qualified version of the declared type of the object,
  • an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union), or
  • a character type.

36 The intent of this list is to specify those circumstances in which an object may or may not be aliased.
The only difference compared to C99 is that it does not talk about the effective type, which makes it unclear how malloc:ed memory is handled, as it does not have a declared type. This is discussed in defect report DR 28, which asks if it is allowed to optimize
void f(int *x, double *y) {
  *x = 0;
  *y = 3.14;
  *x = *x + 2;
} 
to
void f(int *x, double *y) {
  *x = 0;
  *y = 3.14;
  *x = 2; /* *x known to be zero */
}
if x and y point to malloc:ed memory, and the committee answered (citing the bullet point list from 6.3)
We must take recourse to intent. The intent is clear from the above two citations and from Footnote 36 on page 38: The intent of this list is to specify those circumstances in which an object may or may not be aliased.
Therefore, this alias is not permitted and the optimization is allowed.
In summary, yes, the rules do apply to dynamically allocated objects.
That is, the allocated memory gets its declared type when written and the subsequent reads must be done following the rules in the bullet-point list, which is essentially the same as what C99 says.

One difference between C90 and C99

There is one difference between the C90 and C99 strict aliasing rules in how unions are handled – C99 allows type-punning using code such as
union a_union {
  int i;
  float f;
};

int f() {
  union a_union t;
  t.f = 3.0;
  return t.i;
}
while this is implementation-defined in C90 per 6.3.2.3
[...] if a member of a union object is accessed after a value has been stored in a different member of the object, the behavior is implementation-defined.

Reading the standard

Language lawyering is a popular sport on the internet, but it is a strange game where often the only winning move is not to play. Take for example DR 258 where the committee is asked about a special case in macro-expansion that is unclear. The committee answers
The standard does not clearly specify what happens in this case, so portable programs should not use these sorts of constructs.
That is, unclear parts of the standard should be avoided – not language-lawyered into saying what you want.

And the committee is pragmatic; DR 464 is a case where the defect report asks to add an example for a construct involving the #line directive that some compilers get wrong, but the committee thought it was better to make it unspecified behavior
Investigation during the meeting revealed that several (in fact all that were tested) compilers did not seem to follow the interpretation of the standard as given in N1842, and that it would be best to acknowledge this as unspecified behavior.
So just because the standard says something does not mean that it is the specified behavior. One other fun example of this is DR 476 where the standard does not make sense with respect to the behavior of volatile:
All implementors represented on the committee were polled and all confirmed that indeed, the intent, not the standard, is implemented. In addition to the linux experience documented in the paper, at least two committee members described discussions with systems engineers where this difference between the standard vs the implementation was discussed because the systems engineers clearly depended on the implementation of actual intent. The sense was that this was simply a well known discrepency.