Here is a comparison of run-times of a C code
that integrates the complex Ginzburg-Landau equation (CGLe)
using 256 points in space and 50000*50 steps in time.
The time interval is 0.01, and the CGLe parameters are ….
An output file is created that contains 50000*256*2 floats, i.e.,
the file size is 50000*256*2*4 = 102,400,000 bytes.
The source code is always the same; I varied the compiler and/or the compiler options, as well as the machine and the OS.
Here are the results:
| processor | OS / platform | compiler | options | time (s) |
|---|---|---|---|---|
| Celeron 800 (wild) | Linux RH 7.2 | GCC 2.96 (2000/07/31) | -O2 | 333 (n) |
| PIII 800 (light) | Linux RH 7.2 | GCC 2.96 (2000/07/31) | -O2 | 274 (l), 374 (n) |
| bi-PIII (zero) | Linux RH 7.2 | GCC 2.96 (2000/07/31) | -O2 | 263 (l) |
| bi-alpha (hard) | Compaq Tru64 Unix V | Compaq C V6.3-026 | -O2 | 252 (n) |
| PIII 450 coppermine | Win2k + DJGPP 2.03 | GCC 3.1 | -O2 | |
| PIII 450 coppermine | Win2k + Cygwin | GCC 3.1 | -O2 | 478 (l) |
| PIII 450 coppermine | Win2k + mingw | GCC 3.1 | -O2 | |
Notes: