BENCHMARK DETAILS

  1. Computer System: Linux Cluster
    1. Vendor: ACT
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 2.2 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: 970_3979
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1711
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: NSC, Linköping, Sweden
  12. Submitted by: Larsgunnar Nilsson
  13. Submitter Organization: ARUP
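Field 8 of every record above and below derives the total CPU count as the product of nodes, processors per node, and cores per processor. A minimal Python sketch of that arithmetic (the function name is illustrative, not part of the listing):

    # Total CPU count as defined by field 8 of each record:
    # #Nodes x #Processors per Node x #Cores Per Processor
    def total_cpus(nodes, processors_per_node, cores_per_processor):
        return nodes * processors_per_node * cores_per_processor

    # The ACT Linux Cluster entry above: 64 nodes x 1 processor x 1 core.
    assert total_cpus(64, 1, 1) == 64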
  1. Computer System: HP-UX Itanium2 Cluster / GigE
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1041
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: HP-UX Itanium2 Cluster / GigE
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1534
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: HP-UX Itanium2 Cluster / GigE
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2550
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: HP-UX Itanium2 Cluster / GigE
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 4846
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: HP-UX Itanium2 Cluster / GigE
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 9397
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: HP-UX Itanium2 Cluster / GigE
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 17489
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
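The six HP-UX Itanium2 Cluster / GigE entries above are the same system run on 32, 16, 8, 4, 2, and 1 total CPUs, so they form a scaling series. Assuming the wall clock figures are in seconds, speedup and parallel efficiency follow directly; a short Python sketch using the times reported above (variable names are illustrative):

    # Wall clock times from the HP-UX Itanium2 / GigE entries above,
    # keyed by total CPU count.
    times = {1: 17489, 2: 9397, 4: 4846, 8: 2550, 16: 1534, 32: 1041}

    t_serial = times[1]
    for cpus, t in sorted(times.items()):
        speedup = t_serial / t       # S(n) = T(1) / T(n)
        efficiency = speedup / cpus  # E(n) = S(n) / n
        print(f"{cpus:3d} CPUs: speedup {speedup:5.2f}, efficiency {efficiency:.0%}")

At 32 CPUs the speedup is about 16.8, i.e. roughly 52% parallel efficiency.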
  1. Computer System: IBM p655
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: POWER4+, 1.7 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 9701
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM p655
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: POWER4+, 1.7 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 4
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 5210
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM p655
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: POWER4+, 1.7 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 4
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2635
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM p655
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: POWER4+, 1.7 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 4
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1418
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM p655
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: POWER4+, 1.7 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 4
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 885
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM p655
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: POWER4+, 1.7 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 4
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 667
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: Blue Horizon
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: PowerPC_POWER3
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_d_3535
  4. Benchmark problem: neon_refined
  5. Wall clock time: 12285
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Double
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: SDSC
  12. Submitted by: Franck Grignon
  13. Submitter Organization: UCSD
  1. Computer System: Blue Horizon
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: PowerPC_POWER3
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_d_3535
  4. Benchmark problem: neon_refined
  5. Wall clock time: 6610
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Double
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: SDSC
  12. Submitted by: Franck Grignon
  13. Submitter Organization: UCSD
  1. Computer System: Blue Horizon
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: PowerPC_POWER3
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_d_3535
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3993
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Double
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: SDSC
  12. Submitted by: Franck Grignon
  13. Submitter Organization: UCSD
  1. Computer System: Blue Horizon
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: PowerPC_POWER3
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_d_3535
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2734
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Double
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: SDSC
  12. Submitted by: Franck Grignon
  13. Submitter Organization: UCSD
  1. Computer System: Blue Horizon
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: PowerPC_POWER3
    5. Number of nodes: 128
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: AIX 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_d_3535
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1997
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Double
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: SDSC
  12. Submitted by: Franck Grignon
  13. Submitter Organization: UCSD
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: HP-UX
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 15365
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 8172
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 4040
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2088
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1198
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 734
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 570
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 456
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 13298
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 6931
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3669
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1986
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1161
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 13433
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 7004
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3885
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2186
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: IBM x335
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 3.066 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 3.0 WS
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1422
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, NY, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: Linux Cluster
    1. Vendor: Self-made (SKIF program)
    2. CPU Interconnects: 3D SCI
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 2.8GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat Linux 8.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1150
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: United Institute of Informatics Problems, Minsk
  12. Submitted by: Oleg Tchij
  13. Submitter Organization: United Institute of Informatics Problems, Minsk
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SuSE SLES8 SP3 with Scali MPI
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5412
  4. Benchmark problem: neon_refined
  5. Wall clock time: 7155
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE SLES8 SP3 with MPI/Pro
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5455
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3967
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE SLES8 SP3 with MPI/Pro
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5455
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2196
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE SLES8 SP3 w MPI/Pro
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5455
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1267
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 24
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5455
  4. Benchmark problem: neon_refined
  5. Wall clock time: 969
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5412
  4. Benchmark problem: neon_refined
  5. Wall clock time: 836
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 48
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5412
  4. Benchmark problem: neon_refined
  5. Wall clock time: 702
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5412
  4. Benchmark problem: neon_refined
  5. Wall clock time: 584
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Athlon Cluster
    1. Vendor: Appro
    2. CPU Interconnects: 100 Mbit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Athlon MP 2000+
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RedHat Enterprise Linux 3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 7188
  6. RAM per CPU: 512
  7. RAM Bus Speed: 2100
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Troy, MI USA
  12. Submitted by: Joshua Weage
  13. Submitter Organization: ARUP
  1. Computer System: Athlon Cluster
    1. Vendor: Appro
    2. CPU Interconnects: 100 Mbit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Athlon MP 2000+
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: RedHat Enterprise Linux 3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 11991
  6. RAM per CPU: 512
  7. RAM Bus Speed: 2100
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Troy, MI USA
  12. Submitted by: Joshua Weage
  13. Submitter Organization: ARUP
  1. Computer System: Athlon Cluster
    1. Vendor: Appro
    2. CPU Interconnects: 1 Gbit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Athlon MP 2600+
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: White Box Enterprise Linux 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 6173
  6. RAM per CPU: 1024
  7. RAM Bus Speed: 2100
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Troy, MI USA
  12. Submitted by: Joshua Weage
  13. Submitter Organization: ARUP
  1. Computer System: HP zv5000z Pavilion Laptop
    1. Vendor: HP
    2. CPU Interconnects: HT
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Athlon64 Mobile processor
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: SUSE 9.1 Linux with latest updates as of 8/13/200
  2. Code Version: LS-DYNA
  3. Code Version Number: AMD64 mpp5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 14373
  6. RAM per CPU: 1000
  7. RAM Bus Speed: 2000
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, California
  12. Submitted by: Tim Wilkens
  13. Submitter Organization: HP
  1. Computer System: Linux Cluster
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Intel
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Linux RedHat 7.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970_3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 15911
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Luleå, Sweden
  12. Submitted by: Frank Englund
  13. Submitter Organization: SSAB HardTech
  1. Computer System: VALUESTAR-TZ
    1. Vendor: NEC
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Athlon64 2.2GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: SuSE Linux Professional 9.1 for AMD64
  2. Code Version: LS-DYNA
  3. Code Version Number: 5434 for AMD64
  4. Benchmark problem: neon_refined
  5. Wall clock time: 12866
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: ACS, Ibaraki, Japan
  12. Submitted by: Yo Yamagata
  13. Submitter Organization: ACS
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 828
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1224
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2108
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3965
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 7831
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1141
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1568
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2454
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 4580
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e325 cluster
    1. Vendor: IBM
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron, 2.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SuSE SLES-8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 7851
  6. RAM per CPU: 6
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York, USA
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: VALUESTAR-TZ
    1. Vendor: NEC
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Lam-6.5.9
    4. Processor: Athlon64 2.2GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SuSE Linux Professional 9.1 for AMD64
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 rev.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 6605
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: ACS, Ibaraki, Japan
  12. Submitted by: Yo Yamagata
  13. Submitter Organization: ACS
  1. Computer System: Xeon Desktop
    1. Vendor: Dell
    2. CPU Interconnects: GigE
    3. MPI Library: Information Not Provided
    4. Processor: Xeon 3.4GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Windows XP
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2725
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: ShenZhen, China
  12. Submitted by: Li Hui
  13. Submitter Organization: ShenZhen Foxconn Co., China
  1. Computer System: Xeon Desktop
    1. Vendor: Dell
    2. CPU Interconnects: GigE
    3. MPI Library: Information Not Provided
    4. Processor: Xeon 3.4GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Windows XP
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2685
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: ShenZhen, China
  12. Submitted by: Li Hui
  13. Submitter Organization: ShenZhen Foxconn Co., China
  1. Computer System: RLX 3000ix ServerBlade
    1. Vendor: RLX
    2. CPU Interconnects: Voltaire InfiniBand
    3. MPI Library: MVAPICH
    4. Processor: Intel Xeon 3.06GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 Update 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1716
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Billerica, MA
  12. Submitted by: Eric Dube
  13. Submitter Organization: Voltaire
  1. Computer System: Opteron Cluster
    1. Vendor: Self-made (SKIF program)
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Information Not Provided
    4. Processor: 2.2 GHz AMD Opteron 248
    5. Number of nodes: 35
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 35 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 591
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: United Institute of Informatics Problems, Minsk
  12. Submitted by: Oleg Tchij
  13. Submitter Organization: United Institute of Informatics Problems, Minsk
  1. Computer System: RLX 3000ix ServerBlade
    1. Vendor: RLX
    2. CPU Interconnects: Voltaire InfiniBand
    3. MPI Library: MVAPICH
    4. Processor: Intel Xeon 3.06GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 Update 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3150
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Billerica, MA
  12. Submitted by: Eric Dube
  13. Submitter Organization: Voltaire
  1. Computer System: RLX 3000ix ServerBlade
    1. Vendor: RLX
    2. CPU Interconnects: Voltaire InfiniBand
    3. MPI Library: MVAPICH
    4. Processor: Intel Xeon 3.06GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 Update 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 5944
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Billerica, MA
  12. Submitted by: Eric Dube
  13. Submitter Organization: Voltaire
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 6902
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3538
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1872
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1101
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 688
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 421
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 519
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
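
The seven CP6000 runs above sweep the same neon_refined model from 2 to 64 CPUs, so relative speedup and parallel efficiency can be read straight off the wall-clock times. A short sketch, using the 2-CPU run as the baseline (the wall-clock numbers are copied from the entries above; the helper code is ours):

      # Itanium2 CP6000 series: total CPUs -> wall clock time
      times = {2: 6902, 4: 3538, 8: 1872, 16: 1101, 32: 688, 48: 519, 64: 421}
      base = min(times)  # 2-CPU run as the reference
      for n in sorted(times):
          speedup = times[base] / times[n]
          efficiency = speedup * base / n
          print(f"{n:3d} CPUs: speedup {speedup:5.2f}x, efficiency {efficiency:.0%}")
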
  1. Computer System: RLX 3000ix ServerBlade
    1. Vendor: RLX
    2. CPU Interconnects: Voltaire InfiniBand
    3. MPI Library: MVAPICH
    4. Processor: Intel Xeon 3.06GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 Update 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 11682
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Billerica, MA
  12. Submitted by: Eric Dube
  13. Submitter Organization: Voltaire
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 7140
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3901
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1950
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1185
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 797
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 6494
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 12713
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3389
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1795
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 969
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 740
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 607
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 429
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 455
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 405
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 380
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
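
The XD1 series above contains three separate 64-CPU submissions (wall clock 429, 405 and 380). One way to tabulate a scaling curve from such repeats is to keep only the fastest run per CPU count; a minimal sketch (data copied from the 2.2 GHz XD1 entries above):

      # (total CPUs, wall clock) pairs from the 2.2 GHz XD1 entries above
      runs = [(1, 12713), (2, 6494), (4, 3389), (8, 1795), (16, 969),
              (24, 740), (32, 607), (48, 455), (64, 429), (64, 405), (64, 380)]
      best = {}
      for cpus, wall in runs:
          best[cpus] = min(wall, best.get(cpus, wall))
      print(best[64])  # -> 380
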
  1. Computer System: HP ProLiant dl360 G3
    1. Vendor: Hewlett-Packard
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: LAM 6.5.9
    4. Processor: Intel Xeon
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 WS update 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3210
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: West Sussex, UK
  12. Submitted by: Daniel Challen
  13. Submitter Organization: OCSL
  1. Computer System: AMD64 Cluster
    1. Vendor: Engineering Research Nordic AB
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: LAM MPI
    4. Processor: AMD Athlon 64 3500+ (2.2 GHz)
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Fedora Core 3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3578
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Linköping, Sweden
  12. Submitted by: Daniel Hilding
  13. Submitter Organization: Engineering Research Nordic AB, www.erab.se
  1. Computer System: AMD64 Cluster
    1. Vendor: Engineering Research Nordic AB
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: LAM MPI
    4. Processor: AMD Athlon 64 3500+ (2.2 GHz)
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Fedora Core 3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2493
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Linköping, Sweden
  12. Submitted by: Daniel Hilding
  13. Submitter Organization: Engineering Research Nordic AB, www.erab.se
  1. Computer System: AMD64 Cluster
    1. Vendor: Engineering Research Nordic AB
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: LAM MPI
    4. Processor: AMD Athlon 64 3500+ (2.2 GHz)
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Fedora Core 3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 12955
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Linköping, Sweden
  12. Submitted by: Daniel Hilding
  13. Submitter Organization: Engineering Research Nordic AB, www.erab.se
  1. Computer System: ClusterOnDemand
    1. Vendor: Tsunamic Technologies Inc.
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron(tm) Processor 246
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1697
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: http://www.clusterondemand.com
  12. Submitted by: Dr. Kevin Van Workum
  13. Submitter Organization: Tsunamic Technologies Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 11599
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 5977
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3160
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1647
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 902
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 674
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 541
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 399
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 327
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 48
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 268
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
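
The 2.4 GHz XD1 series above covers the same CPU counts as the earlier 2.2 GHz XD1 series, so the effect of the clock bump can be checked with simple ratios. A sketch (wall-clock values copied from the respective entries, best run per count; whether the gap reflects clock alone or also memory and interconnect effects cannot be told from these records):

      # Best wall clock per CPU count, XD1 2.2 GHz vs 2.4 GHz Opteron
      t22 = {1: 12713, 2: 6494, 8: 1795, 32: 607, 64: 380}
      t24 = {1: 11599, 2: 5977, 8: 1647, 32: 541, 64: 327}
      for n in sorted(t22):
          print(f"{n:3d} CPUs: t(2.2 GHz) / t(2.4 GHz) = {t22[n] / t24[n]:.2f}")
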
  1. Computer System: e326
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: mpich-gm
    4. Processor: Opteron, 2.4 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE LINUX 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 452
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e326
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: mpich-gm
    4. Processor: Opteron, 2.4 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE LINUX 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 586
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e326
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: mpich-gm
    4. Processor: Opteron, 2.4 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE LINUX 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 911
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e326
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: mpich-gm
    4. Processor: Opteron, 2.4 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SUSE LINUX 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1597
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e326
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: mpich-gm
    4. Processor: Opteron, 2.4 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SUSE LINUX 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3004
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: e326
    1. Vendor: IBM
    2. CPU Interconnects: Myrinet
    3. MPI Library: mpich-gm
    4. Processor: Opteron, 2.4 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SUSE LINUX 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 5689
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 244
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
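
Every record in this archive follows the same "n. Field: value" layout, so the listing can be parsed mechanically. A minimal sketch (the regex and function are ours; field names are taken verbatim from the records):

      import re

      def parse_record(lines):
          """Turn one benchmark record's lines into a field -> value dict."""
          rec = {}
          for line in lines:
              m = re.match(r"\s*\d+\.\s*([^:]+):\s*(.*)", line)
              if m:
                  rec[m.group(1).strip()] = m.group(2).strip()
          return rec

      entry = parse_record(["1. Vendor: CRAY Inc.", "5. Wall clock time: 244"])
      print(entry["Wall clock time"])  # -> 244 (as a string)
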
  1. Computer System: Pentium 4
    1. Vendor: Dell
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: P4 - 2.4GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Windows 2000
  2. Code Version: LS-DYNA
  3. Code Version Number: 970_s_5434
  4. Benchmark problem: neon_refined
  5. Wall clock time: 25996
  6. RAM per CPU: 2048
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Diego, CA
  12. Submitted by: Dustin Boesch
  13. Submitter Organization: Dell
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 6084
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3254
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1740
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1018
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 655
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: GT4000
    1. Vendor: Galactic Computing (Shenzhen) Ltd.
    2. CPU Interconnects: Infiniband
    3. MPI Library: MVAPICH 0.9.5
    4. Processor: Xeon 3.6 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 310
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Shenzhen, China
  12. Submitted by: Alex Korobka
  13. Submitter Organization: Galactic Computing (Shenzhen) Ltd.
  1. Computer System: GT4000
    1. Vendor: Galactic Computing (Shenzhen) Ltd.
    2. CPU Interconnects: Infiniband
    3. MPI Library: MVAPICH 0.9.5
    4. Processor: Xeon 3.6 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 418
  6. RAM per CPU: 2
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Shenzhen, China
  12. Submitted by: Alex Korobka
  13. Submitter Organization: Galactic Computing (Shenzhen) Ltd.
  1. Computer System: GT4000
    1. Vendor: Galactic Computing (Shenzhen) Ltd.
    2. CPU Interconnects: Infiniband
    3. MPI Library: MVAPICH 0.9.5
    4. Processor: Xeon 3.6 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 699
  6. RAM per CPU: 2
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Shenzhen, China
  12. Submitted by: Alex Korobka
  13. Submitter Organization: Galactic Computing (Shenzhen) Ltd.
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 5443
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2853
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1486
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 822
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 627
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 529
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3516
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1820
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 993
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 569
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 417
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 342
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 280
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 239
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 184
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
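
With dual-core Opterons, a given total CPU count needs half as many nodes: item 8 of each record inverts to nodes = Total CPU / (processors per node x cores per processor). A small sketch of that arithmetic (the helper name is ours):

      # How many nodes a given total CPU count requires.
      def nodes_needed(total_cpu, procs_per_node, cores_per_proc):
          per_node = procs_per_node * cores_per_proc
          assert total_cpu % per_node == 0, "count must divide evenly"
          return total_cpu // per_node

      print(nodes_needed(64, 2, 1))  # -> 32 single-core nodes
      print(nodes_needed(64, 2, 2))  # -> 16 dual-core nodes
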
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3126
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1607
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 877
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 527
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 384
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 315
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 48
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 258
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 226
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: Microway Navion
    1. Vendor: PathScale
    2. CPU Interconnects: PathScale InfiniPath/Silverstorm IB switch
    3. MPI Library: PathScale MPI
    4. Processor: AMD Opteron 2.6 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Fedora Core 3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 480
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: PathScale Customer Benchmark Center
  12. Submitted by: Kevin Ball
  13. Submitter Organization: PathScale, Inc.
  1. Computer System: Microway Navion
    1. Vendor: PathScale
    2. CPU Interconnects: PathScale InfiniPath/Silverstorm IB switch
    3. MPI Library: PathScale MPI
    4. Processor: AMD Opteron 2.6 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Fedora Core 3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 790
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: PathScale Customer Benchmark Center
  12. Submitted by: Kevin Ball
  13. Submitter Organization: PathScale, Inc.
  1. Computer System: Microway Navion
    1. Vendor: PathScale
    2. CPU Interconnects: PathScale InfiniPath/Silverstorm IB switch
    3. MPI Library: PathScale MPI
    4. Processor: AMD Opteron 2.6 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Fedora Core 3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1477
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: PathScale Customer Benchmark Center
  12. Submitted by: Kevin Ball
  13. Submitter Organization: PathScale, Inc.
  1. Computer System: Microway Navion
    1. Vendor: PathScale
    2. CPU Interconnects: PathScale InfiniPath/Silverstorm IB switch
    3. MPI Library: PathScale MPI
    4. Processor: AMD Opteron 2.6 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Fedora Core 3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2802
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: PathScale Customer Benchmark Center
  12. Submitted by: Kevin Ball
  13. Submitter Organization: PathScale, Inc.
  1. Computer System: Microway Navion
    1. Vendor: PathScale
    2. CPU Interconnects: PathScale InfiniPath/Silverstorm IB switch
    3. MPI Library: PathScale MPI
    4. Processor: AMD Opteron 2.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Fedora Core 3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 5285
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: PathScale Customer Benchmark Center
  12. Submitted by: Kevin Ball
  13. Submitter Organization: PathScale, Inc.
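The five Microway Navion entries above solve the same neon_refined problem on 2 through 32 CPUs, so relative speedup and parallel efficiency can be read directly off the wall clock times. A minimal Python sketch, taking the 2-CPU run (5285 s) as the baseline:

    # (cpus, wall clock seconds), copied from the Microway Navion entries above.
    runs = [(2, 5285), (4, 2802), (8, 1477), (16, 790), (32, 480)]
    base_cpus, base_time = runs[0]
    for cpus, seconds in runs:
        speedup = base_time / seconds
        efficiency = speedup / (cpus / base_cpus)
        print(f"{cpus:>2} CPUs: speedup {speedup:5.2f}, efficiency {efficiency:.2f}")
    # The 32-CPU run is about 11x faster than the 2-CPU run, i.e. ~0.69 efficiency.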
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 394
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 373
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 48
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 294
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 268
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 258
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 239
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
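Note that two pairs of Emerald entries above reach the same Total CPU through different node layouts, and the wall clock times differ: 64 cores as 16 nodes x 2 processors takes 394 s versus 373 s as 32 nodes x 1 processor, and 128 cores as 32 x 2 takes 268 s versus 258 s as 64 x 1. A minimal comparison with the values copied from those entries (one plausible reading is that spreading sockets across more nodes leaves each core more memory bandwidth, but the registry itself does not say):

    # (nodes, processors per node, wall clock s) at equal Total CPU, from the Emerald entries.
    pairs = [((16, 2, 394), (32, 1, 373)),   # 64 CPUs
             ((32, 2, 268), (64, 1, 258))]   # 128 CPUs
    for packed, spread in pairs:
        print(f"{packed[0]}x{packed[1]} vs {spread[0]}x{spread[1]}: "
              f"{packed[2] / spread[2]:.2f}x slower packed")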
  1. Computer System: IBM P690
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Power 4 1.3 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 32
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: AIX 5
  2. Code Version: LS-DYNA
  3. Code Version Number: LS970-3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1469
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Oak Ridge, TN
  12. Submitted by: Gustavo A Aramayo
  13. Submitter Organization: Oak Ridge National Laboratory
  1. Computer System: IBM P690
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Power 4 1.3 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 32
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: AIX 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: LS970-3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 945
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Oak Ridge, TN
  12. Submitted by: Gustavo A Aramayo
  13. Submitter Organization: Oak Ridge National Laboratory
  1. Computer System: IBM P690
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Power 4 1.3 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 16
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: AIX 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: LS970-3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2072
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Oak Ridge, TN
  12. Submitted by: Gustavo A Aramayo
  13. Submitter Organization: Oak Ridge National Laboratory
  1. Computer System: IBM P690
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Power 4 1.3 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 32
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: AIX 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: LS970-3858
  4. Benchmark problem: neon_refined
  5. Wall clock time: 706
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Oak Ridge, TN
  12. Submitted by: Gustavo A. Aramayo
  13. Submitter Organization: IBM
  1. Computer System: 1122Hi-81
    1. Vendor: Appro
    2. CPU Interconnects: Level 5 Networks - 1 Gb Ethernet NIC
    3. MPI Library: Scali v4.4.2
    4. Processor: Opteron 275
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE SLES 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: neon_refined
  5. Wall clock time: 492
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: John Marcolini
  13. Submitter Organization: Level 5 Networks
  1. Computer System: IBM Blade Center 8843-25V
    1. Vendor: IBM
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Xeon 3.2 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Windows XP and Windows Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 25106
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Double
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Taiwan
  12. Submitted by: Renjay
  13. Submitter Organization: IBM
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 433
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 703
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1283
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2446
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Menlo Park
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 4629
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 9222
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat EL4 Update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 289
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL4 Update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 451
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL4 Update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 803
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL4 Update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1548
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 9462
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 4761
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2551
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1321
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 731
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Silverstorm Infiniband
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 894
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Silverstorm Infiniband
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1662
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Silverstorm Infiniband
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2886
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3276
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1776
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1090
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
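The six Relion 1600 entries above repeat the same runs over Silverstorm Infiniband and over Gigabit Ethernet at matching CPU counts, so the interconnect penalty can be computed directly. A minimal Python sketch with the wall clock times copied from those entries:

    # cpus: (infiniband s, gigabit ethernet s), from the Relion 1600 entries above.
    times = {4: (2886, 3276), 8: (1662, 1776), 16: (894, 1090)}
    for cpus in sorted(times):
        ib, gige = times[cpus]
        print(f"{cpus:>2} CPUs: Gigabit Ethernet is {gige / ib:.2f}x slower")
    # Roughly 1.14x at 4 CPUs, 1.07x at 8, and 1.22x at 16.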
  1. Computer System: CP4000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand SDR
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.2 GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 11901
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP4000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand SDR
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.2 GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3891
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP4000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand SDR
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.2 GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2605
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP4000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand SDR
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.2 GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1377
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP4000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand SDR
    3. MPI Library: Information Not Provided
    4. Processor: AMD Dual-Core Opteron 2.2 GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 810
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP4000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand SDR
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.2 GHz DL145
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 579
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 353
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 452
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 6
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 587
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 779
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1436
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2075
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3916
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 4200
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 5405
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2803
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Scott Shaw
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix XE1200 Compute Cluster
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire IB 410 HCA PCIe card with firmware v1.2.0, Voltaire ISR9024 SDR 24-port IB switch
    3. MPI Library: Intel MPI Runtime v2
    4. Processor: Intel 5160 Woodcrest DC 3.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763
  4. Benchmark problem: neon_refined
  5. Wall clock time: 7884
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Application Engineering
  1. Computer System: Altix350
    1. Vendor: SGI
    2. CPU Interconnects: NUMALINK
    3. MPI Library: Information Not Provided
    4. Processor: Itanium 2 1.4 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Linux 2.4.21
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763.169
  4. Benchmark problem: neon_refined
  5. Wall clock time: 16475
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Rome
  12. Submitted by: Cristiano Sciaboni
  13. Submitter Organization: Centro Sviluppo Materiali
  1. Computer System: HP DL145
    1. Vendor: NetEffect
    2. CPU Interconnects: NetEffect NE010 10 GbE iWARP
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971s.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 897
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: NetEffect Benchmark Center
  12. Submitted by: Kris Meier
  13. Submitter Organization: NetEffect
  1. Computer System: Workstation Celsius V830
    1. Vendor: Fujitsu Siemens
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Opteron 250 2.4 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Windows XP 64
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 7734
  6. RAM per CPU: 3
  7. RAM Bus Speed: 200
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: Italy
  12. Submitted by: Rosario Dotoli
  13. Submitter Organization: CETMA Consortium
  1. Computer System: CA160ⅡT
    1. Vendor: ARD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Extreme 2.93 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Windows
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 3897
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Xeon X5355
    1. Vendor: Linux Networx
    2. CPU Interconnects: Shared memory
    3. MPI Library: Scali MPI Connect 5.
    4. Processor: Intel® Quad Core 2.66 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SLES 9
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined
  5. Wall clock time: 2151
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Bluffdale, UT
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: NEXXUS 4080ML
    1. Vendor: VXTECH
    2. CPU Interconnects: Infiniband SDR
    3. MPI Library: OpenMPI 1.2.5 Xeon64
    4. Processor: Xeon 3130 3 GHz Dual Core
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat Linux 4 Update 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR3.2.1
  4. Benchmark problem: neon_refined
  5. Wall clock time: 581
  6. RAM per CPU: 8
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: St-Laurent, Canada
  12. Submitted by: David Giorgi
  13. Submitter Organization: VXTECH
  1. Computer System: CA9212i
    1. Vendor: ARD
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Intel i7 920
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: CentOS 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 1423
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Shared
  11. Location: Nagoya, Japan
  12. Submitted by: Takuya Ichikawa
  13. Submitter Organization: ARD
  1. Computer System: CA9212i
    1. Vendor: ARD
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 920
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined
  5. Wall clock time: 734
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Nagoya, Japan
  12. Submitted by: Takuya Ichikawa
  13. Submitter Organization: ARD
  1. Computer System: Z800
    1. Vendor: HP
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Open MPI 1.4.1
    4. Processor: Intel Xeon W5580 3.2 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Linux CentOS 5.4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R4.2.1
  4. Benchmark problem: neon_refined
  5. Wall clock time: 982
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Contern, Luxembourg
  12. Submitted by: Edmund Marx
  13. Submitter Organization: IEE S.A.
  1. Computer System: Cisco UCS C460 M1
    1. Vendor: Cisco Systems
    2. CPU Interconnects: QPI
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon X7560
    5. Number of nodes: 1
    6. Processors/Nodes: 4
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Fedora Core 12
  2. Code Version: LS-DYNA
  3. Code Version Number: R3.2.1
  4. Benchmark problem: neon_refined
  5. Wall clock time: 355
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: San Jose
  12. Submitted by: Ven Immani
  13. Submitter Organization: Technical Marketing
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 3
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 6 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: neon_refined
  5. Wall clock time: 720
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: neon_refined
  5. Wall clock time: 547
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 6
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: neon_refined
  5. Wall clock time: 395
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
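
The three Ci11T records above differ only in node count, so their wall-clock times allow a quick scaling check. A short Python sketch of the speedup arithmetic, using only the figures reported above (taking the 6-CPU run as baseline is an illustrative choice):

    # Ci11T wall-clock times (seconds) keyed by total CPU count, from the entries above.
    ci11t_times = {6: 720, 8: 547, 12: 395}

    base_time = ci11t_times[6]
    for cpus, seconds in sorted(ci11t_times.items()):
        speedup = base_time / seconds   # measured speedup vs. the 6-CPU run
        ideal = cpus / 6                # linear-scaling expectation
        print(f"{cpus:>2} CPUs: {speedup:.2f}x measured vs. {ideal:.2f}x ideal")

On these numbers the step from 6 to 8 CPUs scales almost linearly (1.32x measured vs. 1.33x ideal), while the step to 12 CPUs falls off (1.82x measured vs. 2.00x ideal).
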
  1. Computer System: IBM
    1. Vendor: IBM
    2. CPU Interconnects: InfiniBand FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2650 v2 @ 2.6 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined
  5. Wall clock time: 213
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: India
  12. Submitted by: Vivek Kapse
  13. Submitter Organization: LEON
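
Every record in this list follows the same 13-field schema, so the listing is straightforward to post-process. A minimal sketch of loading one record into a Python dict, assuming the plain-text layout shown above (the parse_entry helper and its splitting rules are illustrative, not part of any official tooling):

    def parse_entry(lines):
        """Map 'N. Key: Value' benchmark fields to a dict, dropping the list numbers."""
        entry = {}
        for line in lines:
            line = line.strip()
            if ": " not in line:
                continue  # skips non-field lines such as the Total CPU formula
            key, value = line.split(": ", 1)
            key = key.split(". ", 1)[-1]  # "5. Wall clock time" -> "Wall clock time"
            entry[key] = value
        return entry

    record = parse_entry([
        "4. Benchmark problem: neon_refined",
        "5. Wall clock time: 213",
    ])
    assert record["Wall clock time"] == "213"
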