BENCHMARK DETAILS

  1. Computer System: Linux Cluster
    1. Vendor: AMD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron 244
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU; see the sketch after this record)
    9. Operating System: SuSE SLES 8.1
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 3858 SP
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 35913
  6. RAM per CPU: 1024
  7. RAM Bus Speed: 1800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Saint-Petersburg State Polytechnical University, Russia
  12. Submitted by: Nikolay Shabrov
  13. Submitter Organization: Saint-Petersburg State Polytechnical University, Russia
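
A note on the Total CPU line in each record: it is simply the product of the three fields above it. A minimal sketch of that arithmetic (Python; the function name and the asserted values are illustrative, taken from entries in this listing):

    # Total CPU = nodes x processors per node x cores per processor,
    # matching the formula line in each record of this listing.
    def total_cpus(nodes: int, procs_per_node: int, cores_per_proc: int) -> int:
        return nodes * procs_per_node * cores_per_proc

    # The 16-node, single-processor, single-core Opteron 244 cluster above:
    assert total_cpus(16, 1, 1) == 16
    # An 8-node system with two single-core processors per node reaches the same total:
    assert total_cpus(8, 2, 1) == 16
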
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 197422
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 100938
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 51250
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 26778
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 14182
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: HP-UX Itanium2 Cluster / GigE
    1. Vendor: HP
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9916
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 Cluster
    1. Vendor: HP
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 1.5 GHz Itanium2 RX2600
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8106
  6. RAM per CPU: 6132
  7. RAM Bus Speed: 8500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: HP, Richardson, Texas
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
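
Taken together, the HP Itanium2 records above (2 through 64 CPUs on the same hardware and code version) form a scaling series. A minimal sketch of how speedup and parallel efficiency fall out of those numbers, assuming the reported wall clock times are in seconds (Python; the values are copied from the records above):

    # Wall clock times (s) for the HP Itanium2 runs above, keyed by total CPU count.
    runs = {2: 197422, 4: 100938, 8: 51250, 16: 26778, 32: 14182, 48: 9916, 64: 8106}
    base_cpus, base_time = 2, runs[2]
    for cpus in sorted(runs):
        speedup = base_time / runs[cpus]            # relative to the 2-CPU baseline
        efficiency = speedup / (cpus / base_cpus)   # 1.0 would be ideal linear scaling
        print(f"{cpus:3d} CPUs: speedup {speedup:6.2f}x, efficiency {efficiency:5.1%}")
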
  1. Computer System: Linux Cluster
    1. Vendor: Self-made
    2. CPU Interconnects: 3D SCI
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dual Xeon 2.8GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Linux RedHat 8.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9989
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: United Institute of Informatics Problems, Minsk, Belarus
  12. Submitted by: Oleg Tchij
  13. Submitter Organization: United Institute of Informatics Problems, Minsk
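
Records in this listing all follow the same "N. Field: value" layout, so they are straightforward to post-process. A minimal parsing sketch (Python; the regular expression and the parse_record helper are assumptions about this layout, not part of any benchmark tool):

    import re

    # Matches lines of the form "N. Field name: value" used throughout this listing.
    FIELD = re.compile(r"^\s*\d+\.\s*([^:]+?)\s*:\s*(.*)$")

    def parse_record(lines):
        """Collect the field/value pairs of one benchmark record into a dict."""
        fields = {}
        for line in lines:
            m = FIELD.match(line)
            if m:
                fields[m.group(1)] = m.group(2).strip()
        return fields

    rec = parse_record([
        "  5. Wall clock time: 9989",
        "  10. System Dedicated/Shared: Shared",
    ])
    assert rec["Wall clock time"] == "9989"
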
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5412
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 7238
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5412
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11233
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5412
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 19268
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: Opteron Cluster
    1. Vendor: Appro, Rackable, and Verari
    2. CPU Interconnects: InfiniCon Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 2 GHz Opteron
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5412
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 33088
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Steven Lyness
  13. Submitter Organization: Appro, Rackable, and Verari
  1. Computer System: GC4848-K8QS
    1. Vendor: Gridcore
    2. CPU Interconnects: HyperTransport
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron 848
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE Linux Enterprise Server 9
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 56032
  6. RAM per CPU: 4096
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Göteborg
  12. Submitted by: Peter Rundberg
  13. Submitter Organization: Gridcore
  1. Computer System: Opteron Cluster
    1. Vendor: Self-made (SKIF program)
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 2.2 GHz AMD Opteron 248
    5. Number of nodes: 18
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 18 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 12505
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: United Institute of Informatics Problems, Minsk
  12. Submitted by: Oleg Tchij
  13. Submitter Organization: United Institute of Informatics Problems, Minsk
  1. Computer System: VALUESTAR-TZ
    1. Vendor: NEC
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: LAM-6.5.9
    4. Processor: Athlon64 2.2GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SuSE Linux Professional 9.1 for AMD64
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 rev.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 100220
  6. RAM per CPU: 1024
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: ACS, Ibaraki, Japan
  12. Submitted by: Yo Yamagata
  13. Submitter Organization: ACS
  1. Computer System: RLX 3000ix ServerBlade
    1. Vendor: RLX
    2. CPU Interconnects: Voltaire InfiniBand
    3. MPI Library: MVAPICH
    4. Processor: Intel Xeon 3.06GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 Update 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 24029
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Billerica, MA
  12. Submitted by: Eric Dube
  13. Submitter Organization: Voltaire
  1. Computer System: Opteron Cluster
    1. Vendor: Self-made (SKIF program)
    2. CPU Interconnects: Infiniband
    3. MPI Library: Information Not Provided
    4. Processor: 2.2 GHz AMD Opteron 248
    5. Number of nodes: 35
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 35 (Total CPU)
    9. Operating System: SuSE SLES8 SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6958
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: United Institute of Informatics Problems, Minsk
  12. Submitted by: Oleg Tchij
  13. Submitter Organization: United Institute of Informatics Problems, Minsk
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 108156
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 55547
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 27730
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 14737
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8189
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6119
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Itanium2 CP6000
    1. Vendor: HP
    2. CPU Interconnects: Infiniband TopSpin
    3. MPI Library: HP-MPI
    4. Processor: 1.5GHz Itanium2 rx2600
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: HP-UX 11.23
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5152
  6. RAM per CPU: 6
  7. RAM Bus Speed: 8
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: RLX 3000ix ServerBlade
    1. Vendor: RLX
    2. CPU Interconnects: Voltaire InfiniBand
    3. MPI Library: MVAPICH
    4. Processor: Intel Xeon 3.06GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 Update 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 44046
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Billerica, MA
  12. Submitted by: Eric Dube
  13. Submitter Organization: Voltaire
  1. Computer System: RLX 3000ix ServerBlade
    1. Vendor: RLX
    2. CPU Interconnects: Voltaire InfiniBand
    3. MPI Library: MVAPICH
    4. Processor: Intel Xeon 3.06GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 Update 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 84576
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Billerica, MA
  12. Submitted by: Eric Dube
  13. Submitter Organization: Voltaire
  1. Computer System: RLX 3000ix ServerBlade
    1. Vendor: RLX
    2. CPU Interconnects: Voltaire InfiniBand
    3. MPI Library: MVAPICH
    4. Processor: Intel Xeon 3.06GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 3 Update 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 156122
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Billerica, MA
  12. Submitted by: Eric Dube
  13. Submitter Organization: Voltaire
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 104384
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 54394
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 28756
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 16290
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000/InfiniBand Voltaire
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Voltaire
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.2GHz DL145
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9650
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: ProLiant DL360 G3
    1. Vendor: HP
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: LAM/MPI 6.5.6-8
    4. Processor: Intel(R) Xeon(TM) CPU 3.06GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RedHat Linux 8.0
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.3858
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 40548
  6. RAM per CPU: 4096
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: United Kingdom
  12. Submitted by: Daniel Challen
  13. Submitter Organization: OCSL
  1. Computer System: CRAY XD1
    1. Vendor: Cray
    2. CPU Interconnects: Rapid Array
    3. MPI Library: Cray XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9738
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: Cray
    2. CPU Interconnects: Rapid Array
    3. MPI Library: Cray XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 7504
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: Cray
    2. CPU Interconnects: Rapid Array
    3. MPI Library: Cray XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 13543
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: Cray
    2. CPU Interconnects: Rapid Array
    3. MPI Library: Cray XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5581
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 47611
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 24681
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.2 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Suse Linux 8
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4619
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 12576
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6878
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3946
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Rapid Array
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 48
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3020
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 60
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 120 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2603
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4972
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9034
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 23602
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 46030
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 92228
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Opteron 2.4 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2462
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: ProLiant DL360 G4
    1. Vendor: HP
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: MPICH-GM GCC 3.2
    4. Processor: 3.4 GHz EM64T
    5. Number of nodes: 40
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 80 (Total CPU)
    9. Operating System: RedHat Enterprise Workstation for x86_64
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4763
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: SNL/NM
  12. Submitted by: J. Dike
  13. Submitter Organization: HP
  1. Computer System: Altix3700/BX2
    1. Vendor: SGI
    2. CPU Interconnects: NUMALINK
    3. MPI Library: SGI MPT 1.11
    4. Processor: Intel/Itanium 2 1.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 16
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Linux64/SGI ProPack3.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 12388
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View
  12. Submitted by: Nick Meng
  13. Submitter Organization: SGI
  1. Computer System: Altix3700/BX2
    1. Vendor: SGI
    2. CPU Interconnects: NUMALINK
    3. MPI Library: SGI MPT 1.11
    4. Processor: Intel/Itanium 2 1.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 24
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Linux64/SGI ProPack3.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8864
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View
  12. Submitted by: Nick Meng
  13. Submitter Organization: SGI
  1. Computer System: Altix3700/Bx2
    1. Vendor: SGI
    2. CPU Interconnects: NUMALINK
    3. MPI Library: SGI MPT 1.11
    4. Processor: Intel/Itanium 2 1.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 32
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Linux64/SGI ProPack 3.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6672
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View
  12. Submitted by: Nick Meng
  13. Submitter Organization: SGI
  1. Computer System: Altix3700/BX2
    1. Vendor: SGI
    2. CPU Interconnects: NUMALINK
    3. MPI Library: SGI MPT 1.11
    4. Processor: Intel/Itanium 2 1.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 48
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Linux64/SGI ProPack 3.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4671
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View
  12. Submitted by: Nick Meng
  13. Submitter Organization: SGI
  1. Computer System: Altix3700/BX2
    1. Vendor: SGI
    2. CPU Interconnects: NUMALINK
    3. MPI Library: SGI MPT 1.11
    4. Processor: Intel/Itanium 2 1.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 64
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Linux64/SGI ProPack3.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3572
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View
  12. Submitted by: Nick Meng
  13. Submitter Organization: SGI
  1. Computer System: Altix3700/BX2
    1. Vendor: SGI
    2. CPU Interconnects: NUMALINK
    3. MPI Library: SGI MPT 1.11
    4. Processor: Intel/Itanium 2 1.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 96
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: Linux64/SGI ProPack3.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2951
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View
  12. Submitted by: Nick Meng
  13. Submitter Organization: SGI
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 90271
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 45882
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 24131
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 13831
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Myrinet
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.4GHz DL145
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8024
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: GT4000
    1. Vendor: Galactic Computing (Shenzhen) Ltd.
    2. CPU Interconnects: Infiniband
    3. MPI Library: MPICH VMI-2.1
    4. Processor: Xeon 3.6 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3074
  6. RAM per CPU: 2
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Shenzhen, China
  12. Submitted by: Alex Korobka
  13. Submitter Organization: Galactic Computing (Shenzhen) Ltd.
  1. Computer System: GT4000
    1. Vendor: Galactic Computing (Shenzhen) Ltd.
    2. CPU Interconnects: Infiniband
    3. MPI Library: MPICH VMI-2.1
    4. Processor: Xeon 3.6 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4176
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Shenzhen, China
  12. Submitted by: Alex Korobka
  13. Submitter Organization: Galactic Computing (Shenzhen) Ltd.
  1. Computer System: GT4000
    1. Vendor: Galactic Computing (Shenzhen) Ltd.
    2. CPU Interconnects: Infiniband
    3. MPI Library: MPICH VMI-2.1
    4. Processor: Xeon 3.6 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4732
  6. RAM per CPU: 2
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Shenzhen, China
  12. Submitted by: Alex Korobka
  13. Submitter Organization: Galactic Computing (Shenzhen) Ltd.
  1. Computer System: GT4000
    1. Vendor: Galactic Computing (Shenzhen) Ltd.
    2. CPU Interconnects: Infiniband
    3. MPI Library: MPICH VMI-2.1
    4. Processor: Xeon 3.6 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Linux
  2. Code Version: LS-DYNA
  3. Code Version Number: 970 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8433
  6. RAM per CPU: 2
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Shenzhen, China
  12. Submitted by: Alex Korobka
  13. Submitter Organization: Galactic Computing (Shenzhen) Ltd.
  1. Computer System: eserver p5 575
    1. Vendor: IBM
    2. CPU Interconnects: eserver HPS
    3. MPI Library: POE
    4. Processor: POWER5, 1.9 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: AIX 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 24496
  6. RAM per CPU: 32
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: eserver p5 575
    1. Vendor: IBM
    2. CPU Interconnects: eserver HPS
    3. MPI Library: POE
    4. Processor: POWER5, 1.9 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: AIX 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6379
  6. RAM per CPU: 32
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: eserver p5 575
    1. Vendor: IBM
    2. CPU Interconnects: eserver HPS
    3. MPI Library: POE
    4. Processor: POWER5, 1.9 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: AIX 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 12386
  6. RAM per CPU: 32
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: eserver p5 575
    1. Vendor: IBM
    2. CPU Interconnects: eserver HPS
    3. MPI Library: POE
    4. Processor: POWER5, 1.9 GHz
    5. Number of nodes: 6
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: AIX 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4535
  6. RAM per CPU: 32
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: eserver p5 575
    1. Vendor: IBM
    2. CPU Interconnects: eserver HPS
    3. MPI Library: POE
    4. Processor: POWER5, 1.9 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: AIX 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3638
  6. RAM per CPU: 32
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: eserver p5 575
    1. Vendor: IBM
    2. CPU Interconnects: eserver HPS
    3. MPI Library: POE
    4. Processor: POWER5, 1.9 GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: AIX 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2947
  6. RAM per CPU: 32
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: eserver p5 575
    1. Vendor: IBM
    2. CPU Interconnects: eserver HPS
    3. MPI Library: POE
    4. Processor: POWER5, 1.9 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: AIX 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2444
  6. RAM per CPU: 32
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: eserver p5 575
    1. Vendor: IBM
    2. CPU Interconnects: eserver HPS
    3. MPI Library: POE
    4. Processor: POWER5, 1.9 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: AIX 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2058
  6. RAM per CPU: 32
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie, New York
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
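The eight eServer p5 575 entries above form a scaling series on one machine, so their wall clock times can be restated as speedup and parallel efficiency. A minimal sketch, with times copied from the entries above; treating the 8-CPU run as the baseline is our assumption:

  # eServer p5 575 wall clock times (s) keyed by total CPUs (from the entries above)
  times = {8: 24496, 16: 12386, 32: 6379, 48: 4535,
           64: 3638, 96: 2947, 128: 2444, 256: 2058}
  base_cpus, base_time = 8, times[8]    # assumed baseline: smallest run in the series
  for cpus in sorted(times):
      speedup = base_time / times[cpus]
      efficiency = speedup / (cpus / base_cpus)
      print(f"{cpus:4d} CPUs  {times[cpus]:6d} s  speedup {speedup:5.2f}  eff {efficiency:.2f}")
  # e.g. the 256-CPU run is ~11.9x faster than 8 CPUs (~37% parallel efficiency)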
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 81810
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 40390
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 21014
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11566
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8131
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Topspin InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Opteron 2.6 GHz DL145
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat 3.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6347
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1696
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2416
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2981
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3846
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5226
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 7591
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 14078
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 26230
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 49460
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 43615
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 22475
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 12222
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6652
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4586
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3393
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 48
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2654
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XD1
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: RapidArray
    3. MPI Library: CRAY XD1 MPI
    4. Processor: AMD Dual Core Opteron 2.2 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SuSE Linux 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2135
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
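The CRAY XD1 entries above cover the same total CPU counts in two configurations: both cores of each dual-core Opteron in use, or one core per processor on twice as many nodes. Comparing equal Total CPU counts isolates the cost of two cores sharing a socket; a minimal sketch using only the times listed above (the pairing is our reading of the entries, not part of the submissions):

  # XD1 wall clock times (s) at equal total CPU counts, from the entries above
  dual_core   = {4: 49460, 8: 26230, 16: 14078, 32: 7591,
                 48: 5226, 64: 3846, 96: 2981, 128: 2416}  # 2 cores per processor
  single_core = {4: 43615, 8: 22475, 16: 12222, 32: 6652,
                 48: 4586, 64: 3393, 96: 2654, 128: 2135}  # 1 core per processor
  for cpus in sorted(dual_core):
      slowdown = dual_core[cpus] / single_core[cpus] - 1
      print(f"{cpus:3d} CPUs: using both cores is {slowdown:.0%} slower")
  # the penalty sits around 12-17% across the whole range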
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1 kernel)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4296
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4167
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3216
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 48
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3092
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2655
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
  1. Computer System: Emerald
    1. Vendor: Rackable Systems
    2. CPU Interconnects: PathScale InfiniPath / Silverstorm InfiniBand switch
    3. MPI Library: PathScale MPICH
    4. Processor: AMD Dual-Core Opteron Model 275 (2.2 GHz)
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: ROCKS 4.0.0 (RHEL4 U1)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1988
  6. RAM per CPU: 4
  7. RAM Bus Speed: 400
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: AMD Developer Center
  12. Submitted by: Justin Boggs
  13. Submitter Organization: AMD
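The Emerald entries above also build 64 and 96 total CPUs two ways: two dual-core processors per node, or one per node on twice as many nodes. A minimal comparison of the listed times (the contention reading is our inference, not part of the submissions):

  # Emerald wall clock times (s) from the entries above, keyed by total CPUs
  two_procs_per_node = {64: 4296, 96: 3216}  # 16 and 24 nodes
  one_proc_per_node  = {64: 4167, 96: 3092}  # 32 and 48 nodes
  for cpus in sorted(two_procs_per_node):
      diff = two_procs_per_node[cpus] / one_proc_per_node[cpus] - 1
      print(f"{cpus} CPUs: one processor per node is {diff:.1%} faster")
  # ~3-4% in both cases, consistent with less per-node memory contention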
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Voltaire
    3. MPI Library: HP-MPI
    4. Processor: Dual Core Opteron 2.2 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat 4.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 99578
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Voltaire
    3. MPI Library: HP-MPI
    4. Processor: Dual Core Opteron 2.2 GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat 4.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 51388
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Voltaire
    3. MPI Library: HP-MPI
    4. Processor: Dual Core Opteron 2.2 GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat 4.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 26704
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Voltaire
    3. MPI Library: HP-MPI
    4. Processor: Dual Core Opteron 2.2 GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat 4.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 14214
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: Voltaire
    3. MPI Library: HP-MPI
    4. Processor: Dual Core Opteron 2.2 GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat 4.0
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.5434a
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8042
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: 1122Hi-81
    1. Vendor: Appro
    2. CPU Interconnects: Level 5 Networks - 1 Gb Ethernet NIC
    3. MPI Library: Scali v4.4.2
    4. Processor: Opteron 275
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SuSE SLES 9
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5356
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: John Marcolini
  13. Submitter Organization: Level 5 Networks
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 69984
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 34928
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 18044
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10987
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Topspin)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 31
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 31 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5680
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: S5000XAL
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: CentOS 4.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 40350
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dupont, Washington
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3402
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5955
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11408
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 23418
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: Altus 1300
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Infiniband
    3. MPI Library: MPICH 1.2.5
    4. Processor: AMD Opteron 275 (2.2 GHz)
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Scyld ClusterWare
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6478
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6380
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 71881
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 35800
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 17631
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10220
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Sun Fire X2100
    1. Vendor: Sun Microsystems
    2. CPU Interconnects: Infiniband (Voltaire)
    3. MPI Library: MVAPICH
    4. Processor: AMD Opteron 156 3.0 GHz
    5. Number of nodes: 31
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 31 (Total CPU)
    9. Operating System: 64-bit SUSE SLES 9 SP 3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_s_6763
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5712
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Menlo Park CA
  12. Submitted by: Mike Burke
  13. Submitter Organization: Sun Microsystems
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.6 GHz DL145
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 42852
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.6 GHz DL145
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 22259
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.6 GHz DL145
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 12031
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.6 GHz DL145
    5. Number of nodes: 6
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8451
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.6 GHz DL145
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6902
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.6 GHz DL145
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5038
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.6 GHz DL145
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4236
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Opteron CP4000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: AMD Dual-Core Opteron 2.6 GHz DL145
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3257
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: IBM x3650
    1. Vendor: IBM
    2. CPU Interconnects: Shared memory
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel Xeon X5355
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux AS release 4 (Nahant Upda
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970_6763.347
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 34010
  6. RAM per CPU: 6000
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: Durham, North Carolina
  12. Submitted by: Charlie Eison
  13. Submitter Organization: IBM
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Silverstorm Infiniband
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11351
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Silverstorm Infiniband
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 16602
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Silverstorm Infiniband
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 42656
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 23705
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
  1. Computer System: Relion 1600
    1. Vendor: Penguin Computing
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MPICH 1.2.5
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Scyld ClusterWare 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 13866
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Francisco, CA, USA
  12. Submitted by: Joshua Bernstein
  13. Submitter Organization: Penguin Computing
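
The five Relion 1600 entries above are the same hardware and LS-DYNA build run over two interconnects, so they give a like-for-like comparison at matched CPU counts. A quick sketch using only the wall-clock values quoted above (the variable names are illustrative):

```python
# Relion 1600 wall-clock seconds from the records above, keyed by
# (interconnect, total CPUs). Only the matched pairs are compared.
times = {
    ("Silverstorm Infiniband", 8):  16602,
    ("Silverstorm Infiniband", 16): 11351,
    ("Gigabit Ethernet", 8):        23705,
    ("Gigabit Ethernet", 16):       13866,
}

for cpus in (8, 16):
    ib = times[("Silverstorm Infiniband", cpus)]
    ge = times[("Gigabit Ethernet", cpus)]
    print(f"{cpus:2d} CPUs: Gigabit Ethernet takes {ge / ib:.2f}x as long as Infiniband")
```

On this data the Infiniband advantage is 43% at 8 CPUs and 22% at 16 CPUs.
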
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 39619
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 20374
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10936
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5905
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4118
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 6
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8238
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4922
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3502
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3210
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
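
The nine CP3000/Linux DL140 entries above sweep the same job from 4 to 128 CPUs, which is enough to compute speedup and parallel efficiency. A sketch over the quoted wall-clock times, taking the 4-CPU run as the baseline (an assumption; no serial run is listed):

```python
# (total CPUs, wall clock s) for the CP3000/Linux DL140 series above.
runs = [(4, 39619), (8, 20374), (16, 10936), (24, 8238), (32, 5905),
        (48, 4922), (64, 4118), (96, 3502), (128, 3210)]

base_cpus, base_time = runs[0]
for cpus, t in runs:
    speedup = base_time / t                  # relative to the 4-CPU run
    efficiency = speedup * base_cpus / cpus  # 1.0 would be ideal scaling
    print(f"{cpus:4d} CPUs  speedup {speedup:6.2f}  efficiency {efficiency:6.1%}")
```

Efficiency relative to the 4-CPU run falls from about 97% at 8 CPUs to roughly 39% at 128, the usual pattern for a fixed-size crash model.
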
  1. Computer System: NEXXUS 4080PT
    1. Vendor: Ciara Technologies/VXTECH
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel® Core™2 Duo Processors 2.66 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10459
  6. RAM per CPU: 2048
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Burex Kojimachi 8F 3-5-2 Kojimachi, Chiyoda-ku, Tokyo
  12. Submitted by: Takahiko Tomuro
  13. Submitter Organization: Scalable Systems Co., Ltd.
  1. Computer System: POWER EDGE 1950
    1. Vendor: DELL
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MSMPI
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 12048
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Valenciennes - France
  12. Submitted by: Arnaud RINGEVAL
  13. Submitter Organization: CIMES
  1. Computer System: Power Edge 1950
    1. Vendor: DELL
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MSMPI
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 32500
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Valenciennes - France
  12. Submitted by: Arnaud RINGEVAL
  13. Submitter Organization: CIMES
  1. Computer System: Power Edge 1950
    1. Vendor: DELL
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MSMPI
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 18584
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Valenciennes - France
  12. Submitted by: Arnaud RINGEVAL
  13. Submitter Organization: CIMES
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1886
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2398
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 48
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1937
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 24
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2651
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
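
The four Cambridge entries above pair equal MPI rank counts (128 and 96) with two placements: one core per dual-core Xeon 5160 spread across more nodes, or both cores packed onto fewer nodes. A sketch of the packing penalty from the quoted times:

```python
# Cambridge cluster pairs with equal MPI rank counts but different packing:
# (total ranks, cores used per dual-core processor, wall clock s)
runs = [(128, 1, 1886), (128, 2, 2398), (96, 1, 1937), (96, 2, 2651)]

by_ranks = {}
for ranks, cores_per_chip, t in runs:
    by_ranks.setdefault(ranks, {})[cores_per_chip] = t

for ranks, times_by_packing in sorted(by_ranks.items(), reverse=True):
    penalty = times_by_packing[2] / times_by_packing[1]
    print(f"{ranks} ranks: packing both cores per chip runs {penalty:.2f}x slower")
```

Leaving one core idle buys about 27-37% here, presumably because each rank then gets the full memory bandwidth and interconnect capacity of its socket.
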
  1. Computer System: Power Edge 1950
    1. Vendor: DELL
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MSMPI
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 29935
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France - 59300 Valenciennes
  12. Submitted by: Arnaud RINGEVAL
  13. Submitter Organization: CIMES
  1. Computer System: Power Edge 1950
    1. Vendor: DELL
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MSMPI
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 22184
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France - 59300 Valenciennes
  12. Submitted by: Arnaud RINGEVAL
  13. Submitter Organization: CIMES
  1. Computer System: Clovertown Blades
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband DDR, OFED 1.2
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon Clovertown 2.66GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 32578
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Mellanox Technologies, Inc./Scali, Inc.
  1. Computer System: CA160ⅡT
    1. Vendor: ARD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Extreme 2.93GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Windows
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 51446
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 256
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1373
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1592
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2008
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 48
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2446
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3275
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10810
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 19329
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
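
The seven CRAY XT4 entries above run the same problem from 8 to 512 CPUs, enough for a crude Amdahl-style decomposition. The sketch below models the wall clock as T(n) = serial + parallel/n and solves with the two endpoint runs; it is a two-point estimate that ignores communication costs, not a fitted model:

```python
# Two-point Amdahl estimate from the CRAY XT4 runs above.
n1, t1 = 8, 19329      # smallest run: CPUs, wall clock s
n2, t2 = 512, 1373     # largest run

parallel = (t1 - t2) / (1 / n1 - 1 / n2)  # perfectly scalable work, CPU-seconds
serial = t1 - parallel / n1               # non-scaling remainder, seconds

print(f"serial ~ {serial:.0f} s, parallel ~ {parallel:.0f} CPU-seconds")
for n, t in [(16, 10810), (64, 3275), (96, 2446), (128, 2008), (256, 1592)]:
    print(f"{n:3d} CPUs: model {serial + parallel / n:7.0f} s, measured {t} s")
```

The implied serial remainder is roughly 1,100 s, under 1% of the total work, and the measured times stay within roughly 10% of this simple model across the whole series.
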
  1. Computer System: CE6854
    1. Vendor: ARD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Duo 3.00GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.0
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 14940
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 39416
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 20237
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10852
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5900
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 39472
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 20367
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10796
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5778
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5778
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
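
The five CP3000 BL460c entries above double the CPU count four times, and the listed times show where that stops paying: the 32- and 64-CPU runs both report 5778 s. A doubling-by-doubling check over the quoted values:

```python
# Successive doublings for the CP3000 BL460c series above.
runs = [(4, 39472), (8, 20367), (16, 10796), (32, 5778), (64, 5778)]

for (c0, t0), (c1, t1) in zip(runs, runs[1:]):
    print(f"{c0:2d} -> {c1:2d} CPUs: {t0 / t1:.2f}x faster (ideal {c1 / c0:.0f}x)")
```

Each doubling returns about 1.9x up to 32 CPUs and exactly 1.0x for the last step, so on this series the job is effectively saturated by 32 CPUs.
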
  1. Computer System: Mellanox Cluster Center - Vulcan cluster
    1. Vendor: AMD
    2. CPU Interconnects: SHM
    3. MPI Library: Scali MPI Connect5.5
    4. Processor: Barcelona 2.0GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 23586
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mellanox Santa Clara CA
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: System X3455
    1. Vendor: IBM
    2. CPU Interconnects: shared memory
    3. MPI Library: Information Not Provided
    4. Processor: AMD Barcelona 1.9 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SLES 10 SP 1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.2.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 27020
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Dallas
  12. Submitted by: Hari Reddy
  13. Submitter Organization: IBM
  1. Computer System: System X3455
    1. Vendor: IBM
    2. CPU Interconnects: shared memory
    3. MPI Library: Information Not Provided
    4. Processor: AMD Barcelona 1.9 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SLES 10 SP 1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.2.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 42358
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Dallas
  12. Submitted by: Hari Reddy
  13. Submitter Organization: IBM
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: sgi-mpt-1.15-sgi501r
    4. Processor: Intel Itanium 2 1600MHz Montecito
    5. Number of nodes: 1
    6. Processors/Nodes: 32
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2794
  6. RAM per CPU: 2
  7. RAM Bus Speed: 531
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: sgi-mpt-1.15-sgi501r
    4. Processor: Intel Itanium 2 1600MHz Montecito
    5. Number of nodes: 1
    6. Processors/Nodes: 64
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1753
  6. RAM per CPU: 2
  7. RAM Bus Speed: 531
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: sgi-mpt-1.15-sgi501r
    4. Processor: Intel Itanium 2 1600MHz Montecito
    5. Number of nodes: 1
    6. Processors/Nodes: 128
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1260
  6. RAM per CPU: 2
  7. RAM Bus Speed: 531
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
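
The three Altix 4700 entries above are a single shared-memory node scaled from 64 to 256 cores. Besides turnaround, it is worth pricing each run in aggregate CPU-seconds; a short sketch from the quoted times:

```python
# Turnaround vs. aggregate cost for the Altix 4700 runs above.
runs = [(64, 2794), (128, 1753), (256, 1260)]

for cores, t in runs:
    print(f"{cores:3d} cores: {t:5d} s elapsed, {cores * t:7d} CPU-seconds consumed")
```

Cutting the wall clock by 2.2x between 64 and 256 cores costs about 80% more aggregate CPU time, the classic latency-versus-throughput trade-off on a dedicated machine.
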
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 39423
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10460
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: Intel-MPI 3.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5597
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: Intel-MPI 3.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3066
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx DDR, OFED v1.2
    3. MPI Library: Intel-MPI 3.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2105
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx DDR, OFED v1.2
    3. MPI Library: Intel-MPI 3.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1684
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
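
The six Altix 1200 entries above span 4 to 256 CPUs. A compact way to summarize such a sweep is the local scaling exponent alpha in T ~ n^(-alpha), where alpha = 1 is ideal; a sketch over the quoted times:

```python
import math

# Altix 1200 series above: (total CPUs, wall clock s).
runs = [(4, 39423), (16, 10460), (32, 5597), (64, 3066), (128, 2105), (256, 1684)]

# Local scaling exponent between successive runs; alpha = 1 is ideal.
for (n0, t0), (n1, t1) in zip(runs, runs[1:]):
    alpha = math.log(t0 / t1) / math.log(n1 / n0)
    print(f"{n0:3d} -> {n1:3d} CPUs: alpha = {alpha:.2f}")
```

alpha stays around 0.87-0.96 up to 64 CPUs, then drops to 0.54 and 0.32 for the last two doublings, even though those runs step up from SDR to DDR InfiniHost adapters.
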
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RedHat EL 4 Update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9495
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4848
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2782
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2911
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: CE6854
    1. Vendor: ARD
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Duo 3.00GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: CentOS 5.0
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 22867
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: CE6854
    1. Vendor: ARD
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Duo 3.00GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.0
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 13137
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: CE6854
    1. Vendor: ARD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Duo 3.00GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CentOS 5.0
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 7617
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: CE6854
    1. Vendor: ARD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Duo 3.00GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: CentOS 5.0
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 22867
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: CE6854
    1. Vendor: ARD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Duo 3.00GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.0
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 13137
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Xeon X5365
    1. Vendor: "white box"
    2. CPU Interconnects: GigE
    3. MPI Library: HP
    4. Processor: Intel® Quad Core 3.00GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 30198
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Hillsboro Oregon
  12. Submitted by: Tim Prince
  13. Submitter Organization: Intel SSG
  1. Computer System: Xeon E5472
    1. Vendor: "white box"
    2. CPU Interconnects: GigE
    3. MPI Library: Intel 3.1.026
    4. Processor: Intel® Quad Core 3.00GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 23798
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Hillsboro Oregon
  12. Submitted by: Tim Prince
  13. Submitter Organization: Intel SSG
  1. Computer System: LNXI LS-1
    1. Vendor: Linux Networx, Inc. (LNXI)
    2. CPU Interconnects: Infiniband DDR
    3. MPI Library: ScaliMPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SLES 9.3
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2131
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Bluffdale, Utah
  12. Submitted by: Mike Long
  13. Submitter Organization: LNXI
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: LiquidIQ MPI rel.5.0
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 16
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2245
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: LiquidIQ MPI rel.5.0
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 8
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3600
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: LiquidIQ MPI rel.5.0
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 4
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6329
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: LiquidIQ MPI rel.5.0
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 2
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11494
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: LiquidIQ MPI rel.5.0
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 1
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 21913
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: Scali MPI Connect 5.5
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 20
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 160 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2013
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: Scali MPI Connect 5.5
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 16
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2323
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: Scali MPI Connect 5.5
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 8
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3650
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: Scali MPI Connect 5.5
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 4
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6809
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: Scali MPI Connect 5.5
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 2
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11325
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
  1. Computer System: LiquidIQ
    1. Vendor: Liquid Computing
    2. CPU Interconnects: LiquidIQ fabric
    3. MPI Library: Scali MPI Connect 5.5
    4. Processor: AMD Opteron 8218
    5. Number of nodes: 1
    6. Processors/Nodes: 3
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 6 (Total CPU)
    9. Operating System: RHEL 4.5
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp97.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 21938
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Ottawa, Ontario, Canada
  12. Submitted by: Ron Van Holst
  13. Submitter Organization: Liquid Computing
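
The Total CPU figure in each record is just the product #Nodes x #Processors per Node x #Cores Per Processor, so together with the wall-clock times a series like the Scali MPI Connect runs above gives a quick scaling picture. A minimal sketch (illustrative only; speedup and efficiency are computed against the 6-core run, with values copied from the records above):

    # (total cores, wall clock in seconds) from the LiquidIQ/Scali records above
    runs = [(6, 21938), (16, 11325), (32, 6809),
            (64, 3650), (128, 2323), (160, 2013)]

    ref_cores, ref_time = runs[0]
    for cores, secs in runs:
        speedup = ref_time / secs                 # S(n) = T(ref) / T(n)
        efficiency = speedup * ref_cores / cores  # ideal S(n) is n / n_ref
        print(f"{cores:4d} cores: {secs:6d} s, "
              f"speedup {speedup:5.2f}, efficiency {efficiency:.0%}")
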
  1. Computer System: CA9658-4
    1. Vendor: ARD
    2. CPU Interconnects: Infiniband
    3. MPI Library: HP-MPI
    4. Processor: Intel® Core 2 Extreme QX9650
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CentOS 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9062
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: SGI MPT 1.15
    4. Processor: Intel Itanium DC Montvale 1.669GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 32
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971R2.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2667
  6. RAM per CPU: 2
  7. RAM Bus Speed: 666
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: SGI MPT 1.15
    4. Processor: Intel Itanium DC Montvale 1.669GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 64
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971R2.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1684
  6. RAM per CPU: 2
  7. RAM Bus Speed: 666
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: SGI MPT 1.17
    4. Processor: Intel Itanium DC Montvale 1.669GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 128
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971R2.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1184
  6. RAM per CPU: 2
  7. RAM Bus Speed: 666
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: SGI MPT 1.17
    4. Processor: Intel Itanium DC Montvale 1.669GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 256
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971R2.7600.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1070
  6. RAM per CPU: 2
  7. RAM Bus Speed: 666
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8200
    1. Vendor: SGI
    2. CPU Interconnects: Infiniband
    3. MPI Library: MVAPICH-0.9.9
    4. Processor: Intel Xeon Core 2 Duo 3.00GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1224
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 7233
  6. RAM per CPU: 16
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Warren, MI
  12. Submitted by: Cesar Lucas
  13. Submitter Organization: RDECOM-TARDEC HPC Center
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 21667
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11292
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5589
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3076
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1729
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 948
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 640
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1024 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 568
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
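
A longer series such as the eight Altix ICE8200EX runs above (8 through 1024 cores) also supports a rough strong-scaling fit. The sketch below fits the two-parameter model T(n) = T_s + T_p/n, i.e. a fixed serial part plus perfectly parallel work; this Amdahl-style model is an assumption made here for illustration, not something claimed in the submissions:

    # (total cores, wall clock in seconds) from the ICE8200EX records above
    runs = [(8, 21667), (16, 11292), (32, 5589), (64, 3076),
            (128, 1729), (256, 948), (512, 640), (1024, 568)]

    xs = [1.0 / n for n, _ in runs]   # regressor: 1/n
    ts = [float(t) for _, t in runs]
    mx, mt = sum(xs) / len(xs), sum(ts) / len(ts)
    # Closed-form least squares for T = T_s + T_p * x
    t_par = (sum((x - mx) * (t - mt) for x, t in zip(xs, ts))
             / sum((x - mx) ** 2 for x in xs))
    t_ser = mt - t_par * mx
    print(f"serial part ~{t_ser:.0f} s, parallel work ~{t_par:.0f} single-core seconds")
    for n, t in runs:
        print(f"{n:5d} cores: measured {t:6d} s, model {t_ser + t_par / n:7.0f} s")
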
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4GHz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 30937
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4GHz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 15227
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4GHz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 7933
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4GHz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4138
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4GHz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2318
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Topspin 270 SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon X5272 3.40GHz DL160
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 29808
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Scalable Computing & Infrastructure
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Topspin 270 SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon X5272 3.40GHz DL160
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 16518
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Scalable Computing & Infrastructure
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Topspin 270 SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon X5272 3.40GHz DL160
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9040
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Scalable Computing & Infrastructure
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand Topspin 270 SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon X5272 3.40GHz DL160
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971R3.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4845
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Scalable Computing & Infrastructure
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: NEXXUS4080ML
    1. Vendor: VXTECH
    2. CPU Interconnects: Infiniband SDR
    3. MPI Library: OpenMPI 1.2.5 Xeon64
    4. Processor: Xeon 3110 3.00GHz Dual Core
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat Linux 4 Update 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8311
  6. RAM per CPU: 8
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: St-Laurent, QC, CANADA
  12. Submitted by: David Giorgi
  13. Submitter Organization: VXTECH
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Interconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 22852
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Interconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11975
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Interconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6315
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Interconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3510
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Interconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2167
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Interconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2011
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: CA9212i
    1. Vendor: ARD
    2. CPU Interconnects: DDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 920
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9415
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Nagoya, Japan
  12. Submitted by: Takuya Ichikawa
  13. Submitter Organization: ARD
  1. Computer System: CA9212i
    1. Vendor: ARD
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Intel i7 920
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: CentOS 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 20411
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Nagoya, Japan
  12. Submitted by: Takuya Ichikawa
  13. Submitter Organization: ARD
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1157
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1864
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3148
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5938
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11780
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 23555
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel Stoakley server
    1. Vendor: Intel
    2. CPU Interconnects: bus
    3. MPI Library: MPI 3.2.0.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 22778
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Intel Stoakley server
    1. Vendor: Intel
    2. CPU Interconnects: bus
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 22778
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG
  1. Computer System: Supermicro Nehalem Server
    1. Vendor: Intel
    2. CPU Interconnects: QPI
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5570
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10761
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: QPI
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 11596
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6001
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3154
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1839
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1163
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 128
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1024 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 441
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 514
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 653
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 989
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1640
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2762
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo ON
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5081
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: PowerEdge M1000e/Windows
    1. Vendor: Dell/Microsoft
    2. CPU Interconnects: Mellanox InfiniBand ConnectX® DDR Mezz card
    3. MPI Library: Microsoft MPI
    4. Processor: AMD Opteron 2389 2.9GHz
    5. Number of nodes: 6
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Windows Server 2008 HPC Edition
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP971.s.R421
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4023
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA, USA
  12. Submitted by: Tong Liu
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: PowerEdge M1000e/Windows
    1. Vendor: Dell/Microsoft
    2. CPU Interconnects: Mellanox InfiniBand ConnectX® DDR Mezz card
    3. MPI Library: Microsoft MPI
    4. Processor: AMD Opteron 2389 2.9GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Windows Server 2008 HPC Edition
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP971.s.R421
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5537
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA, USA
  12. Submitted by: Tong Liu
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: PowerEdge M1000e/Windows
    1. Vendor: Dell/Microsoft
    2. CPU Interconnects: Mellanox InfiniBand ConnectX® DDR Mezz card
    3. MPI Library: Microsoft MPI
    4. Processor: AMD Opteron 2389 2.9GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Windows Server 2008 HPC Edition
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP971.s.R421
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 10508
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA, USA
  12. Submitted by: Tong Liu
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: PowerEdge M1000e/Windows
    1. Vendor: Dell/Microsoft
    2. CPU Interconnects: Mellanox InfiniBand ConnectX® DDR Mezz card
    3. MPI Library: Microsoft MPI
    4. Processor: AMD Opteron 2389 2.9GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Windows Server 2008 HPC Edition
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP971.s.R421
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3102
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA, USA
  12. Submitted by: Tong Liu
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: PowerEdge M1000e/Windows
    1. Vendor: Dell/Microsoft
    2. CPU Interconnects: Mellanox InfiniBand ConnectX® DDR Mezz card
    3. MPI Library: Microsoft MPI
    4. Processor: AMD Opteron 2389 2.9GHz
    5. Number of nodes: 10
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 80 (Total CPU)
    9. Operating System: Windows Server 2008 HPC Edition
  2. Code Version: LS-DYNA
  3. Code Version Number: MPP971.s.R421
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2720
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA, USA
  12. Submitted by: Tong Liu
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Intel MPI 3.2.2
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9985
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Intel MPI 3.2.2
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6103
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Intel MPI 3.2.2
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4298
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: Cisco UCS C460 M1
    1. Vendor: Cisco Systems
    2. CPU Interconnects: QPI
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon X7560 (2.26 GHz)
    5. Number of nodes: 1
    6. Processors/Nodes: 4
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Fedora Core 12
  2. Code Version: LS-DYNA
  3. Code Version Number: R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4349
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: San Jose
  12. Submitted by: Ven Immani
  13. Submitter Organization: Technical Marketing
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9271
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4608
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2637
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1509
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU; scaling across this series is summarized after this entry)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 962
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: DuPont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
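The five Intel SR1600UR entries above form a strong-scaling series on the same model, so parallel speedup and efficiency fall straight out of the wall-clock times. A minimal sketch, taking the 12-core run as the baseline (core counts and times transcribed from the entries above):

    # (total cores, wall clock in seconds) for the SR1600UR series.
    runs = [(12, 9271), (24, 4608), (48, 2637), (96, 1509), (192, 962)]

    base_cores, base_time = runs[0]
    for cores, time in runs:
        speedup = base_time / time
        efficiency = speedup / (cores / base_cores)
        print(f"{cores:4d} cores: speedup {speedup:5.2f}, efficiency {efficiency:6.1%}")

The same arithmetic applies to any other node-count series in this listing; here efficiency falls to roughly 60% at 192 cores.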
  1. Computer System: Dell PowerEdge M610
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox ConnectX IB QDR
    3. MPI Library: Open MPI
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: CentOS 5.4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 7403
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge M610
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox ConnectX IB QDR
    3. MPI Library: Open MPI
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: CentOS 5.4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3809
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge M610
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox ConnectX IB QDR
    3. MPI Library: Open MPI
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: CentOS 5.4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2290
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge M610
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox ConnectX IB QDR
    3. MPI Library: Open MPI
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: CentOS 5.4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1371
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge M610
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox ConnectX IB QDR
    3. MPI Library: Open MPI
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 14
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 168 (Total CPU)
    9. Operating System: CentOS 5.4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1071
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack(TM
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1360
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack(TM
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 810
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 14
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 168 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack(TM
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 905
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 26
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 312 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack(TM
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 658
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix UV10
    1. Vendor: SGI
    2. CPU Interconnects: QPI
    3. MPI Library: Platform MPI 7.1
    4. Processor: Intel® Xeon® 8 core X7560 2.27GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 4
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack6SP
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3769
  6. RAM per CPU: 8
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2326
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1326
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 144 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 980
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 772
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 531
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 768 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 397
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 85
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1020 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 387
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2005-TY3
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Quad Core X5687 3.60GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1, SGI® Perfor
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1478
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 549
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2270
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1335
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 793
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 8103
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 3
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 36 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2916
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2244
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1299
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 764
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 519
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 768 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 388
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1536 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 375
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 3
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 6 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9943
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 7505
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 6
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5088
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: R620
    1. Vendor: Dell
    2. CPU Interconnects: IB QDR MT26428
    3. MPI Library: Platform MPI 8.1.2
    4. Processor: E5-2690
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE 11.0 SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971 R6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1852
  6. RAM per CPU: 8
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: USA
  12. Submitted by: Hunter Wang
  13. Submitter Organization: Dell
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies® ConnectX-3® IB FDR
    3. MPI Library: Platform MPI 8.2
    4. Processor: Intel Xeon E5-2680
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5288
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies® ConnectX-3® IB FDR
    3. MPI Library: Platform MPI 8.2
    4. Processor: Intel Xeon E5-2680
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2872
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies® ConnectX-3® IB FDR
    3. MPI Library: Platform MPI 8.2
    4. Processor: Intel Xeon E5-2680
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1572
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies® ConnectX-3® IB FDR
    3. MPI Library: Platform MPI 8.2
    4. Processor: Intel Xeon E5-2680
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 986
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies® ConnectX-3® IB FDR
    3. MPI Library: Platform MPI 8.2
    4. Processor: Intel Xeon E5-2680
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 660
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: ProLiant SL230 G8
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand FDR
    3. MPI Library: Platform MPI 8.1
    4. Processor: Xeon E5-2670
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: LS971 R6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 6562
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston, TX
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: High Performance Computing
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4784
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2647
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1456
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1020
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 712
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5334
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2853
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1567
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 988
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 658
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_r6.0.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 530
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 628
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 741
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 6
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1189
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 3
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2076
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2638
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 5038
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 994
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1534
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: SGI® Rackable CH-C2112 cluster
    1. Vendor: SGI®
    2. CPU Interconnects: IB QDR
    3. MPI Library: SGI® MPI 2.07beta
    4. Processor: Intel® Xeon® E5-2670 @2.60GHz Turbo Enabled
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: SLES11 SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 431
  6. RAM per CPU: 8
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: USA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: HPC Applications Support
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 640 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 390
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 20 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 4320
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 40 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2286
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 80 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1338
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 160 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 781
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 320 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 522
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
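  The six Dell PowerEdge R720xd entries above (1 to 32 nodes, 20 to 640 total cores) form a strong-scaling series on the same 3 Vehicle Collision model. A rough sketch of the implied speedup and parallel efficiency, assuming the wall clock times are reported in seconds (the listing itself does not state the unit):

    # (cores, wall clock time) pairs copied from the Dell R720xd records above.
    runs = [(20, 4320), (40, 2286), (80, 1338), (160, 781), (320, 522), (640, 390)]

    base_cores, base_time = runs[0]
    for cores, time in runs:
        speedup = base_time / time                    # relative to the 20-core run
        efficiency = speedup / (cores / base_cores)   # speedup per added core
        print(f"{cores:4d} cores: speedup {speedup:5.2f}, efficiency {efficiency:5.1%}")

  At 640 cores the run is roughly 11x faster than at 20 cores, i.e. about 35% parallel efficiency at a 32x increase in core count.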
  1. Computer System: SGI® ICE-X
    1. Vendor: SGI®
    2. CPU Interconnects: IB FDR
    3. MPI Library: SGI® MPI 2.09-p11049
    4. Processor: Intel® Xeon® E5-2690 v2 @3.00GHz Turbo Enabled
    5. Number of nodes: 28
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 560 (Total CPU)
    9. Operating System: SLES11 SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 348
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: USA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: HPC Applications Support
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3314
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 1727
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 991
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 629
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 416
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
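  Every record in this listing follows the same thirteen-field layout, so the entries can be scraped into structured form mechanically. A hypothetical parser sketch (the regex and all names are mine, not from any published tool):

    import re

    FIELD = re.compile(r"^\s*\d+\.\s*([^:]+):\s*(.*)$")

    def parse_record(lines):
        # Map 'Field name' -> 'value' for one record's numbered lines.
        record = {}
        for line in lines:
            m = FIELD.match(line)
            if m:
                record[m.group(1).strip()] = m.group(2).strip()
        return record

    sample = [
        "  1. Computer System: SBI-7228R-T2F/B10DRT",
        "    5. Number of nodes: 16",
        "  5. Wall clock time: 416",
    ]
    print(parse_record(sample))
    # {'Computer System': 'SBI-7228R-T2F/B10DRT', 'Number of nodes': '16', 'Wall clock time': '416'}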
  1. Computer System: C5932256
    1. Vendor: ARD, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 8.1.1
    4. Processor: Intel Core i7 5960X
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RedHat Linux 6.5 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3259
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya Japan
  12. Submitted by: ARD
  13. Submitter Organization: CAE Team
  1. Computer System: C5932256
    1. Vendor: ARD, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 8.1.1
    4. Processor: Intel Core i7 5960X
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RedHat Linux 6.5 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 3793
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya Japan
  12. Submitted by: ARD
  13. Submitter Organization: CAE Team
  1. Computer System: C5932256
    1. Vendor: ARD, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 8.1.1
    4. Processor: Intel Core i7 5960X
    5. Number of nodes: 3
    6. Processors/Nodes: 1
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: RedHat Linux 6.5 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2716
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya Japan
  12. Submitted by: ARD
  13. Submitter Organization: CAE Team
  1. Computer System: C5932256
    1. Vendor: ARD, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 8.1.1
    4. Processor: Intel Core i7 5960X
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RedHat Linux 6.5 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 2054
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya Japan
  12. Submitted by: ARD
  13. Submitter Organization: CAE Team
  1. Computer System: STA-CAL-PERFE5-1
    1. Vendor: FRA-SYS
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MS-MPI V2
    4. Processor: Intel Xeon E5-1660 v3 @ 4.0GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Windows 7 Pro
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 9383
  6. RAM per CPU: 16
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 59300 Valenciennes - France
  12. Submitted by: Arnaud RINGEVAL
  13. Submitter Organization: CIMES
  1. Computer System: Celsius H730
    1. Vendor: Fujitsu Siemens
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: i7-4610M
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Windows 7 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971s R4.2.1
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 24711
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Germany
  12. Submitted by: Sergio
  13. Submitter Organization: Fujitsu Siemens
  1. Computer System: FusionServer X6800, XH620 V3
    1. Vendor: Huawei
    2. CPU Interconnects: Mellanox Technologies ConnectX-4 EDR InfiniBand
    3. MPI Library: Mellanox HPC-X v1.6
    4. Processor: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 14
    8. #Nodes x #Processors per Node x #Cores Per Processor = 224 (Total CPU)
    9. Operating System: CentOS Linux release 7.2.1511 (Core)
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R8.1.0
  4. Benchmark problem: 3 Vehicle Collision
  5. Wall clock time: 564
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Sunnyvale, CA
  12. Submitted by: Pengzhi Zhu
  13. Submitter Organization: HPC Advisory Council