BENCHMARK DETAILS

(Units: wall clock time in seconds; RAM per CPU in GB; RAM bus speed in MHz.)

  1. Computer System: CRAY XT3
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 1
    3. MPI Library: CRAY XT3 MPI
    4. Processor: AMD Single Core Opteron 2.4 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU; see the arithmetic sketch after this entry)
    9. Operating System: Catamount
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: car2car
  5. Wall clock time: 40935
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
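
  A note on field 8: the Total CPU value in every entry is simply the product of the three fields above it (nodes x processors per node x cores per processor). A minimal Python sketch of that arithmetic, using the CRAY XT3 entry above; the function and argument names are illustrative, not part of the submission format:

      def total_cpus(nodes, processors_per_node, cores_per_processor):
          # Total CPU = #Nodes x #Processors per Node x #Cores Per Processor
          return nodes * processors_per_node * cores_per_processor

      # CRAY XT3 entry above: 64 nodes x 1 processor/node x 1 core/processor
      assert total_cpus(64, 1, 1) == 64
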
  1. Computer System: CRAY XT3
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 1
    3. MPI Library: CRAY XT3 MPI
    4. Processor: AMD Single Core Opteron 2.4 GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Catamount
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: car2car
  5. Wall clock time: 22521
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT3
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 1
    3. MPI Library: CRAY XT3 MPI
    4. Processor: AMD Single Core Opteron 2.4 GHz
    5. Number of nodes: 256
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU; see the scaling sketch after this entry)
    9. Operating System: Catamount
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: car2car
  5. Wall clock time: 13120
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
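
  The three CRAY XT3 entries above run the same car2car model on 64, 128, and 256 CPUs, so their wall clock times give relative speedup and parallel efficiency directly. A small Python sketch, assuming the usual convention of taking the smallest run (64 CPUs) as the baseline; the dictionary only restates the times from these entries, in seconds:

      # Wall clock times from the three CRAY XT3 entries above (seconds).
      runs = {64: 40935, 128: 22521, 256: 13120}
      base_cpus, base_time = 64, runs[64]
      for cpus in sorted(runs):
          speedup = base_time / runs[cpus]         # relative to the 64-CPU run
          efficiency = speedup * base_cpus / cpus  # 1.0 would mean perfect scaling
          print(f"{cpus:4d} CPUs: {runs[cpus]:6d} s  speedup {speedup:.2f}x  efficiency {efficiency:.0%}")
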
  1. Computer System: CRAY XT3
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 1
    3. MPI Library: CRAY XT3 MPI
    4. Processor: AMD Dual Core Opteron 2.4 GHz
    5. Number of nodes: 256
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: Catamount
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 6763
  4. Benchmark problem: car2car
  5. Wall clock time: 9100
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 2.0.1
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970.6763
  4. Benchmark problem: car2car
  5. Wall clock time: 31213
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Intel
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.6 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Catamount
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: car2car
  5. Wall clock time: 19364
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.6 GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Catamount
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 5434a
  4. Benchmark problem: car2car
  5. Wall clock time: 11803
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.6 GHz
    5. Number of nodes: 256
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: Catamount
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 6763
  4. Benchmark problem: car2car
  5. Wall clock time: 8643
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.6 GHz
    5. Number of nodes: 512
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1024 (Total CPU)
    9. Operating System: Catamount
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp970, 6763
  4. Benchmark problem: car2car
  5. Wall clock time: 6944
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 393884
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 208692
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 107618
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 56722
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 31318
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 20129
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 56
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 224 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 15442
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: PowerEdge 1950
    1. Vendor: DELL
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MSMPI
    4. Processor: Intel Xeon Dual Core 5160 EM64T
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 114596
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: 59300 Valenciennes, France
  12. Submitted by: Arnaud RINGEVAL
  13. Submitter Organization: CIMES
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 512
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1024 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 6274
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 256
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 7740
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 10205
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CRAY XT4
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Seastar 2
    3. MPI Library: CRAY XT4 MPI
    4. Processor: AMD Dual Core Opteron 2.8 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: CNL
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600.398
  4. Benchmark problem: car2car
  5. Wall clock time: 16211
  6. RAM per CPU: 8
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: CRAY Inc.
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 98384
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 190423
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 51974
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 28675
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: S5000XSL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Redhat EL4 update 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: car2car
  5. Wall clock time: 14897
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Redhat EL4 update 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: car2car
  5. Wall clock time: 8691
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Xeon® Dual Core 5160 EM64T
    5. Number of nodes: 128
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: Redhat EL4 update 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: car2car
  5. Wall clock time: 6343
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: car2car
  5. Wall clock time: 47850
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: car2car
  5. Wall clock time: 26129
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: Xeon E5472
    1. Vendor: "white box"
    2. CPU Interconnects: GigE
    3. MPI Library: Intel 3.1.026
    4. Processor: Intel® Quad Core 3.00 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: car2car
  5. Wall clock time: 217928
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Hillsboro Oregon
  12. Submitted by: Tim Prince
  13. Submitter Organization: Intel SSG
  1. Computer System: S5000PAL
    1. Vendor: Intel
    2. CPU Interconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Xeon® Quad Core 2.8 GHz X5365
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat EL4 update 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: car2car
  5. Wall clock time: 55103
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 17738
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: Information Not Provided
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 13117
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 51330
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 97357
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Interconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: car2car
  5. Wall clock time: 189020
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: LNXI LS-1
    1. Vendor: Linux Networx, Inc. (LNXI)
    2. CPU Interconnects: Infiniband DDR
    3. MPI Library: ScaliMPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SLES 9.3
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: car2car
  5. Wall clock time: 16190
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Bluffdale, Utah
  12. Submitted by: Mike Long
  13. Submitter Organization: LNXI
  1. Computer System: LNXI LS-1
    1. Vendor: Linux Networx, Inc. (LNXI)
    2. CPU Interconnects: Infiniband DDR
    3. MPI Library: ScaliMPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SLES 9.3
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: car2car
  5. Wall clock time: 27277
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Bluffdale, Utah
  12. Submitted by: Mike Long
  13. Submitter Organization: Linux Networx, Inc. (LNXI)
  1. Computer System: LNXI LS-1
    1. Vendor: Linux Networx, Inc. (LNXI)
    2. CPU Interconnects: Infiniband DDR
    3. MPI Library: ScaliMPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SLES 9.3
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: car2car
  5. Wall clock time: 50410
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Bluffdale, Utah
  12. Submitted by: Mike Long
  13. Submitter Organization: Linux Networx, Inc. (LNXI)
  1. Computer System: LNXI LS-1
    1. Vendor: Linux Networx, Inc. (LNXI)
    2. CPU Interconnects: Infiniband DDR
    3. MPI Library: HPMPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SLES 9.3
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: car2car
  5. Wall clock time: 382170
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Bluffdale, Utah
  12. Submitted by: Mike Long
  13. Submitter Organization: Linux Networx, Inc. (LNXI)
  1. Computer System: LNXI LS-1
    1. Vendor: Linux Networx, Inc. (LNXI)
    2. CPU Interconnects: Infiniband DDR
    3. MPI Library: ScaliMPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SLES 9.3
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: car2car
  5. Wall clock time: 52635
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Bluffdale, Utah
  12. Submitted by: Mike Long
  13. Submitter Organization: LNXI
  1. Computer System: CA9658-4
    1. Vendor: ARD
    2. CPU Interconnects: InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel Core 2 Extreme QX9650 
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CentOS 5.1+
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: car2car
  5. Wall clock time: 80029
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: CA9212i
    1. Vendor: ARD
    2. CPU Interconnects: DDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 920
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: car2car
  5. Wall clock time: 92125
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Nagoya, Japan
  12. Submitted by: Takuya Ichikawa
  13. Submitter Organization: ARD
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 5366
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 8071
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 14321
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 28167
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 51324
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 103008
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 201888
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel Stoakley server
    1. Vendor: Intel
    2. CPU Interconnects: bus
    3. MPI Library: MPI 3.2.0.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 198759
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Intel Stoakley server
    1. Vendor: Intel
    2. CPU Interconnects: bus
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 198759
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG
  1. Computer System: Supermicro Nehalem Server
    1. Vendor: Intel
    2. CPU Interconnects: QPI
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5570
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 104789
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: QPI
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 111730
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 59685
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 31762
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 19252
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 10503
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 6105
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Interconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 4346
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost Enabled
    5. Number of nodes: 128
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1024 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 2316
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo ON
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 3130
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo ON
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 4852
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo ON
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 8418
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo ON
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 15720
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo ON
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 27231
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Interconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo ON
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 51899
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: CX1
    1. Vendor: Cray Inc.
    2. CPU Interconnects: IB DDR (mlx4_0/MT26418)
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® X5570 2.93GHz (Turbo ON)
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat EL 5.3 with Platform PCM 1.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 29017
  6. RAM per CPU: 12
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Montreal, Canada
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Cray CX Division
  1. Computer System: CX1
    1. Vendor: Cray Inc.
    2. CPU Interconnects: IB DDR (mlx4_0/MT26418)
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® X5570 2.93GHz (Turbo ON)
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat EL 5.3 with Platform PCM 1.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 54700
  6. RAM per CPU: 12
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Montreal, Canada
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Cray CX Division
  1. Computer System: CX1
    1. Vendor: Cray Inc.
    2. CPU Interconnects: IB DDR (mlx4_0/MT26418)
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® X5570 2.93GHz (Turbo ON)
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat EL 5.3 with Platform PCM 1.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 105052
  6. RAM per CPU: 12
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Montreal, Canada
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Cray CX Division
  1. Computer System: CX1
    1. Vendor: Cray Inc.
    2. CPU Interconnects: IB DDR (mlx4_0/MT26418)
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® X5570 2.93GHz (Turbo ON)
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat EL 5.3 with Platform PCM 1.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 16325
  6. RAM per CPU: 12
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Montreal, Canada
  12. Submitted by: John Benninghoff
  13. Submitter Organization: Cray CX Division
  1. Computer System: CA9212i
    1. Vendor: ARD
    2. CPU Interconnects: SDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 920
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: car2car
  5. Wall clock time: 50407
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Nagoya, Japan
  12. Submitted by: Takuya Ichikawa
  13. Submitter Organization: ARD
  1. Computer System: DCS CS23-SH
    1. Vendor: Dell
    2. CPU Interconnects: QDR Infiniband
    3. MPI Library: MSMPI
    4. Processor: 2.8GHz Intel Xeon E5462 Quad Core
    5. Number of nodes: 4
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Windows HPC 2008
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 56814
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 2111 South Oak Street, Champaign, Illinois
  12. Submitted by: Brian Kucic
  13. Submitter Organization: R Systems NA, inc.
  1. Computer System: DCS CS23-SH
    1. Vendor: Dell
    2. CPU Interconnects: QDR Infiniband
    3. MPI Library: MSMPI
    4. Processor: 2.8GHz Intel Xeon E5462 Quad Core
    5. Number of nodes: 2
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Windows HPC 2008
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 111717
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 2111 South Oak Street, Champaign, Illinois
  12. Submitted by: Brian Kucic
  13. Submitter Organization: R Systems NA, inc.
  1. Computer System: DCS CS23-SH
    1. Vendor: Dell
    2. CPU Interconnects: QDR Infiniband
    3. MPI Library: MSMPI
    4. Processor: 2.8GHz Intel Xeon E5462 Quad Core
    5. Number of nodes: 8
    6. Processors/Nodes: 8
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Windows HPC 2008
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 30487
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 2111 South Oak Street, Champaign, Illinois
  12. Submitted by: Brian Kucic
  13. Submitter Organization: R Systems NA, inc.
  1. Computer System: DCS CS23-SH
    1. Vendor: Dell
    2. CPU Interconnects: QDR Infiniband
    3. MPI Library: MSMPI
    4. Processor: 2.8GHz Intel Xeon E5462 Quad Core
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Windows HPC 2008
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 17806
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 2111 South Oak Street, Champaign, Illinois
  12. Submitted by: Brian Kucic
  13. Submitter Organization: R Systems NA, inc.
  1. Computer System: DCS CS23-SH
    1. Vendor: Dell
    2. CPU Interconnects: QDR Infiniband
    3. MPI Library: MSMPI
    4. Processor: 2.8GHz Intel Xeon E5462 Quad Core
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Windows HPC 2008
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 12121
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 2111 South Oak Street, Champaign, Illinois
  12. Submitted by: Brian Kucic
  13. Submitter Organization: R Systems NA, inc.
  1. Computer System: DCS CS23-SH
    1. Vendor: Dell
    2. CPU Interconnects: QDR Infiniband
    3. MPI Library: MSMPI
    4. Processor: 2.8GHz Intel Xeon E5462 Quad Core
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Windows HPC 2008
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 111717
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 2111 South Oak Street, Champaign, Illinois
  12. Submitted by: Brian Kucic
  13. Submitter Organization: R Systems NA, inc.
  1. Computer System: DCS CS23-SH
    1. Vendor: Dell
    2. CPU Interconnects: QDR Infiniband
    3. MPI Library: MSMPI
    4. Processor: 2.8GHz Intel Xeon E5462 Quad Core
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Windows HPC 2008
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 56814
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 2111 South Oak Street, Champaign, Illinois
  12. Submitted by: Brian Kucic
  13. Submitter Organization: R Systems NA, inc.
  1. Computer System: DCS CS23-SH
    1. Vendor: Dell
    2. CPU Interconnects: QDR Infiniband
    3. MPI Library: MSMPI
    4. Processor: 2.8GHz Intel Xeon E5462 Quad Core
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Windows HPC 2008
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 30487
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 2111 South Oak Street, Champaign, Illinois
  12. Submitted by: Brian Kucic
  13. Submitter Organization: R Systems NA, inc.
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Intel MPI 3.2.2
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 100850
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Intel MPI 3.2.2
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 53527
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Intel MPI 3.2.2
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 29427
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: Cisco UCS C460 M1
    1. Vendor: Cisco Systems
    2. CPU Interconnects: QPI
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon X7560
    5. Number of nodes: 1
    6. Processors/Nodes: 4
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Fedora Core 12
  2. Code Version: LS-DYNA
  3. Code Version Number: R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 41727
  6. RAM per CPU: 16
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Information Not Provided
  11. Location: San Jose
  12. Submitted by: Ven Immani
  13. Submitter Organization: Cisco Systems
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 768 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 4009
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 4567
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 7690
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 14781
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 23888
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 45828
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Interconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 89500
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: HP-MPI
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 113803
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: HP-MPI
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 58283
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: HP-MPI
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 30941
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: HP-MPI
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 17040
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: HP-MPI
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 9991
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: System x® iDataPlex™ dx360 M2
    1. Vendor: IBM
    2. CPU Interconnects: ConnectX Infiniband
    3. MPI Library: Microsoft MPI
    4. Processor: Intel® Xeon® Quad Core X5550
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Windows HPC Server 2008 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 120657
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Daniel Ghidali
  13. Submitter Organization: IBM/Microsoft
  1. Computer System: System x® iDataPlex™ dx360 M2
    1. Vendor: IBM
    2. CPU Interconnects: ConnectX Infiniband
    3. MPI Library: Microsoft MPI
    4. Processor: Intel® Xeon® Quad Core X5550
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Windows HPC Server 2008 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 62435
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Daniel Ghidali
  13. Submitter Organization: IBM/Microsoft
  1. Computer System: System x® iDataPlex™ dx360 M2
    1. Vendor: IBM
    2. CPU Interconnects: ConnectX Infiniband
    3. MPI Library: Microsoft MPI
    4. Processor: Intel® Xeon® Quad Core X5550
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Windows HPC Server 2008 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 33782
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Daniel Ghidali
  13. Submitter Organization: IBM/Microsoft
  1. Computer System: System x® iDataPlex™ dx360 M2
    1. Vendor: IBM
    2. CPU Interconnects: ConnectX Infiniband
    3. MPI Library: Microsoft MPI
    4. Processor: Intel® Xeon® Quad Core X5550
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Windows HPC Server 2008 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 18790
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Daniel Ghidali
  13. Submitter Organization: IBM/Microsoft
  1. Computer System: System x® iDataPlex™ dx360 M2
    1. Vendor: IBM
    2. CPU Interconnects: ConnectX Infiniband
    3. MPI Library: Microsoft MPI
    4. Processor: Intel® Xeon® Quad Core X5550
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Windows HPC Server 2008 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 10696
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Daniel Ghidali
  13. Submitter Organization: IBM/Microsoft
  1. Computer System: System x® iDataPlex™ dx360 M2
    1. Vendor: IBM
    2. CPU Interconnects: ConnectX Infiniband
    3. MPI Library: Microsoft MPI
    4. Processor: Intel® Xeon® Quad Core X5550
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Windows HPC Server 2008 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 5989
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Daniel Ghidali
  13. Submitter Organization: IBM/Microsoft
  1. Computer System: System x® iDataPlex™ dx360 M2
    1. Vendor: IBM
    2. CPU Interconnects: ConnectX Infiniband
    3. MPI Library: Microsoft MPI
    4. Processor: Intel® Xeon® Quad Core X5550
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: Windows HPC Server 2008 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 4196
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Daniel Ghidali
  13. Submitter Organization: IBM/Microsoft
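The seven iDataPlex dx360 M2 records above form one scaling series (8 to 512 cores on the same car2car problem), so scaling efficiency relative to the smallest run can be computed directly. An illustrative sketch, with wall clock values copied verbatim from those records:

    # cores -> wall clock time, from the iDataPlex dx360 M2 series above
    runs = {8: 120657, 16: 62435, 32: 33782, 64: 18790,
            128: 10696, 256: 5989, 512: 4196}

    base_cores, base_wall = 8, runs[8]
    for cores, wall in sorted(runs.items()):
        speedup = base_wall / wall    # relative to the 8-core run
        ideal = cores / base_cores    # perfect linear scaling
        print(f"{cores:>4} cores: {speedup:6.2f}x (efficiency {speedup / ideal:5.1%})")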
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack™
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 4005
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 30
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 360 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack™
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: car2car
  5. Wall clock time: 4113
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: PRIMERGY BX900
    1. Vendor: Fujitsu
    2. CPU Interconnects: QDR IB
    3. MPI Library: HP-MPI
    4. Processor: Intel® Xeon® X5677
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: Red Hat Enterprise Linux 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 11958
  6. RAM per CPU: 12
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Tsukuba, Japan
  12. Submitted by: Hisashi Kogawa
  13. Submitter Organization: Japan Automobile Research Institute
  1. Computer System: NEC LX system
    1. Vendor: NEC
    2. CPU Interconnects: QDR IB
    3. MPI Library: Platform MPI 8.0.0
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: RedHat5U4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 82363
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Tokyo
  12. Submitted by: Gregg Skinner
  13. Submitter Organization: NEC
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 21
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 252 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 4924
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 768 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 2464
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 85
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1020 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 2039
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1536 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 1936
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: HP Proliant® BL2X220c G6
    1. Vendor: HP
    2. CPU Interconnects: HP BLc 4X QDR Switch 1 Mellanox
    3. MPI Library: MS-MPI
    4. Processor: Intel® Xeon® Six Core X5650 2.66 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: Windows HPC Server 2008 R2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 99591
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Hari Reddy
  13. Submitter Organization: Microsoft
  1. Computer System: HP Proliant® BL2X220c G6
    1. Vendor: HP
    2. CPU Interconnects: HP BLc 4X QDR Switch 1 Mellanox
    3. MPI Library: MS-MPI
    4. Processor: Intel® Xeon® Six Core X5650 2.66 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Windows HPC Server 2008 R2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 50140
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Hari Reddy
  13. Submitter Organization: Microsoft
  1. Computer System: HP Proliant® BL2X220c G6
    1. Vendor: HP
    2. CPU Interconnects: HP BLc 4X QDR Switch 1 Mellanox
    3. MPI Library: MS-MPI
    4. Processor: Intel® Xeon® Six Core X5650 2.66 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Windows HPC Server 2008 R2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 27543
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Hari Reddy
  13. Submitter Organization: Microsoft
  1. Computer System: HP Proliant® BL2X220c G6
    1. Vendor: HP
    2. CPU Interconnects: HP BLc 4X QDR Switch 1 Mellanox
    3. MPI Library: MS-MPI
    4. Processor: Intel® Xeon® Six Core X5650 2.66 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: Windows HPC Server 2008 R2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 16165
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Hari Reddy
  13. Submitter Organization: Microsoft
  1. Computer System: HP Proliant® BL2X220c G6
    1. Vendor: HP
    2. CPU Interconnects: HP BLc 4X QDR Switch 1 Mellanox
    3. MPI Library: MS-MPI
    4. Processor: Intel® Xeon® Six Core X5650 2.66 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: Windows HPC Server 2008 R2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 8516
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Hari Reddy
  13. Submitter Organization: Microsoft
  1. Computer System: HP Proliant® BL2X220c G6
    1. Vendor: HP
    2. CPU Interconnects: HP BLc 4X QDR Switch 1 Mellanox
    3. MPI Library: MS-MPI
    4. Processor: Intel® Xeon® Six Core X5650 2.66 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: Windows HPC Server 2008 R2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 5247
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Redmond, WA
  12. Submitted by: Hari Reddy
  13. Submitter Organization: Microsoft
  1. Computer System: Rackable™ C2005-TY3
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Quad Core X5687 3.60GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1, SGI® Performance Suite
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 13664
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable™ C2112-4TY14
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 13112
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable™ C2112-4TY14
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 6537
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable™ C2112-4TY14
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 3814
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX™
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 128
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1536 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 1769
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX™
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 768 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 2361
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX™
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 3646
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX™
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 6091
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX™
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 12615
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX™
    1. Vendor: SGI
    2. CPU Interconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 22043
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 68543
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 6
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 12 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 45155
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 4843
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 5711
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 7545
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 6
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 10077
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 14991
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 3
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 18006
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 26100
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 50969
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
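Because every record follows the same fixed numbered-field template, the listing can be processed mechanically. A hypothetical minimal parser sketch (the field names mirror the listing; the sample block is abridged, and a real script would read the full file instead):

    import re

    sample = """\
    1. Computer System: Sandy Bridge-EP system
      1. Vendor: Intel
      5. Number of nodes: 1
    4. Benchmark problem: car2car
    5. Wall clock time: 50969"""

    record = {}
    for line in sample.splitlines():
        m = re.match(r"\s*\d+\. ([^:]+): (.*)", line)
        if m:
            record[m.group(1)] = m.group(2)

    print(record["Wall clock time"])  # -> 50969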
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 25227
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 13981
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 7442
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 4294
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 512 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 2823
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Interconnects: IB QDR
    3. MPI Library: Platform MPI 8.2.1
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1024 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: car2car
  5. Wall clock time: 2072
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
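
The six bullx B510 entries above form a strong-scaling series on identical hardware (32 through 1024 total cores of Xeon E5-2680). As a reading aid, the short Python sketch below recomputes speedup and parallel efficiency from the wall-clock times in those records; every number is copied from the entries themselves, with the 32-core run taken as the baseline.

    # Wall-clock times in seconds by total core count, copied from the six
    # bullx B510 car2car records above.
    runs = {32: 25227, 64: 13981, 128: 7442, 256: 4294, 512: 2823, 1024: 2072}

    BASE_CORES = 32
    base_time = runs[BASE_CORES]

    for cores in sorted(runs):
        speedup = base_time / runs[cores]            # relative to the 32-core run
        efficiency = speedup / (cores / BASE_CORES)  # fraction of ideal scaling
        print(f"{cores:5d} cores  {runs[cores]:6d} s  "
              f"speedup {speedup:6.2f}x  efficiency {efficiency:6.1%}")

At 1024 cores the run is about 12.2x faster than at 32 cores, roughly 38% of ideal, the usual pattern for car2car as the per-rank share of the model shrinks and communication begins to dominate.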
  1. Computer System: SGI® Rackable CH-C2112 cluster
    1. Vendor: SGI®
    2. CPU Interconnects: IB QDR
    3. MPI Library: SGI® MPI 2.07beta
    4. Processor: Intel® Xeon® E5-2670 @2.60GHz Turbo Enabled
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1024 (Total CPU)
    9. Operating System: SLES11 SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 1887
  6. RAM per CPU: 8
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: USA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: HPC Applications Support
  1. Computer System: SGI® ICE-X
    1. Vendor: SGI®
    2. CPU Interconnects: IB FDR
    3. MPI Library: SGI® MPI 2.09-p11049
    4. Processor: Intel® Xeon® E5-2690 v2 @3.00GHz Turbo Enabled
    5. Number of nodes: 100
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2000 (Total CPU)
    9. Operating System: SLES11 SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: car2car
  5. Wall clock time: 1207
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: USA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: HPC Applications Support
  1. Computer System: CRAY XC30
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Aries Interconnect
    3. MPI Library: CRAY MPI 6.2.0
    4. Processor: Intel Xeon E5-2690 v2 3.0 GHz
    5. Number of nodes: 150
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 3000 (Total CPU)
    9. Operating System: CRAY CLE 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R6.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 931
  6. RAM per CPU: 32
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: St Paul, US
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XC30
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Aries Interconnect
    3. MPI Library: CRAY MPI 6.2.0
    4. Processor: Intel Xeon E5-2690 v2 3.0 GHz
    5. Number of nodes: 100
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2000 (Total CPU)
    9. Operating System: CRAY CLE 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R6.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 1112
  6. RAM per CPU: 32
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: St Paul, US
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XC30
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Aries Interconnect
    3. MPI Library: CRAY MPI 6.2.0
    4. Processor: Intel Xeon E5-2690 v2 3.0 GHz
    5. Number of nodes: 50
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1000 (Total CPU)
    9. Operating System: CRAY CLE 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R6.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 1680
  6. RAM per CPU: 32
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: USA
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: CRAY XC30
    1. Vendor: CRAY Inc.
    2. CPU Interconnects: Aries Interconnect
    3. MPI Library: CRAY MPI 6.2.0
    4. Processor: Intel Xeon E5-2690 v2 3.0 GHz
    5. Number of nodes: 75
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1500 (Total CPU)
    9. Operating System: CRAY CLE 5.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R6.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 1315
  6. RAM per CPU: 32
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: USA
  12. Submitted by: Ting-Ting Zhu
  13. Submitter Organization: Cray Inc.
  1. Computer System: bullx blade system (B520)
    1. Vendor: BULL
    2. CPU Interconnects: Infiniband FDR
    3. MPI Library: Intel MPI 4.1.3.049
    4. Processor: Intel® Xeon® E5-2690v3 @2.60GHz Turbo Enabled
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R4.0
  2. Code Version: LS-DYNA
  3. Code Version Number: hybR7.1.1
  4. Benchmark problem: car2car
  5. Wall clock time: 3207
  6. RAM per CPU: 128
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Rafael Escovar
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B520)
    1. Vendor: BULL
    2. CPU Interconnects: Infiniband FDR
    3. MPI Library: Intel MPI 4.1.3.049
    4. Processor: Intel® Xeon® E5-2690v3 @2.60GHz Turbo Enabled
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 768 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R4.0
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.1
  4. Benchmark problem: car2car
  5. Wall clock time: 2076
  6. RAM per CPU: 5
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Rafael Escovar
  13. Submitter Organization: BULL
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: car2car
  5. Wall clock time: 30744
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: car2car
  5. Wall clock time: 16132
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: car2car
  5. Wall clock time: 9196
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: car2car
  5. Wall clock time: 4589
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: RedHat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: car2car
  5. Wall clock time: 2807
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Shared memory
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2697 v3 2.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 14
    8. #Nodes x #Processors per Node x #Cores Per Processor = 28 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 27980
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2697 v3 2.6 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 14
    8. #Nodes x #Processors per Node x #Cores Per Processor = 56 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 14932
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Information Not Provided
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2697 v3 2.6 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 14
    8. #Nodes x #Processors per Node x #Cores Per Processor = 112 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 8068
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2697 v3 2.6 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 14
    8. #Nodes x #Processors per Node x #Cores Per Processor = 224 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 4387
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2697 v3 2.6 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 14
    8. #Nodes x #Processors per Node x #Cores Per Processor = 448 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 2750
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant XL230a
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2698 v3 @2.3 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 16
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1024 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 1877
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant XL230a
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2698 v3 @2.3 GHz
    5. Number of nodes: 48
    6. Processors/Nodes: 2
    7. Cores Per Processor: 16
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1536 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 1473
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant XL230a
    1. Vendor: HP
    2. CPU Interconnects: Shared memory
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 v3 @2.6 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 30322
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant XL230a
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 v3 @2.6 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 16161
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant XL230a
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 v3 @2.6 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 8910
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant XL230a
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 v3 @2.6 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 4392
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2697 v3 2.6 GHz
    5. Number of nodes: 96
    6. Processors/Nodes: 2
    7. Cores Per Processor: 14
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2688 (Total CPU)
    9. Operating System: RHEL
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.2
  4. Benchmark problem: car2car
  5. Wall clock time: 1098
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Houston
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: Hewlett-Packard Company
  1. Computer System: STA-CAL-PERFE5-1
    1. Vendor: FRA-SYS
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: MS-MPI V2
    4. Processor: Intel Xeon E5-1660 v3 4.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Windows 7 Pro
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.1
  4. Benchmark problem: car2car
  5. Wall clock time: 99782
  6. RAM per CPU: 16
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: 59300 Valenciennes - France
  12. Submitted by: Arnaud RINGEVAL
  13. Submitter Organization: CIMES
  1. Computer System: ARD C51601T72IR21-25
    1. Vendor: ARD
    2. CPU Interconnects: QDR InfiniBand
    3. MPI Library: Platform MPI 9.1.0
    4. Processor: i7-5960X
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: CentOS 6.8
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.3
  4. Benchmark problem: car2car
  5. Wall clock time: 9859
  6. RAM per CPU: 64
  7. RAM Bus Speed: 2666
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Nagoya, Japan
  12. Submitted by: ARD CAE Team
  13. Submitter Organization: ARD Corporation
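
Since every entry in this listing repeats the same 13-field layout, the records can be harvested programmatically. The Python sketch below is a minimal example, assuming the plain-text field labels exactly as printed above; it pairs each record's total CPU count with its wall-clock time, in document order.

    import re

    # Field labels are copied from the listing itself; the parser assumes each
    # record prints its "... = N (Total CPU)" line before its "Wall clock time".
    TOTAL_RE = re.compile(r"=\s*(\d+)\s*\(Total CPU\)")
    WALL_RE = re.compile(r"Wall clock time:\s*(\d+)")

    def extract_runs(text):
        """Return (total_cpu, wall_clock_seconds) pairs in document order."""
        totals = [int(m.group(1)) for m in TOTAL_RE.finditer(text)]
        walls = [int(m.group(1)) for m in WALL_RE.finditer(text)]
        return list(zip(totals, walls))

Grouping the extracted pairs by system name would reproduce scaling summaries like the one sketched after the bullx B510 entries.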