BENCHMARK DETAILS

  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 3271
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Technical Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
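
Note: the "Total CPU" figure in item 8 of each entry is simply the product of the three counts above it. A minimal sketch in Python (field names are illustrative, not part of the submission format) that reproduces the arithmetic for the entry above:

    # Illustrative only: recompute the "Total CPU" field of a submission
    # from its node, processor, and core counts (1 x 2 x 2 = 4 above).
    def total_cpu(nodes: int, processors_per_node: int, cores_per_processor: int) -> int:
        return nodes * processors_per_node * cores_per_processor

    assert total_cpu(nodes=1, processors_per_node=2, cores_per_processor=2) == 4
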
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1714
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 953
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 596
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat EL 4.3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 457
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
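
The five CP3000/Linux entries above form a single scaling series (4, 8, 16, 32, and 64 cores). A hedged sketch computing speedup relative to the 4-core run, using the wall clock times reported above (units as given in the submissions):

    # Illustrative only: speedup of the HP CP3000/Linux series above,
    # relative to its 4-core run.
    times = {4: 3271, 8: 1714, 16: 953, 32: 596, 64: 457}
    base = times[4]
    for cores, t in sorted(times.items()):
        print(f"{cores:3d} cores: {base / t:.2f}x speedup")
    # Prints roughly 1.00x, 1.91x, 3.43x, 5.49x, and 7.16x.
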
  1. Computer System: NEXXUS 4080PT
    1. Vendor: Ciara Technologies/VXTECH
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: HP-MPI
    4. Processor: Dual-core Intel® Core™2 Duo 2.66GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 822
  6. RAM per CPU: 2048
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Burex Kojimachi 8F 3-5-2 Kojimachi, Chiyoda-ku, Tokyo
  12. Submitted by: Takahiko Tomuro
  13. Submitter Organization: Scalable Systems Co., Ltd.
  1. Computer System: CP3000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 4492
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 3014
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1539
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 833
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Windows CCS
    1. Vendor: HP
    2. CPU Interconnects: Voltaire InfiniBand DDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz DL140
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Microsoft Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 609
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 153
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 208
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 223
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 293
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 388
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 482
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 700
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 879
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1290
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1714
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 3242
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: Cambridge Cluster
    1. Vendor: ClusterVision/Dell
    2. CPU Interconnects: QLogic InfiniPath IB
    3. MPI Library: QLogic MPI 2.0
    4. Processor: Intel Dualcore Xeon 5160 3.0 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: ClusterVisionOS
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 4863
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Cambridge, UK
  12. Submitted by: Kevin Ball
  13. Submitter Organization: QLogic
  1. Computer System: LS-P
    1. Vendor: Linux Networx
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon 5160 3.0GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: SLES9
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 4212
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: LS-P
    1. Vendor: Linux Networx
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon 5160 3.0GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SLES9
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 2172
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: LS-P
    1. Vendor: Linux Networx
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon 5160 3.0GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SLES9
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1127
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: LS-P
    1. Vendor: Linux Networx
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon 5160 3.0GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SLES9
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 608
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: LS-P
    1. Vendor: Linux Networx
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon 5160 3.0GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SLES9
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 2885
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: LS-P
    1. Vendor: Linux Networx
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon 5160 3.0GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SLES9
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1499
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: LS-P
    1. Vendor: Linux Networx
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon 5160 3.0GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SLES9
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 811
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: LS-P
    1. Vendor: Linux Networx
    2. CPU Interconnects: InfiniBand SDR
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon 5160 3.0GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SLES9
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 551
  6. RAM per CPU: 4
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: Workstation Celsius V830
    1. Vendor: Fujitsu Siemens
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Opteron 250 2400MHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Win Xp 64
  2. Code Version: LS-DYNA
  3. Code Version Number: ls970.6763.342
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 7918
  6. RAM per CPU: 3
  7. RAM Bus Speed: 200
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: Italy
  12. Submitted by: Rosario Dotoli
  13. Submitter Organization: CETMA Consortium
  1. Computer System: Workst.Celsius V830 CETMA
    1. Vendor: Fujitsu Siemens
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Opteron 250 2400MHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Win Xp 64
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.7600.131
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 7734
  6. RAM per CPU: 3
  7. RAM Bus Speed: 200
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: Lecce, Italy
  12. Submitted by: Rosario Dotoli
  13. Submitter Organization: Consorzio CETMA
  1. Computer System: Celsius M430 Pent.4 CETMA
    1. Vendor: Fujitsu Siemens
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Pentium4 530, 3GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Win Xp
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6763.169
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 29720
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: Lecce, Italy
  12. Submitted by: Rosario Dotoli
  13. Submitter Organization: CETMA Consortium
  1. Computer System: Workst.Celsius V830 CETMA
    1. Vendor: Fujitsu Siemens
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Opteron 250 2400MHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Win Xp 64
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 15551
  6. RAM per CPU: 3
  7. RAM Bus Speed: 200
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: Lecce, Italy
  12. Submitted by: Rosario Dotoli
  13. Submitter Organization: CETMA Consortium
  1. Computer System: Celsius M430 Pent.4 CETMA
    1. Vendor: Fujitsu Siemens
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Pentium4 530, 3GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Win Xp
  2. Code Version: LS-DYNA
  3. Code Version Number: 970.6763
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 32629
  6. RAM per CPU: 2
  7. RAM Bus Speed: 266
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: Information Not Provided
  10. System Dedicated/Shared: Dedicated
  11. Location: Lecce, Italy
  12. Submitted by: Rosario Dotoli
  13. Submitter Organization: CETMA Consortium
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand OFED
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon 5160 3.0 GHz BL460c
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 557
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand OFED
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon 5160 3.0 GHz BL460c
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 823
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand OFED
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon 5160 3.0 GHz BL460c
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1536
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand OFED
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon 5160 3.0 GHz BL460c
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 3019
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000/Linux
    1. Vendor: HP
    2. CPU Interconnects: InfiniBand OFED
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon 5160 3.0 GHz BL460c
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 4431
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Clovertown Blades
    1. Vendor: Intel
    2. CPU Interconnects: InfiniBand DDR, OFED 1.2
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon Clovertown 2.66GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 3359
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Mellanox Technologies, Inc./Scali, Inc.
  1. Computer System: Clovertown Blades
    1. Vendor: Intel
    2. CPU Interconnects: InfiniBand DDR, OFED 1.2
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon Clovertown 2.66GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1709
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Mellanox Technologies, Inc./Scali, Inc.
  1. Computer System: Clovertown Blades
    1. Vendor: Intel
    2. CPU Interconnects: InfiniBand DDR, OFED 1.2
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon Clovertown 2.66GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 918
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Mellanox Technologies, Inc./Scali, Inc.
  1. Computer System: Clovertown Blades
    1. Vendor: Intel
    2. CPU Interconnects: InfiniBand DDR, OFED 1.2
    3. MPI Library: MPI Connect 5.4
    4. Processor: Intel Xeon Clovertown 2.66GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 529
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Oslo, Norway
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Mellanox Technologies, Inc./Scali, Inc.
  1. Computer System: CA160ⅡT
    1. Vendor: ARD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Extreme 2.93GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Windows
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 4265
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: sgi-mpt-1.15-sgi501r
    4. Processor: Intel Itanium 2 1600MHz
    5. Number of nodes: 1
    6. Processors/Nodes: 32
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 333
  6. RAM per CPU: 2
  7. RAM Bus Speed: 531
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: SGI Applications Engineering
  1. Computer System: CA160ⅡT
    1. Vendor: ARD
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Information Not Provided
    4. Processor: Dual-core Intel® Core™2 Extreme 2.93GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Windows
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 6509
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Japan-Nagoya
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Interconnects: NUMAlink
    3. MPI Library: sgi-mpt-1.15-sgi501r
    4. Processor: Intel Itanium 2 1600MHz
    5. Number of nodes: 1
    6. Processors/Nodes: 64
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 279
  6. RAM per CPU: 2
  7. RAM Bus Speed: 531
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 404
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 449
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 805
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1492
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 2911
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node x #Cores Per Processor = 1 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 8081
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx DDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 359
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 336
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Interconnects: Voltaire HCA 410Ex InfiniHost III Lx DDR, OFED v1.2
    3. MPI Library: ScaliMPIConnect5.4.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 327
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, CA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 4482
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 2995
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1528
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 815
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 473
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Interconnects: ConnectX
    3. MPI Library: HP-MPI
    4. Processor: Intel Dualcore Xeon 3.0 GHz BL460c
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Red Hat EL 4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 337
  6. RAM per CPU: 2
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: High Performance Computing Division
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: Mellanox Cluster Center - Vulcan cluster
    1. Vendor: AMD
    2. CPU Interconnects: SHM
    3. MPI Library: Scali MPI Connect5.5
    4. Processor: Barcelona 2.0GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1817
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mellanox Santa Clara CA
  12. Submitted by: Hakon Bugge, Scali
  13. Submitter Organization: Scali, Inc.
  1. Computer System: Mellanox Cluster Center - Vulcan cluster
    1. Vendor: AMD
    2. CPU Interconnects: Mellanox InfiniBand ConnectX DDR, OFED 1.2
    3. MPI Library: Scali MPI Connect5.5
    4. Processor: Barcelona 2.0GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1001
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mellanox Santa Clara CA
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: Mellanox Cluster Center - Vulcan cluster
    1. Vendor: AMD
    2. CPU Interconnects: Mellanox InfiniBand ConnectX DDR, OFED 1.2
    3. MPI Library: Scali MPI Connect5.5
    4. Processor: Barcelona 2.0GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 494
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mellanox Santa Clara CA
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: Mellanox Cluster Center - Vulcan cluster
    1. Vendor: AMD
    2. CPU Interconnects: Mellanox InfiniBand ConnectX DDR, OFED 1.2
    3. MPI Library: Scali MPI Connect5.5
    4. Processor: Barcelona 2.0GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 64 (Total CPU)
    9. Operating System: RHEL4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 377
  6. RAM per CPU: 1
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mellanox Santa Clara CA
  12. Submitted by: Hakon Bugge
  13. Submitter Organization: Scali, Inc.
  1. Computer System: X 3455
    1. Vendor: IBM
    2. CPU Inerconnects: shared memory
    3. MPI Library: Information Not Provided
    4. Processor: AMD Barcelona 1.9 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SLES 10 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.2.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1940
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dallas
  12. Submitted by: Hari Reddy
  13. Submitter Organization: IBM
  1. Computer System: System X 3455
    1. Vendor: IBM
    2. CPU Inerconnects: shared memory
    3. MPI Library: Information Not Provided
    4. Processor: AMD Barcelona 1.9 GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SLES 10 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.2.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 3167
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dallas
  12. Submitted by: Hari Reddy
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Inerconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 497
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Inerconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 827
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Inerconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1542
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: HS21XM BladeCenter
    1. Vendor: IBM
    2. CPU Inerconnects: InfiniBand
    3. MPI Library: Intel MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 4 (Total CPU)
    9. Operating System: RHEL 4.0 UPDATE 4
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_7600.2.1116
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 3003
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Poughkeepsie
  12. Submitted by: Guangye Li
  13. Submitter Organization: IBM
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Inerconnects: NUMAlink
    3. MPI Library: sgi-mpt-1.15-sgi501r
    4. Processor: Intel Itanium 2 1600MHz Montecito
    5. Number of nodes: 1
    6. Processors/Nodes: 32
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 307
  6. RAM per CPU: 2
  7. RAM Bus Speed: 531
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Inerconnects: NUMAlink
    3. MPI Library: sgi-mpt-1.15-sgi501r
    4. Processor: Intel Itanium 2 1600MHz Montecito
    5. Number of nodes: 1
    6. Processors/Nodes: 128
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 174
  6. RAM per CPU: 2
  7. RAM Bus Speed: 531
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire HCA 410Ex InfiniHost III Lx DDR, OFED v1.2
    3. MPI Library: Intel-MPI 3.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 349
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire HCA 410Ex InfiniHost III Lx DDR, OFED v1.2
    3. MPI Library: Intel-MPI 3.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 275
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire HCA 410Ex InfiniHost III Lx DDR, OFED v1.2
    3. MPI Library: Intel-MPI 3.1
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 267
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Inerconnects: NUMAlink
    3. MPI Library: sgi-mpt-1.15-sgi501r
    4. Processor: Intel Itanium 2 1600MHz Montecito
    5. Number of nodes: 1
    6. Processors/Nodes: 64
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 217
  6. RAM per CPU: 2
  7. RAM Bus Speed: 531
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Inerconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1442
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Inerconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 771
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Inerconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 16
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 439
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: S3000PAL
    1. Vendor: Intel
    2. CPU Inerconnects: Infiniband
    3. MPI Library: Intel MPI 3.0.043
    4. Processor: Intel® Core 2 Extreme X6800
    5. Number of nodes: 32
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat EL4 update 2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 288
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel
  1. Computer System: Altix 1200
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire HCA 410Ex InfiniHost III Lx SDR, OFED v1.2
    3. MPI Library: Scali MPI Connect 5.5.0
    4. Processor: Intel 5160 Woodcrest DC 3.0GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10 (x86_64)
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 310
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: HPC6000-XC832C
    1. Vendor: HPC Systems Inc.
    2. CPU Inerconnects: Gigabit Ethernet
    3. MPI Library: MSMPI
    4. Processor: Dual-core Intel® Core 2 Duo 2.66GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Windows Compute Cluster Server 2003
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.398
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1167
  6. RAM per CPU: 8192
  7. RAM Bus Speed: 533
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: TIME24 Building 10F North 2-45 Aomi, Kotou-ku, Tokyo
  12. Submitted by: Morihisa Uchida
  13. Submitter Organization: HPC Systems Inc.
  1. Computer System: Xeon X5365
    1. Vendor: "white box"
    2. CPU Inerconnects: GigE
    3. MPI Library: HP
    4. Processor: Intel(r) Quad Core 3.00Ghz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 2214
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Hillsboro Oregon
  12. Submitted by: Tim Prince
  13. Submitter Organization: Intel SSG
  1. Computer System: Xeon E5472
    1. Vendor: "white box"
    2. CPU Inerconnects: GigE
    3. MPI Library: Intel 3.1.026
    4. Processor: Intel(r) Quad Core 3.00Ghz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL4.4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1711
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Hillsboro Oregon
  12. Submitted by: Tim Prince
  13. Submitter Organization: Intel SSG
  1. Computer System: LNXI LS-1
    1. Vendor: Linux Networx Inc. (LNXI)
    2. CPU Inerconnects: Infiniband DDR
    3. MPI Library: Scali MPI
    4. Processor: Intel Xeon 5160
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SLES 9.3
  2. Code Version: LS-DYNA
  3. Code Version Number: Version ls971.7
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 230
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Bluffdale, Utah
  12. Submitted by: Mike Long
  13. Submitter Organization: LNXI
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Inerconnects: NUMAlink
    3. MPI Library: SGI MPT 1.15
    4. Processor: Intel Itanium DC Montvale 1.669Ghz
    5. Number of nodes: 1
    6. Processors/Nodes: 32
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971R2.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 296
  6. RAM per CPU: 2
  7. RAM Bus Speed: 666
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Inerconnects: NUMAlink
    3. MPI Library: SGI MPT 1.15
    4. Processor: Intel Itanium DC Montvale 1.669Ghz
    5. Number of nodes: 1
    6. Processors/Nodes: 64
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971R2.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 205
  6. RAM per CPU: 2
  7. RAM Bus Speed: 666
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix 4700
    1. Vendor: SGI
    2. CPU Inerconnects: NUMAlink
    3. MPI Library: SGI MPT 1.17
    4. Processor: Intel Itanium DC Montvale 1.669Ghz
    5. Number of nodes: 1
    6. Processors/Nodes: 128
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP3
  2. Code Version: LS-DYNA
  3. Code Version Number: 971R2.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 166
  6. RAM per CPU: 2
  7. RAM Bus Speed: 666
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Lattice
    1. Vendor: Self built
    2. CPU Inerconnects: Gigabit Ethernet
    3. MPI Library: HP-MPI 2.02.05.01
    4. Processor: Intel Core 2 Duo E6850 3.0GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CentOS Linux 5.0
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1039
  6. RAM per CPU: 3072
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Materials Engineering Laboratory, University of Oulu
  12. Submitted by: David Martin
  13. Submitter Organization: University of Oulu
  1. Computer System: Altix XE1200/Windows CCS
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire Infiniband DDR
    3. MPI Library: MSMPI 1.0.0676.0
    4. Processor: Intel Xeon 5160 DC 3.0GHz (Woodcrest)
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Windows CCS (Server 2003 Std x64 SP2 v5.2.3790)
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1647
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Joseph Michaud
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1200/Windows CCS
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire Infiniband DDR
    3. MPI Library: MSMPI 1.0.0676.0
    4. Processor: Intel Xeon 5160 DC 3.0GHz (Woodcrest)
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Windows CCS (Server 2003 Std x64 SP2 v5.2.3790)
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1001
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Joseph Michaud
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1200/Windows CCS
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire Infiniband DDR
    3. MPI Library: MSMPI 1.0.0676.0
    4. Processor: Intel Xeon 5160 DC 3.0GHz (Woodcrest)
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Windows CCS (Server 2003 Std x64 SP2 v5.2.3790)
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 3070
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Joseph Michaud
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1200/Windows CCS
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire Infiniband DDR
    3. MPI Library: MSMPI 1.0.0676.0
    4. Processor: Intel Xeon 5160 DC 3.0GHz (Woodcrest)
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 2 (Total CPU)
    9. Operating System: Windows CCS (Server 2003 Std x64 SP2 v5.2.3790)
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 4757
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Joseph Michaud
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1200/Windows CCS
    1. Vendor: SGI
    2. CPU Inerconnects: Voltaire Infiniband DDR
    3. MPI Library: MSMPI 1.0.0676.0
    4. Processor: Intel Xeon 5160 DC 3.0GHz (Woodcrest)
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 1
    8. #Nodes x #Processors per Node #Cores Per Processor = 1 (Total CPU)
    9. Operating System: Windows CCS (Server 2003 Std x64 SP2 v5.2.3790)
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.7600.1224
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 8729
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Joseph Michaud
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1462
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 807
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 433
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 230
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 121
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox ConnectX IB HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon E5472 3.00GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP4
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 90
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
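The six Altix ICE8200EX entries above form one scaling series, from 8 to 256 total CPUs. As a rough, unofficial summary of that series, the sketch below computes speedup and parallel efficiency relative to the 8-CPU run; the (CPU count, wall clock) pairs are copied from those entries, and the speedup/efficiency definitions are the conventional ones, not anything defined by this listing.

    # Scaling summary for the Altix ICE8200EX series above.
    # Pairs are (total CPUs, reported wall clock time), copied from the six entries.
    runs = [(8, 1462), (16, 807), (32, 433), (64, 230), (128, 121), (256, 90)]

    base_cpus, base_time = runs[0]
    for cpus, time in runs:
        speedup = base_time / time                 # relative to the 8-CPU run
        efficiency = speedup / (cpus / base_cpus)  # fraction of ideal linear scaling
        print(f"{cpus:4d} CPUs  wall clock {time:5d}  speedup {speedup:5.2f}  efficiency {efficiency:5.1%}")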
  1. Computer System: XW 9400 CETMA
    1. Vendor: Hp
    2. CPU Inerconnects: 1GHz
    3. MPI Library: Information Not Provided
    4. Processor: AMD Opteron 2216 dual-core 2.4GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Win Xp 64
  2. Code Version: LS-DYNA
  3. Code Version Number: 971 7600.131
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 6125
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Dedicated
  11. Location: Lecce, Italy
  12. Submitted by: Rosario Dotoli
  13. Submitter Organization: CETMA Consortium
  1. Computer System: 4 Opteron DualCore CETMA
    1. Vendor: Workstation E8046
    2. CPU Inerconnects: Socket F
    3. MPI Library: None
    4. Processor: AMD Opteron Dual-Core 8218 2.6GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 4
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: win 64
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971s R2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 4560
  6. RAM per CPU: 2
  7. RAM Bus Speed: 667
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Lecce - Italy
  12. Submitted by: Rosario Dotoli
  13. Submitter Organization: CETMA Consortium
  1. Computer System: BoxClusterML-DYNA
    1. Vendor: BoxCluster (HPC Systems Inc.)
    2. CPU Inerconnects: Gigabit Ethernet
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon 3110 3.0GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: 971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1410
  6. RAM per CPU: 8
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Taipei, Taiwan
  12. Submitted by: Akira Sano
  13. Submitter Organization: BoxCluster (HPC Systems Inc.)
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4Ghz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 4 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 2071
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4Ghz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1091
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4Ghz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 631
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4Ghz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 383
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300 XE250/XE320
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox InfiniHost III Lx HCA DDR Fabric OFED v1.3
    3. MPI Library: SGI MPT 1.20
    4. Processor: Intel Xeon X5272 DC 3.4Ghz, 1600MHz FSB, 800MHz D
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 5SP5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 276
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Mountain View, Ca
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Inerconnects: InfiniBand Topspin 270 SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon X5272 3.40GHz DL160
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Red Hat EL 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 2249
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Scalable Computing & Infrastructure
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Inerconnects: InfiniBand Topspin 270 SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon X5272 3.40GHz DL160
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Red Hat EL 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1155
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Scalable Computing & Infrastructure
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Inerconnects: InfiniBand Topspin 270 SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon X5272 3.40GHz DL160
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Red Hat EL 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 605
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Scalable Computing & Infrastructure
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: CP3000
    1. Vendor: HP
    2. CPU Inerconnects: InfiniBand Topspin 270 SDR
    3. MPI Library: HP-MPI
    4. Processor: Intel Xeon X5272 3.40GHz DL160
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Red Hat EL 4.6
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971R3.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 365
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Scalable Computing & Infrastructure
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: HP
  1. Computer System: NEXXUS 4080ML
    1. Vendor: VXTECH
    2. CPU Inerconnects: Infiniband SDR
    3. MPI Library: OpenMPI 1.2.5 Xeon64
    4. Processor: Xeon 3130 3GHz Dual Core
    5. Number of nodes: 8
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Linux Red Hat 4 upd 4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971sR3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 586
  6. RAM per CPU: 8
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: St-Laurent, CANADA
  12. Submitted by: David Giorgi
  13. Submitter Organization: VXTECH
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Inerconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: hpmpi
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1520
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Inerconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: hpmpi
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 802
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Inerconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: hpmpi
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 463
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Inerconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: hpmpi
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 296
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Inerconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: hpmpi
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 195
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Bull Novascale R422-E1
    1. Vendor: BULL
    2. CPU Inerconnects: Voltaire ConnectX IB Gen2 HCA DDR Fabric OFED v1.3
    3. MPI Library: hpmpi
    4. Processor: Intel Xeon E5462 2.80GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 256 (Total CPU)
    9. Operating System: Linux Bull Advanced Server 5v1.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 176
  6. RAM per CPU: 2
  7. RAM Bus Speed: 800
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: CA9212i
    1. Vendor: ARD
    2. CPU Inerconnects: DDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 920
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 798
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Nagoya, Japan
  12. Submitted by: Takuya Ichikawa
  13. Submitter Organization: ARD
  1. Computer System: CA9212i
    1. Vendor: ARD
    2. CPU Inerconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: Intel i7 920
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 4 (Total CPU)
    9. Operating System: CentOS 5.1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.7600
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1565
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Shared
  11. Location: Nagoya, Japan
  12. Submitted by: Takuya Ichikawa
  13. Submitter Organization: ARD
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Inerconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 249
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Inerconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 451
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Inerconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 806
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Inerconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1572
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1560SF system
    1. Vendor: Intel
    2. CPU Inerconnects: Information Not Provided
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 129
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel Stoakley server
    1. Vendor: Intel
    2. CPU Inerconnects: bus
    3. MPI Library: MPI 3.2.0.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1483
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Intel Stoakley server
    1. Vendor: Intel
    2. CPU Inerconnects: bus
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5482
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1483
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Inerconnects: QPI
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 838
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Inerconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 455
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Inerconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 257
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Inerconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 168
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Board X8DTN qual
    1. Vendor: Intel
    2. CPU Inerconnects: IB
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5560
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 113
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Supermicro Nehalem Server
    1. Vendor: Intel
    2. CPU Inerconnects: QPI
    3. MPI Library: Intel MPI 3.2.011
    4. Processor: Intel® Xeon® Quad Core X5570
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 760
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: SSG/ASE
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 69
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 98
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 157
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 229
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo Boost E
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 411
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE8200EX
    1. Vendor: SGI
    2. CPU Inerconnects: IP95 Blades with Mellanox ConnectX IB HCA DDR Fabric OFED v1.4
    3. MPI Library: SGI MPT 1.23
    4. Processor: Intel® Xeon® Quad Core X5570 2.93GHz Turbo ON
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 512 (Total CPU)
    9. Operating System: SUSE Linux Enterprise Server 10, SGI ProPack 6SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 65
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1066
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Inerconnects: Gigabit Ethernet
    3. MPI Library: OpenMPI 1.3.3
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 731
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Inerconnects: Gigabit Ethernet
    3. MPI Library: Intel MPI 3.2.2
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 446
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: ThinkStation D20 w/ GigaE
    1. Vendor: Lenovo
    2. CPU Inerconnects: Gigabit Ethernet
    3. MPI Library: Intel MPI 3.2.2
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: CentOS 5.3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 339
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Beijing
  12. Submitted by: Jason Hu
  13. Submitter Organization: Lenovo
  1. Computer System: Z800
    1. Vendor: HP
    2. CPU Inerconnects: Gigabit Ethernet
    3. MPI Library: openmpi-1.4.1
    4. Processor: Intel Xeon W5580 3.2GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 5.4
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R4.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 1016
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Contern, Luxembourg
  12. Submitted by: Edmund Marx
  13. Submitter Organization: IEE S.A.
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Inerconnects: QDR IB
    3. MPI Library: Information Not Provided
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 218
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Inerconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 356
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Inerconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 12 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 639
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Intel SR1600UR system
    1. Vendor: Intel
    2. CPU Inerconnects: QDR IB
    3. MPI Library: Intel MPI 3.2.1
    4. Processor: Intel® Xeon® Six Core X5670
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 96 (Total CPU)
    9. Operating System: Redhat5U3
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 137
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Dupont, WA
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: hpmpi
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 8 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 803
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: hpmpi
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 443
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: hpmpi
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 259
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: hpmpi
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 163
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade cluster
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: hpmpi
    4. Processor: Intel® Xeon® Quad Core X5560 @2.80GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: bullx cluster suite
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 108
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: Intel MPI 3.2.1.009
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 11
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 132 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack™
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 105
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: Intel MPI 4.0.0.027
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 10
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 120 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack™
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 111
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack™
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 125
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack™
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 197
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 24 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack™
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 334
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix XE1300
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies MT26428 ConnectX® IB QDR
    3. MPI Library: SGI MPT 2.02
    4. Processor: Intel® Xeon® Six Core X5670 2.93GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 12 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack™
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 586
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix UV10
    1. Vendor: SGI
    2. CPU Inerconnects: QPI
    3. MPI Library: Platform MPI 7.1
    4. Processor: Intel® Xeon® 8 core X7560 2.27GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 4
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack6SP
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 319
  6. RAM per CPU: 8
  7. RAM Bus Speed: 1067
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 12 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 568
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 24 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 322
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 183
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 113
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 144 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 83
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 21
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 252 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 62
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 42
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 504 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 53
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 84
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 1008 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 47
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 10
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 120 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 95
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 5
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 60 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 155
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 3
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 36 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 233
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.01
    4. Processor: Intel® Xeon® Six Core X5680 3.33GHz
    5. Number of nodes: 6
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 72 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11, SGI® ProPack
  2. Code Version: LS-DYNA
  3. Code Version Number: 971_s_R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 135
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2005-TY3
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Quad Core X5687 3.60GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1, SGI® Perfor
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 123
  6. RAM per CPU: 6
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 384 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 63
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 192 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 78
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 117
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 188
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Rackable C2112-4TY14
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR MT26428
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5675 3.07GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 12 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 580
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 96
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 1152 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 43
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 12 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 568
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 192 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 70
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 24 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 316
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 384 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 52
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 48 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 179
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 64
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 768 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 46
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Altix ICE 8400EX
    1. Vendor: SGI
    2. CPU Inerconnects: Mellanox® Technologies ConnectX-2® IB QDR
    3. MPI Library: SGI MPT 2.03
    4. Processor: Intel® Xeon® Hexa Core X5690 3.47GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 6
    8. #Nodes x #Processors per Node #Cores Per Processor = 96 (Total CPU)
    9. Operating System: SUSE® Linux® Enterprise Server 11 SP1
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 110
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1333
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Chippewa Falls, WI
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: Applications Engineering
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Inerconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 3
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 6 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 738
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: Ci11T
    1. Vendor: ARD
    2. CPU Inerconnects: QDR InfiniBand
    3. MPI Library: HP-MPI
    4. Processor: Intel i7 2700K
    5. Number of nodes: 6
    6. Processors/Nodes: 1
    7. Cores Per Processor: 2
    8. #Nodes x #Processors per Node #Cores Per Processor = 12 (Total CPU)
    9. Operating System: CentOS 5.5
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R4.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 405
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Nagoya, Japan
  12. Submitted by: ARD
  13. Submitter Organization: ARD
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 16 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 359
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 32 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 214
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 144
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 101
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: bullx blade system (B510)
    1. Vendor: BULL
    2. CPU Inerconnects: IB QDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz Turbo Enabled
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 256 (Total CPU)
    9. Operating System: bullx supercomputer suite AE R2.2
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 88
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: France
  12. Submitted by: Patrice Calegari
  13. Submitter Organization: BULL
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Inerconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 256 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 64
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Inerconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 12
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 192 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 75
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Inerconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 128 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 87
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Inerconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 6
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 96 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 107
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Inerconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node #Cores Per Processor = 64 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 139
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 3
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 176
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 32 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 213
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: Sandy Bridge-EP system
    1. Vendor: Intel
    2. CPU Interconnects: IB FDR
    3. MPI Library: Intel MPI 4.0.3.008
    4. Processor: Intel® Xeon® E5-2680 @2.70GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 16 (Total CPU)
    9. Operating System: RHEL 6.1
  2. Code Version: LS-DYNA
  3. Code Version Number: mpp971.s.R321
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 374
  6. RAM per CPU: 2
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Swindon, UK
  12. Submitted by: Nick Meng
  13. Submitter Organization: Intel/SSG
  1. Computer System: SGI® Rackable CH-C2112 cluster
    1. Vendor: SGI®
    2. CPU Interconnects: IB QDR
    3. MPI Library: SGI® MPI 2.07 beta
    4. Processor: Intel® Xeon® E5-2670 @2.60GHz Turbo Enabled
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 8
    8. #Nodes x #Processors per Node x #Cores Per Processor = 256 (Total CPU)
    9. Operating System: SLES11 SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 60
  6. RAM per CPU: 8
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: USA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: HPC Applications Support
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 640 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 46
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 320 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 55
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 160 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 72
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 80 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 113
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 40 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 194
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: Dell PowerEdge R720xd
    1. Vendor: Dell
    2. CPU Interconnects: Mellanox Technologies Connect-IB FDR InfiniBand
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2680 V2 @2.80GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 20 (Total CPU)
    9. Operating System: RHEL 6.2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 326
  6. RAM per CPU: 4
  7. RAM Bus Speed: 1600
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Sunnyvale, CA
  12. Submitted by: Pak Lui
  13. Submitter Organization: HPC Advisory Council
  1. Computer System: SGI® ICE-X
    1. Vendor: SGI®
    2. CPU Interconnects: IB FDR
    3. MPI Library: SGI® MPI 2.09-p11049
    4. Processor: Intel® Xeon® E5-2690 v2 @3.00GHz Turbo Enabled
    5. Number of nodes: 32
    6. Processors/Nodes: 2
    7. Cores Per Processor: 10
    8. #Nodes x #Processors per Node x #Cores Per Processor = 640 (Total CPU)
    9. Operating System: SLES11 SP2
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971.R3.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 37
  6. RAM per CPU: 3
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: USA
  12. Submitted by: Olivier Schreiber
  13. Submitter Organization: HPC Applications Support
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1.0
    4. Processor: Intel Xeon E5-2697 v2 @ 2.7 GHz Turbo On
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 326
  6. RAM per CPU: 5
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: U.S.A.
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: High Performance Computing
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Information Not Provided
    4. Processor: Intel Xeon E5-2697 v2 @ 2.7 GHz Turbo On
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 197
  6. RAM per CPU: 5
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: U.S.A.
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: High Performance Computing
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1.0
    4. Processor: Intel Xeon E5-2697 v2 @ 2.7 GHz Turbo On
    5. Number of nodes: 3
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 72 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 146
  6. RAM per CPU: 5
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: U.S.A.
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: High Performance Computing
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1.0
    4. Processor: Intel Xeon E5-2697 v2 @ 2.7 GHz Turbo On
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 118
  6. RAM per CPU: 5
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: U.S.A.
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: High Performance Computing
  1. Computer System: HP ProLiant BL460c
    1. Vendor: HP
    2. CPU Interconnects: Mellanox IB FDR
    3. MPI Library: Platform MPI 9.1.0
    4. Processor: Intel Xeon E5-2697 v2 @ 2.7 GHz Turbo On
    5. Number of nodes: 5
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 120 (Total CPU)
    9. Operating System: RHEL 6.5
  2. Code Version: LS-DYNA
  3. Code Version Number: R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 103
  6. RAM per CPU: 5
  7. RAM Bus Speed: 1866
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: U.S.A.
  12. Submitted by: Yih-Yih Lin
  13. Submitter Organization: High Performance Computing
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 1
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 24 (Total CPU)
    9. Operating System: Red Hat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 247
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 2
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 48 (Total CPU)
    9. Operating System: Red Hat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 155
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 4
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 96 (Total CPU)
    9. Operating System: Red Hat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 91
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 192 (Total CPU)
    9. Operating System: Red Hat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 61
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: SBI-7228R-T2F/B10DRT
    1. Vendor: Super Micro Computer, Inc.
    2. CPU Interconnects: Mellanox QDR IB
    3. MPI Library: Platform MPI 9.1
    4. Processor: Intel Xeon E5-2690 V3 @2.60GHz
    5. Number of nodes: 16
    6. Processors/Nodes: 2
    7. Cores Per Processor: 12
    8. #Nodes x #Processors per Node x #Cores Per Processor = 384 (Total CPU)
    9. Operating System: Red Hat Linux 6.6 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R7.1.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 51
  6. RAM per CPU: 3
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: San Jose
  12. Submitted by: Nihir Parikh
  13. Submitter Organization: System Performance Team
  1. Computer System: Blue Eyes
    1. Vendor: SIMWARE Inc.
    2. CPU Interconnects: Gigabit Ethernet
    3. MPI Library: Platform MPI
    4. Processor: Intel Xeon Processor E3-1231v3
    5. Number of nodes: 2
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 8 (Total CPU)
    9. Operating System: CentOS 6.7
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R8.0.0
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 915
  6. RAM per CPU: 4
  7. RAM Bus Speed: 2500
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Dedicated
  11. Location: Taipei, Taiwan
  12. Submitted by: Brian Hsiao
  13. Submitter Organization: SIMWARE Inc.
  1. Computer System: Celsius H730
    1. Vendor: Fujitsu Siemens
    2. CPU Interconnects: Information Not Provided
    3. MPI Library: Information Not Provided
    4. Processor: i7-4610M
    5. Number of nodes: 1
    6. Processors/Nodes: 1
    7. Cores Per Processor: 4
    8. #Nodes x #Processors per Node x #Cores Per Processor = 4 (Total CPU)
    9. Operating System: Windows 7 64-bit
  2. Code Version: LS-DYNA
  3. Code Version Number: ls971s R4.2.1
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 2183
  6. RAM per CPU: Information Not Provided
  7. RAM Bus Speed: Information Not Provided
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: SMP
  10. System Dedicated/Shared: Information Not Provided
  11. Location: Germany
  12. Submitted by: Sergio
  13. Submitter Organization: Fujitsu Siemens
  1. Computer System: FusionServer E9000, CH121 V3
    1. Vendor: Huawei
    2. CPU Interconnects: Mellanox Technologies ConnectX-4 EDR InfiniBand
    3. MPI Library: Mellanox HPC-X v1.6
    4. Processor: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
    5. Number of nodes: 8
    6. Processors/Nodes: 2
    7. Cores Per Processor: 14
    8. #Nodes x #Processors per Node x #Cores Per Processor = 224 (Total CPU; see the worked example after these listings)
    9. Operating System: CentOS Linux release 7.2.1511 (Core)
  2. Code Version: LS-DYNA
  3. Code Version Number: LS-DYNA R8.1.0
  4. Benchmark problem: neon_refined_revised
  5. Wall clock time: 51
  6. RAM per CPU: 8
  7. RAM Bus Speed: 2133
  8. Benchmark Run in Single or Double Precision: Single
  9. Benchmark Run SMP or MPP: MPP
  10. System Dedicated/Shared: Shared
  11. Location: Sunnyvale, CA
  12. Submitted by: Pengzhi Zhu
  13. Submitter Organization: HPC Advisory Council
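The Total CPU figure in field 8 of each listing above is simply the product of the node, processor, and core counts for that run. As a minimal illustration (the helper below is not part of any submission; the sample values are taken from the Huawei FusionServer E9000 entry above):

    # Illustrative only: Total CPU = #Nodes x #Processors per Node x #Cores Per Processor
    def total_cpu(nodes: int, processors_per_node: int, cores_per_processor: int) -> int:
        return nodes * processors_per_node * cores_per_processor

    # Example with the Huawei FusionServer E9000 entry (8 nodes x 2 processors x 14 cores):
    assert total_cpu(8, 2, 14) == 224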