Worldwide TecHub FAQ – Update November 2022


COMPUTE


Q1. We configured an HPE ProLiant DL380 Gen10 Plus with U.3 backplanes. Our customer has a number of U.2 SSDs kept as spares from an older HPE ProLiant DL380 Gen10 server, which they would like to reuse. Are U.2 SSDs supported? If not, what can we do to use the old SSDs in the new server?
A1. The U.3 drive backplane lists only U.3 drives as supported. U.2 drive cages list both U.2 and U.3 drives, with the exception of the newest U.3 ST drives. However, to confirm compatibility and whether a drive is properly certified for the server, please refer to the Product Bulletin. Note that if a customer chooses to use unsupported drives, they do so at their own risk; should any issue arise, HPE will not offer support.

 

Q2. We would like to configure an HPE ProLiant DL360 Gen10 Plus with 512 GB RAM, 2x Intel 6338 processors, an NS204 for the OS, 8 x 1.8 GB SSDs as data drives, a 4-port 25 G NIC, a 4-port 10 G NIC, and a 1-port 1 G NIC. However, we are getting an error. Would you be able to assist?
A2. Trying to replicate the configuration in OCA with the PCIe version of the NS204 resulted in an error; after changing it to the NS204 riser version, the error no longer appeared. The error seems to point to an incompatibility between the PCIe version of the NS204 and the two 4-port NICs (4p 10G SFP+ and 4p 10/25G SFP28).

 

Q3. I’m getting conflicting information between OCA and the QuickSpecs regarding the HPE ProLiant DL20 Gen10 Plus and the 290W power supply. The 290W power supply is described as: “The HPE 290W Non-redundant Power Supply is the standard, non-redundant AC power supply and is optimized for the DL20 Gen10 Plus rack server,” yet in OCA we can select 2 x 290W power supplies for an HPE ProLiant DL20 Gen10 Plus. Can you please clarify?
A3. The 290W power supply has different physical dimensions than the redundant power supplies; it is not physically possible to fit two 290W power supplies into the HPE ProLiant DL20 Gen10 Plus. When we tested this in OCA with a sample HPE ProLiant DL20 Gen10 Plus configuration, it initially allowed us to add two 290W power supplies. However, when we then ran the CLIC check, we received an unbuildable error message stating that only one 290W power supply can be selected. If you want us to review your OCA configuration, please provide its UCID and a valid OPP ID.

 

Q4. To save a 1U slot within the rack, the customer wants to install the HPE Standard Analog KVM (from ATEN) behind the LCD8500 KVM console, as can be done with other HPE KVM switches (G2, G3, or G4). Is this possible?
A4. From a technical perspective, installing the HPE Standard Analog KVM (ATEN) directly behind the LCD8500 KVM console is not possible. The mounting brackets of the HPE G2, G3, and G4 KVM switches are designed to fit into the rail rackmount kit of the HPE KVM console LCD8500; the ear brackets included with the HPE Standard Analog KVM (ATEN), however, must be screwed directly to the rack posts with two screws per side. Unfortunately, the threaded holes on the LCD8500 rail do not align with any of the holes on the ear brackets of the HPE Standard Analog KVM. For that reason, the LCD8500 and the HPE Standard Analog KVM switch cannot be installed together in the same 1U of rack space.

 

Q5. Our customer has some 42U 600x1200mm racks from the HPE G2 Enterprise rack series. Is it possible to use the ARCS cooling unit with them?
A5. HPE G2 Enterprise/Advanced series racks are different from the ones used in the HPE Adaptive Rack Cooling System; thus, it is not possible to pair a standard rack with the ARCS cooling unit (the ARCS cabinet has a different depth, its doors are not perforated, etc.).

 


STORAGE


Q6. Can HPE StoreEasy 1x60 have >800 GB of fast cache?
A6. Yes, the SmartCache feature (using SSDs as a fast cache for HDDs) is supported on the StoreEasy 1x60. The SmartCache LTU is included with the SE 1660/1660 Expanded/1860 and is optional on the SE 1460/1560 or for externally attached JBODs. You can use any of the three preselected SSD SKUs:

• P37013-K21 HPE 1.92TB SAS 12G Mixed Use LFF SCC Value SAS Multi Vendor SSD
• P47419-K21 HPE 960GB SATA 6G Mixed Use LFF SCC Multi Vendor SSD
• P47807-K21 HPE 480GB SATA 6G Read Intensive LFF SCC Multi Vendor SSD

Alternatively, any of the HPE ProLiant-based Smart Carrier (SC) SFF drives (SE 1860) or SCC LFF drives (SE 1660/1660 Expanded/1460/1560) are supported. Preferably, use two or more 12 Gb SAS Mixed Use SSDs in RAID 1/5/10 for cache protection, although SmartCache can also work with a single SSD without redundancy. Refer to the HPE Smart Array SR SmartCache documentation for more details.

 

Q7. Does HPE Alletra 9000 support IPv6 for management and iSCSI?
A7. Yes, IPv6 is supported. For management, there is the choice to set up IPv4 only or IPv4 and IPv6 addresses. For iSCSI ports, each port can be configured for IPv4 or IPv6. Watch this short demonstration video: HPE Alletra 9000 – Configuring Management Network (IPv6 Settings) (00:02:27).
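
As a quick sanity check once an IPv6 address has been assigned to an iSCSI port, a host-side reachability test can confirm connectivity before configuring the initiator. The sketch below is a generic illustration, not an Alletra-specific tool; the IPv6 address is a placeholder, and 3260 is the standard iSCSI TCP port.

```python
# Generic IPv6 reachability check for an iSCSI data port (illustrative only).
# The address below is a placeholder; 3260 is the standard iSCSI TCP port.
import socket

PORTAL = ("fd00:1234::10", 3260)  # replace with the array's iSCSI IPv6 address

try:
    with socket.create_connection(PORTAL, timeout=5):
        print(f"iSCSI portal {PORTAL[0]} is reachable over IPv6")
except OSError as err:
    print(f"Could not reach {PORTAL[0]}: {err}")
```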

 

Q8. Does HPE Alletra 9000 support LDAP for user authentication?
A8. Yes, HPE Alletra 9000 storage systems support the following LDAP servers:

• Microsoft Active Directory
• OpenLDAP
• Red Hat® Directory Server

See the HPE Alletra 9000: UI 1.4 User Guide – LDAP for more information.
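
As a generic illustration of what happens when the array authenticates a user against a directory, the sketch below performs a simple LDAP bind with the Python ldap3 library. It is a connectivity check you might run from a management host, not part of the Alletra configuration itself; the server URI, user DN, and password are placeholders.

```python
# Generic LDAP bind test (illustrative only; not an Alletra 9000 tool).
# The server URI, user DN, and password below are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://ad.example.local:636", get_info=ALL)
conn = Connection(
    server,
    user="CN=storage-admin,OU=Users,DC=example,DC=local",
    password="********",
)

if conn.bind():
    print("LDAP bind succeeded; directory is reachable and credentials are valid")
else:
    print("LDAP bind failed:", conn.result)
conn.unbind()
```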

 

Q9. Does HPE Alletra Storage support the Prometheus dashboard and Grafana visualizations?
A9. Yes, an exporter for Prometheus is published on the HPE Storage GitHub. For more information, see the links to the exporter and the article:

• HPE CSI Driver for Kubernetes enhancements with monitoring and alerting
• Grafana Dashboards
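
To illustrate how an exporter feeds Prometheus and Grafana, the sketch below exposes a made-up capacity metric with the Python prometheus_client library. The metric name and values are invented purely for illustration; the real HPE exporter linked above publishes its own metrics for the arrays it monitors.

```python
# Minimal Prometheus exporter sketch using the prometheus_client library.
# The metric name and values are invented for illustration; the real HPE
# Storage exporter publishes its own metrics for the arrays it monitors.
import random
import time

from prometheus_client import Gauge, start_http_server

volume_usage = Gauge(
    "example_volume_usage_bytes",   # hypothetical metric name
    "Illustrative volume usage metric",
    ["volume"],
)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        volume_usage.labels(volume="vol01").set(random.randint(0, 10**12))
        time.sleep(15)
```

A Prometheus scrape job pointed at this endpoint then becomes a data source you can chart in Grafana.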

 

Q10. Does HPE Alletra 9000 support 10/25 adapter ports for replication traffic? Which ports can be used for replication?
A10. HPE Alletra 9000 IP-based replication can use only the embedded 10GBASE-T ports. For switches that offer only SFP+ ports, a 10GBASE-T transceiver can be used.

 

Q11. Which AOC cables are supported with HPE Alletra 6000?
A11. Active Optical Cables (AOCs) are not supported on HPE Alletra 6000. Please refer to the HPE Alletra 6000 QuickSpecs for the list of supported DAC cables. If a distance longer than 3 meters is required, transceivers and optical cables should be used.

 


NETWORKING


Q12. Do the Aruba 6300M switches support any splitting of the 25/50 Gb port functionality, or are they only for QSFP+ and QSFP28 modules on different models?
A12. 25 G/50 G ports cannot be split. You can split 100 G into 4x 25 G or 40 G into 4x 10 G on models that support it; 25 G/50 G ports cannot be logically split into any usable speed.

 

Q13. Do you plan to add support for Aruba Instant On 1960 switches to any Aruba switch management software, like Central, Airwave, or IMC?
A13. Today, the only official management option for Aruba Instant On 1960 is through the Aruba Instant On application/web interface. IMC could monitor it after proper MIBs are added, but there is no official support for this platform.

 

Q14. I just installed the Iris Config tool, and we are creating a configuration. Can you check it and let us know if there are any problems? Do you have a tutorial or user guide for Iris? This is our first time using it.
A14. Yes, send us your Iris configuration file; we will check it for any inconsistencies and add comments in our answer if needed. Regarding the Iris Config tool and how to work with it, please refer to the IRIS BOM Tutorial (Arubapedia for Partners). This tutorial will help you set the tool up correctly and guide you through creating a BOM. It also includes an example of a basic design as well as tips.

 

Q15. Some of the new Aruba 6300M switches support BC clock. Will a VSF stack mixing BC clock-capable models with non-capable switches be able to support the BC clock feature?
A15. No, customers will get the lowest common denominator; in this case, if the stack contains a 6300M switch that doesn’t support BC clock, the VSF stack will only support transparent clock. All stack members must support TC or BC for the stack to operate in that mode.

 

Q16. Does our HPE FlexNetwork 5140 EI 8G 2SFP 2GT Combo EI model (PN R8J42A) support IRF through its 1 G ports? This model does not have 10 G ports.
A16. Yes, the new HPE FlexNetwork 5140 EI 8G 2SFP 2GT (PN R8J42A) supports IRF on its 1 G ports. You can confirm this in the HPE FlexNetwork 5140 EI Switch Series Installation Guide, Table 9, which lists the ports that can be used as IRF physical ports: all front-panel 10/100/1000BASE-T autosensing ports and SFP ports (the port must operate at 1 Gbps).

 

Q17. Can I set up the same frequency band on both radios of a 610 Series AP? For example, 2.4 GHz on both radios?
A17. No, this is not supported. You need to choose a combination of two of the three supported bands: 2.4 GHz, 5 GHz, and 6 GHz. In the Flexible Dual Band list box, choose one of the following options:

1. Radio 0: 5 GHz, Radio 1: 2.4 GHz—Radio 0 operates in the 5 GHz band and radio 1 operates in the 2.4 GHz band. This is the default setting.
2. Radio 0: 5 GHz, Radio 1: 6 GHz—Radio 0 operates in the 5 GHz band and radio 1 operates in the 6 GHz band.
3. Radio 0: 2.4 GHz, Radio 1: 6 GHz—Radio 0 operates in the 2.4 GHz band and radio 1 operates in the 6 GHz band.

 


HYBRID CLOUD


Q18. Can we change the replication type after deployment?
A18. No. You have to decide at deployment time whether to deploy a single-replica or a dual-replica datastore; it cannot be changed after deployment.

 

Q19. How can I collect data from a VMware® and non-VMware environment and prepare HPE SimpliVity sizing?
A19. HPE CloudPhysics can be used for VMware environments and HPE Assessment Foundry can be used for non-VMware environments. To learn more, please review the Lanamark transition for HPE SimpliVity assessments frequently asked questions.

 

Q20. Could you please clarify which versions of VMware ESXi and Windows Server operating systems are supported with the HC250?
A20. Please check these two resources:

• HPE Hyper Converged 250 System for VMware vSphere
• HPE Hyper Converged 250 System – Product Information Reference

 

Q21. Where can I find which HPE servers support the single node Azure Stack HCI cluster deployment?
A21. You can find a list of all supported server configurations on the Microsoft Azure Stack HCI Catalog.

 

Q22. Which HPE servers support the Azure Stack HCI deployment with GPU module?
A22. Currently, four servers are supported with GPUs for Azure Stack HCI deployment: HPE ProLiant DL380 Gen10, HPE ProLiant DL380 Gen10 Plus, HPE ProLiant DL385 Gen10 Plus, and HPE ProLiant DL385 Gen10 Plus v2. Please find more information on the Microsoft Azure Stack HCI Catalog pages.

 


HPE Alliances: Marvell®


Q23. How does Marvell ensure security in their HPE Host Bus Adapters (HBAs)?
A23. To complement the HPE server security strategy, Marvell introduced secure firmware update and a hardware root of trust in its latest generation of Fibre Channel adapters.
A digital key is built into the ASIC of new adapters such as the SN1610Q. Firmware updates must be authenticated against this key in the silicon; without a proper security signature, firmware downloads are not allowed. This ensures that only trusted and certified firmware can be downloaded, eliminating threats from rogue firmware.
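
The principle behind this is standard digital-signature verification: the adapter accepts an image only if its signature validates against the public key fused into the ASIC. The sketch below shows that idea in Python with the cryptography library; it is a conceptual illustration (assuming RSA keys and SHA-256), not Marvell's actual implementation or key format.

```python
# Conceptual illustration of signed-firmware verification (not Marvell's code).
# The adapter holds a public key in silicon; an update is accepted only if
# its signature verifies against that key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def firmware_is_trusted(image: bytes, signature: bytes, public_key_pem: bytes) -> bool:
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True   # signature matches: firmware may be flashed
    except InvalidSignature:
        return False  # reject the download, as the HBA does
```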

 

Q24. Marvell claims to enable an intelligent Fibre Channel SAN infrastructure with StorFusion™. What is it, and how can customers benefit from it?
A24. Marvell StorFusion is a collection of orchestration and diagnostics features and capabilities that come along with all 16 Gb and 32 Gb QLogic FC HBAs. For diagnostics, Marvell offers D-Port, Read Diagnostic Parameter (RDP), and Link Cable Beaconing (LCB) features. When using these capabilities, customers can easily identify and isolate optics and cable problems. As a result, customers can maintain predictable application performance. For simplified management across the SAN, these Marvell QLogic HBA features are seamlessly integrated into SAN management platforms from Brocade and Cisco as well as with HPE SmartSAN for 3PAR, HPE Network Orchestrator, and HPE StorageWorks Fabric Manager.

 


HPE Alliances: NVIDIA®


Q25. What are some of the considerations for sizing GPU infrastructure in High Performance Compute (HPC)? Do more GPUs always equate to linear performance gain?
A25. Sizing comes down to two factors—scaling ability and throughput. Some customers will be looking to get as much efficiency from their infrastructure as possible and will focus on throughput. However, it all comes back to their workload and their aims. For example, some customers will be running lots of small simulations as different jobs. In this scenario, you may find your optimal cost/performance/density solution is to leverage many “run of the mill” smaller GPU-enabled servers. There are also some packages that are unable to scale beyond a single physical node; for those applications, you want to pack as much GPU capacity and performance into a single node as possible.

This also becomes a balancing act, as you want to ensure that you can feed enough data into this server to keep it busy. Do not forget the network or storage performance. Our ideal is to have one NIC per GPU. This produces a system that is perfectly balanced and makes it possible to run scale-out simulations as well as single-node throughput jobs.

Other applications can scale out over multiple nodes and use multiple GPUs all for the same job. When scaling is important, technology like GPU Direct over RDMA (Remote Direct Memory Access) will enable GPUs to access the memory space of other GPUs—both within the same node and across the network.

Regarding linear performance, it boils down to your application and how much it is bounded by the famous Amdahl’s law. In fact, there are elements that can be run in parallel and elements that must be run in a serial fashion. The parallelization is where we can have the most impact, but it’s not always a linear performance gain. It all depends on how well the application code can scale over GPUs within the same node and leverage GPUs across many nodes.
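
As a rough worked example of Amdahl’s law, the snippet below computes the theoretical speedup for a job in which 95% of the work can be parallelized across GPUs; the 95% figure is purely an assumption for illustration.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = parallel fraction of the workload, n = number of GPUs.
# p = 0.95 is an assumed value purely for illustration.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for gpus in (1, 2, 4, 8, 16):
    print(f"{gpus:>2} GPUs -> {amdahl_speedup(0.95, gpus):.2f}x speedup")
```

Even with 95% of the work parallelized, 16 GPUs deliver roughly a 9x speedup rather than 16x, which is why adding GPUs rarely scales linearly.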

 

Q26. What is the difference between FP64, FP32, BFloat-16, and TF32?
A26. These formats define the precision and range with which a floating-point number can be represented. For example, FP64 uses 64 bits to represent floating-point numbers and can represent a much higher degree of precision than any of the other types listed above. FP64 is much more likely to be used in High Performance Compute (HPC) applications, such as quantum chemistry simulations or climate modeling, where the degree of accuracy has a direct impact on the results. Although not in the overall diagram below, FP64 numbers are made up of the following fields:

• Sign: 1 bit
• Exponent: 11 bits
• Mantissa (fraction): 52 bits

 

In fact, if you need to use FP64, you have to use one of the NVIDIA data science/HPC GPUs, such as the A100 and A30, which are specifically designed to handle FP64 numbers.

Artificial Intelligence (AI), however, does not require the same level of precision or range. As such, FP32 is the de facto standard for models; however, data scientists are encouraged to explore lower precisions and take advantage of the GPU tensor cores, which are specifically designed to accelerate AI applications.
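
The practical difference in precision is easy to see with NumPy: machine epsilon (the smallest representable gap around 1.0) shrinks dramatically as you move from half to single to double precision. The values printed are properties of the IEEE 754 formats themselves, not of any particular GPU.

```python
# Compare the precision of IEEE 754 half, single, and double precision.
import numpy as np

for name, dtype in (("FP16", np.float16), ("FP32", np.float32), ("FP64", np.float64)):
    info = np.finfo(dtype)
    print(f"{name}: ~{info.precision} decimal digits, machine epsilon = {info.eps}")

# Typical output:
# FP16: ~3 decimal digits, machine epsilon = 0.000977
# FP32: ~6 decimal digits, machine epsilon = 1.1920929e-07
# FP64: ~15 decimal digits, machine epsilon = 2.220446049250313e-16
```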

In an effort to enhance speed with minimal loss of accuracy, NVIDIA developed TF32, or TensorFloat32. TF32 has the same precision as FP16, which we have observed to provide adequate precision for the majority of AI models. Note that TF32 preserves the same exponent range as FP32, so it can still represent the same range of numbers. TensorFlow and PyTorch natively support TF32, accelerating AI training times where possible.
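
In PyTorch, for example, TF32 use on Ampere-class and newer GPUs is controlled by two backend flags, shown in the sketch below. This is a minimal illustration assuming a CUDA-capable GPU; the flags have existed since PyTorch 1.7, but their defaults have changed between releases, so check the PyTorch documentation for your version.

```python
# Enable TF32 math on NVIDIA Ampere (and newer) GPUs in PyTorch.
# These backend flags have existed since PyTorch 1.7; their defaults have
# changed between releases, so set them explicitly if you rely on TF32.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = True         # TF32 for cuDNN convolutions

# Assumes a CUDA-capable GPU is present.
x = torch.randn(1024, 1024, device="cuda")
y = torch.randn(1024, 1024, device="cuda")
z = x @ y   # runs on tensor cores in TF32 when the flags above are enabled
```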

Note that FP64 is sometimes referred to as double precision, FP32 as single precision, and FP16 as half precision.

 


About TecHub


Worldwide TecHub provides centralized Presales support for Platinum and Gold Partners as well as Distributors and System Integrators. We support all value businesses, providing technical expertise, architectural guidance, and high-quality design of HPE as-a-service, edge-to-cloud solutions. More than 70 countries use our services monthly across the UK&I, MESA, DACH, NWE, CERTA, Northern & Southern Europe geographies (formerly EMEA). You can find additional information about the service we provide on our HPE Worldwide TecHub page in the Partner Ready Portal. To engage Worldwide TecHub, follow these simple steps:

HPE internal users
• Click the “New Case” button in the “Opportunity details” section of your Opportunity page in SFDC
• Select “TecHub Support” and press “Continue.”
• Fill in the required form fields and “Save” them to submit your case.

Channel Partners
• Click the “Get Support” button on the right-hand side of the page after you have successfully logged into the portal.
• Select “Presales Services,” then specify which service you require (for example, Solution Design, Technical Q&A).
• Select “Submit a support case” and fill in the request form.
• In the “My Cases” section, you can view your case status and case history or add comments and attachments.

Service Portfolio

 

TecHub news and updates
Deliverables change
Please note that our standard TecHub services are as follows:
• Technical Q&A and Solution Design & Sizing services are available only in English
• Turnaround time (TAT) is two business days for all deliverables
• A valid Opportunity ID is required to process all cases, so all requestors need to register their deals before engaging with TecHub (in IQ 2.0 for Platinum/Gold Channel Partners and Distributors, or Salesforce.com for HPE employees)

As a reminder to our Channel Partners and Distributors, it is possible to update the value of an Opportunity in IQ 2.0 by manually adding a
product group to the opportunity. This allows the opportunity to have a value attached even before attaching a configuration. For more
information, watch this Demo Video 3 – Add products and services via Product Groups.

OCA plugin
WW TecHub is proud to offer a plugin for OCA, designed and developed by TecHub, to provide a faster, simpler, and tailored user
experience. This new plugin offers a wealth of new benefits including:
• Effective screen usage and appearance
• Streamlined scroll bars, headers, and footers
• Riser categorization for Gen10 servers
• 1-click export of OCA file, XML file, an Alternative Spreadsheet export, and SDD (hold shift)
• Alternative Spreadsheet that is richly formatted, has a built-in CLIC report, and allows adding discounts via formulas
• Easy-to-read Quote Summary
• Option to disable graphical transition effects, increasing productivity
You can find all the installation instructions, FAQs, and support contacts in the HPE Tech Pro forum at WW TecHub – GT Edition plugin for
OCA.

Power estimation deliverable
As of December 1, 2022, WW TecHub will limit the Power estimation deliverable to deals with a value of more than 1M or to RFPs that WW TecHub commits to fully support. In all other cases, the requestor will need to use the HPE Power Advisor tool.