VMware vSphere on DELL PowerEdge VRTX

In this article I would like to go through the minimum steps needed to get the hypervisors running. My aim is not to fully introduce the chassis or to cover every step required for a production system.

The VRTX can operate as a tower or a rack server, and this particular one came with built-in shared storage (25 × 2.5" SSD bays) and four M640 blades.

As you can see, 13 SSDs are populated: 1 spare and 3 × 4 for RAID 5 datastores. This built-in storage works as shared storage for the ESXi hosts installed on the blades.

Main tasks of this configuration:

  • Console Access (iDRAC)
  • Network
  • Shared Storage

Console

To access the iDRAC interface we need Ethernet cables plugged into the console ports. You could go through the KVM, but we chose network access instead of managing it locally in the server room.

Before we move forward, it's good to know that the chassis can get an IP through DHCP, or we can configure it manually in the Advanced section of the network settings in the Settings menu. Once the cables are plugged in, let's check the small display on the front for the IP.
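If you would rather verify from your desk than from the LCD, a quick reachability test against the address shown on the display works too. This is only a minimal sketch; the 192.168.1.120 address is a placeholder and port 443 assumes the default HTTPS web interface.

    import socket

    # Placeholder management IP taken from the front LCD; replace with yours.
    CMC_IP = "192.168.1.120"

    def is_web_ui_reachable(host, port=443, timeout=3.0):
        """Return True if a TCP connection to the management web UI succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print(CMC_IP, "web UI reachable:", is_web_ui_reachable(CMC_IP))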

As you can see below, there are different options in the menu, such as the enclosure view, where we can check the hardware tag, current power consumption, errors, and so on.

You can see we got an IP, which can be checked in the IP Summary menu.

Now we can log in to the iDRAC interface, where we see the standard Dell layout:

There are many options to set up; I don't think they differ much from other Dell products. The standard settings are available: date/time, authentication and administration, licenses, logs, log servers, maintenance options, troubleshooting, updates, and a lot more. We also get feedback from every hardware component: temperatures, fan RPMs, storage capacity, and so on. I could spend a day listing all of the available options and features.

To deploy our first hypervisor, we need access to the blades to attach the installation image, and we need the network and storage configured.

Select the first blade and click on Setup:

On the Setup screen, check the Enable LAN checkbox. Choose IPv4 or IPv6, then either fill out the form or enable DHCP by checking its box.

In case of a manual setup we need the following:

  • IP Address
  • Subnet Mask
  • Gateway
  • Preferred and Alternate DNS Servers

Once the data has been entered, click on Apply iDRAC Network Settings in the bottom right corner.

After a few minutes the iDRAC has its IP, which can be checked on the Properties tab. Once we have it, we can open a new browser tab and paste the IP, or just click the Launch iDRAC GUI button. There is a direct way to access the Remote Console too, which requires authentication to the iDRAC GUI.
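Before opening the GUI, it can be handy to confirm that the blade iDRAC really answers on its new address. The sketch below queries the Redfish API, assuming the firmware exposes the standard Dell endpoint at /redfish/v1/Systems/System.Embedded.1; the IP and credentials are placeholders.

    import requests
    import urllib3

    # iDRAC usually presents a self-signed certificate, so skip verification here.
    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    IDRAC_IP = "192.168.1.121"      # placeholder: the address applied above
    AUTH = ("root", "calvin")       # placeholder credentials

    # Standard Dell Redfish path for the blade's system resource (assumed available).
    url = "https://{}/redfish/v1/Systems/System.Embedded.1".format(IDRAC_IP)

    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    data = resp.json()

    print("Model:       ", data.get("Model"))
    print("Power state: ", data.get("PowerState"))
    print("BIOS version:", data.get("BiosVersion"))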

Whether we are in the GUI or launched the console directly, we can attach the boot media, our Dell-customized ESXi image. But before we do that, we have to create the shared storage and configure the network if necessary.

Network

We have an R1-2210 VRTX 10GbE module attached and trunk ports at the endpoints, so we don't need to configure anything on the controller; it works as a standard switch.
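Because the module simply passes the trunked traffic through, any VLAN tagging has to happen on the ESXi side, typically on the port groups. As a minimal sketch, assuming a default vSwitch with a port group named "VM Network" and a hypothetical VLAN ID 100, the standard esxcli calls look like this (run on the host once it is up):

    import subprocess

    def run(cmd):
        """Run a command on the ESXi host and return its output."""
        out = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                             universal_newlines=True, check=True)
        return out.stdout

    # Tag the port group with a VLAN ID carried on the trunk (100 is a placeholder).
    run(["esxcli", "network", "vswitch", "standard", "portgroup", "set",
         "--portgroup-name", "VM Network", "--vlan-id", "100"])

    # Verify the assignment.
    print(run(["esxcli", "network", "vswitch", "standard", "portgroup", "list"]))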

The I/O module has its own management UI as well, which needs the same kind of configuration: DHCP or manual with all of the required details. Both can be set up on the Setup tab in the I/O Module Overview menu. On the Properties page we can check the settings and open the GUI, which looks like this:

Storage

To have shared storage across the blades, we have to configure it. One of the most important things is to change the assignment mode. Click on Storage » Setup. The Assignment Mode selector is at the bottom; select Multiple Assignment and click Apply.

On the same page we should check that the virtual adapters are mapped properly to the server slots. They probably are, but it's worth checking.

The next step is creating a virtual disk, which is simple and easy. Click on Storage » Virtual Disk. We will see a message saying that no virtual disks have been created yet. Let's create one: click on Create. The options fall into three main sections:

  1. RAID type selection
  2. Physical disk selection
  3. Configure settings

Choose whatever fits your design, for example RAID 5 with 4 physical disks. Enter the name, and you can leave the rest at the defaults. Wait a couple of minutes and the virtual disk is ready. Repeat the steps for as many virtual disks as you need. Once that's complete, the virtual disks must be assigned to the blades. Click on Storage » Virtual Disk » Assign and change No Access to Full Access for every blade you want.
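As a quick sanity check on sizing, RAID 5 sacrifices one member disk's worth of capacity to parity, so a 4-disk group gives roughly three disks of usable space. A small sketch of that arithmetic, using a hypothetical 960 GB drive size:

    def raid5_usable_capacity(disk_count, disk_size_gb):
        """RAID 5 usable capacity: one disk's worth of space goes to parity."""
        if disk_count < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disk_count - 1) * disk_size_gb

    # Hypothetical example matching the 3 x 4-disk layout used here.
    per_group = raid5_usable_capacity(disk_count=4, disk_size_gb=960)
    print("Usable per RAID 5 group:", per_group, "GB")      # 2880 GB
    print("Total across 3 groups:  ", 3 * per_group, "GB")  # 8640 GB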

Boot

Now that we have the storage attached, it's time to start a blade. Click on Chassis Overview » Server Overview » SLOT-01 » Setup » First Boot Device. In the first section we already gave an IP address to the iDRAC; if that's not done yet, you can do it in the iDRAC menu. In our configuration we have a dual SD card module attached, therefore the first boot device should be Local SD Card. Select it. Once that's done, don't forget to untick the Boot Once checkbox, as we want to boot from the same device every time. Click on Apply.
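The same choice can be scripted through the standard Redfish boot-override fields, where Continuous corresponds to leaving Boot Once unticked. This is only a sketch: it assumes the blade firmware accepts the SDCard target (some firmware revisions also expect an If-Match header with the resource ETag), and the address and credentials are placeholders.

    import requests
    import urllib3

    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    IDRAC_IP = "192.168.1.121"      # placeholder blade iDRAC address
    AUTH = ("root", "calvin")       # placeholder credentials

    url = "https://{}/redfish/v1/Systems/System.Embedded.1".format(IDRAC_IP)

    # "Continuous" is the equivalent of leaving the Boot Once checkbox unticked.
    payload = {
        "Boot": {
            "BootSourceOverrideTarget": "SDCard",
            "BootSourceOverrideEnabled": "Continuous",
        }
    }

    resp = requests.patch(url, json=payload, auth=AUTH, verify=False, timeout=30)
    print(resp.status_code, resp.text)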

Now we are done with the storage and we have an IP on the iDRAC; there is nothing left but booting the device. Jump to the Power tab, select Power On Server, then click on Apply.
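If the remote racadm client is installed on a workstation, the same power action can be scripted as well. The sketch below simply wraps that CLI; the iDRAC address and credentials are placeholders.

    import subprocess

    # Placeholders: the blade iDRAC address and credentials configured earlier.
    IDRAC_IP = "192.168.1.121"
    IDRAC_USER = "root"
    IDRAC_PASSWORD = "calvin"

    def serveraction(action):
        """Run a power action ('powerup', 'powerdown', 'powerstatus') via remote racadm."""
        cmd = ["racadm", "-r", IDRAC_IP, "-u", IDRAC_USER, "-p", IDRAC_PASSWORD,
               "serveraction", action]
        out = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                             universal_newlines=True, check=True)
        return out.stdout.strip()

    if __name__ == "__main__":
        print(serveraction("powerup"))
        print(serveraction("powerstatus"))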

Open the console as described before, and check the boot process.

It takes some time, but we will see ESXi booting (if it is pre-installed). If not, an ISO image can be attached through the console: just click on Connect Virtual Media, select the required ISO, and start the installation. From this point everything is the same as a standard ESXi installation.
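On recent iDRAC9 firmware the virtual media attach can also be done over Redfish instead of the console plugin. This is only a sketch, assuming the VirtualMedia.InsertMedia action is available on your firmware level; the iDRAC address, credentials, and ISO URL are placeholders.

    import requests
    import urllib3

    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    IDRAC_IP = "192.168.1.121"                         # placeholder blade iDRAC address
    AUTH = ("root", "calvin")                          # placeholder credentials
    ISO_URL = "http://192.168.1.50/iso/ESXi-Dell.iso"  # placeholder HTTP share

    # Redfish virtual media insert action (assumed present on this firmware level).
    url = ("https://{}/redfish/v1/Managers/iDRAC.Embedded.1"
           "/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia").format(IDRAC_IP)

    resp = requests.post(url, json={"Image": ISO_URL}, auth=AUTH, verify=False, timeout=30)
    print(resp.status_code, resp.text)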

Once the installation is complete and ESXi has booted, the shared virtual storage can be attached:
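If the new volume does not show up straight away, a storage rescan from the ESXi shell usually brings it in. The sketch below wraps the standard esxcli calls; run it (or the commands directly) on the host.

    import subprocess

    def run(cmd):
        """Run a command on the ESXi host and return its output."""
        out = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                             universal_newlines=True, check=True)
        return out.stdout

    # Rescan all storage adapters so the newly assigned shared virtual disk appears.
    run(["esxcli", "storage", "core", "adapter", "rescan", "--all"])

    # List devices and existing filesystems; the shared PERC volume should be visible.
    print(run(["esxcli", "storage", "core", "device", "list"]))
    print(run(["esxcli", "storage", "filesystem", "list"]))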

I hope this was helpful for working with the VRTX.