How to install and boot a Dell PowerEdge from a PCIe NVMe drive

October 5, 2020 by Paulie 9 Comments

PCIe NVMe storage can provide an incredible speed boost to any server but booting from it is not natively supported on some older Dell PowerEdge servers.

11th and 12th generation servers like the Dell PowerEdge R710 and R720 are very popular amongst the home lab community and could benefit from a fast boot device.

This procedure should work on any Dell PowerEdge Server that can boot from a USB device.

Booting from NVMe storage is simple to do. In this post I am going to explain how it’s done and show the benchmarks from both a Dell PowerEdge R310 and T320.

Hardware you will need:

  • Two USB flash drives:
    • One to run the Clover bootloader. I used this tiny Sandisk Ultra Fit Flash Drive.
    • One for your bootable Windows ISO.
  • A PCIe NVMe adapter and an NVMe drive:
    • I used this cheap NVMe to PCIe adapter from Amazon.
    • With a Samsung 970 Evo Plus, also from Amazon.

I also tested the process with a 1.2TB Intel DC P3520 PCIe card, which also worked fine.

Software you will need:

  • A Windows Server Installation ISO
  • Rufus to create the bootable Windows Installation.
  • Boot Disk Utility

PCIe NVMe Boot Process

When this procedure is complete, the PowerEdge server will boot from the internal USB storage and run the Clover EFI Bootloader. Clover will contain the NVMe boot driver and boot the installed operating system from the NVMe storage.

If your server has internal SD card storage, you could boot from that instead.

Install the NVMe Adapter and Drive

First, install the NVMe adapter and drive into your Dell PowerEdge server. I used this cheap adapter from Amazon and a 500GB Samsung 970 Evo Plus.

Here is the unit before I installed it into the server, without the heatsink applied. It comes with both regular and low-profile PCIe brackets:

Samsung 970 Evo Plus NVMe SSD Installed into a PCIe adapter.

And here is the unit installed in the PowerEdge R310 with the heatsink and thermal pad applied:

PCI NVMe Adapter installed into an 11th Generation Dell PowerEdge Server

Create your bootable Windows Server Installation

The first step is to create your Windows Server Installation USB Stick. There are lots of guides on how to do this but I will show how I did it.

  • Download and Install Rufus.
  • Point Rufus to your Windows Server ISO.
  • Configure Rufus with the following options:
    • Partition Scheme: GPT
    • Target System: UEFI (non CSM)
      Image showing configuration of Rufus to make a Windows Server Bootable ISO
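If you want to sanity-check the stick before carrying it to the server, a quick PowerShell query will confirm it was written with the GPT partition scheme that UEFI boot requires. This is just a rough check, and it assumes the stick is the only USB disk attached to your PC:

# List USB-attached disks and confirm the partition style is GPT (required for UEFI boot)
Get-Disk | Where-Object BusType -eq 'USB' |
    Select-Object Number, FriendlyName, PartitionStyle, Size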

Install Windows in the normal way

Windows Server 2012 R2 and newer have a Microsoft NVMe driver built in, so setup will see the NVMe storage and offer it as an installation location.

When Windows setup is complete, the server will reboot. It will be unable to boot because the Dell UEFI does not have any NVMe support, but don’t worry about that!

Set up the Clover EFI USB Boot Stick

Now set up the Clover USB boot stick or SD card.

  • Download and run Boot Disk Utility.
  • Insert the USB Stick that you are going to boot from into your PC.
  • Select your USB Stick and click format:
  • Open your newly formatted drive and copy \EFI\CLOVER\drivers\off\NvmExpressDxe.efi to:
    • \EFI\CLOVER\drivers\BIOS
    • \EFI\CLOVER\drivers\UEFI

Copying NvmExpressDxe.efi to the drivers folders adds NVMe support to Clover, which will enable booting from the Windows installation that has just been completed.
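If you prefer to script the copy, it can be done from PowerShell. This is just a sketch that assumes the Clover stick mounted as E:, so adjust the drive letter to suit:

# Copy the NVMe driver from the 'off' folder into both active driver folders
# on the Clover stick (E: is an assumed drive letter)
$clover = 'E:\EFI\CLOVER'
Copy-Item "$clover\drivers\off\NvmExpressDxe.efi" -Destination "$clover\drivers\BIOS\"
Copy-Item "$clover\drivers\off\NvmExpressDxe.efi" -Destination "$clover\drivers\UEFI\"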

My \EFI\CLOVER\drivers\UEFI folder looks like this:

Insert the Clover USB Flash Drive or SD Card into your server

Next simply insert the USB flash drive or SD Card into your server and set the UEFI boot order on the server to boot from it:

Sandisk USB Flash Drive installed into internal USB Port of a Dell PowerEdge Server to support booting of an NVMe drive.

Ensure your UEFI Boot order is set correctly and pointing to your Clover USB Stick or SD Card:

Image of Dell UEFI Boot Settings set to an Internal USB Device containing clover boot loader.

When the server boots from the internal Clover USB stick, it will briefly display a boot screen:

Screenshot of the Clover Boot Manager.

The Clover defaults worked right away for me, and I didn’t have to configure anything.

You can modify the config.plist file (which is in the root of the USB Stick) to reduce the timeout if you want to speed things up a little bit:

<key>Boot</key>
<dict>
	<key>#Arguments</key>
	<string>slide=0 darkwake=0</string>
	<key>#DefaultLoader</key>
	<string>boot.efi</string>
	<key>#LegacyBiosDefaultEntry</key>
	<integer>0</integer>
	<key>#XMPDetection</key>
	<string>-1</string>
	<key>CustomLogo</key>
	<false/>
	<key>Debug</key>
	<false/>
	<key>DefaultVolume</key>
	<string>LastBootedVolume</string>
	<key>DisableCloverHotkeys</key>
	<false/>
	<key>Fast</key>
	<false/>
	<key>Legacy</key>
	<string>PBR</string>
	<key>NeverDoRecovery</key>
	<true/>
	<key>NeverHibernate</key>
	<false/>
	<key>RtcHibernateAware</key>
	<false/>
	<key>SignatureFixup</key>
	<false/>
	<key>SkipHibernateTimeout</key>
	<false/>
	<key>StrictHibernate</key>
	<false/>
	<key>Timeout</key>
	<integer>5</integer>
</dict>

Modify the integer value of the Timeout key to reduce the boot delay.
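If you still have the stick mounted on a Windows machine (assumed here as E:), you can make that change from PowerShell as well. A rough sketch, which drops the timeout to 2 seconds:

# Replace the integer that follows the Timeout key in config.plist with 2
(Get-Content 'E:\EFI\CLOVER\config.plist' -Raw) -replace '(<key>Timeout</key>\s*<integer>)\d+(</integer>)', '${1}2${2}' |
    Set-Content 'E:\EFI\CLOVER\config.plist'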

Windows should now proceed to boot normally directly from the NVMe drive.
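If you want to double-check that the operating system really is running from the NVMe drive, and that the built-in Microsoft NVMe driver is handling it, a couple of PowerShell queries will confirm it. This is a rough check and not part of the original procedure:

# Show the disk Windows booted from; BusType should read NVMe
Get-Disk | Where-Object IsBoot |
    Select-Object Number, FriendlyName, BusType, PartitionStyle

# Confirm the in-box Microsoft NVMe driver (stornvme) is loaded and running
Get-CimInstance Win32_SystemDriver -Filter "Name='stornvme'" |
    Select-Object Name, State, StartMode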

Performance Results

I was really impressed with how much faster both PowerEdge machines are when booting from the NVMe drive. For clarity, the configuration of these systems is:

Dell PowerEdge R310
Intel Xeon X3470 2.93GHz
16GB RAM
Dell PERC H700 (512MB)

Dell PowerEdge T320
Intel Xeon
32GB RAM
Dell PERC H710 (1GB)

Performance of the Samsung 970 Evo Plus NVMe drive was excellent in both machines, but drive performance is constrained in the R310 because its slot is PCIe Gen 2 x4, whereas the T320 has PCIe Gen 3.

Disabling C States in the BIOS increases performance in both machines.

Here are the CrystalDiskMark results from the R310 with C States disabled:

Image of CrystalDiskMark showing performance results from an NVMe drive installed in a Dell PowerEdge Server with C States Disabled

Here are all the results from both machines with and without C States Enabled.

Test Type      C State    Machine   Read Result (MB/s)   Write Result (MB/s)
SEQ1M Q8T1     Enabled    R310      1670.13              1636.13
SEQ1M Q8T1     Disabled   R310      1811.27              1760.90
SEQ1M Q1T1     Enabled    R310      1359.94              1346.70
SEQ1M Q1T1     Disabled   R310      1529.41              1498.86
RND4K Q32T16   Enabled    R310      1147.30              1351.01
RND4K Q32T16   Disabled   R310      1149.57              1346.97
RND4K Q1T1     Enabled    R310      35.95                85.65
RND4K Q1T1     Disabled   R310      37.96                93.31
SEQ1M Q8T1     Enabled    T320      3339.91              3124.31
SEQ1M Q8T1     Disabled   T320      3576.68              3265.89
SEQ1M Q1T1     Enabled    T320      2303.26              2510.44
SEQ1M Q1T1     Disabled   T320      2421.82              2793.11
RND4K Q32T16   Enabled    T320      1150.97              1557.42
RND4K Q32T16   Disabled   T320      1145.25              1558.19
RND4K Q1T1     Enabled    T320      33.23                101.27
RND4K Q1T1     Disabled   T320      43.98                111.05

As a crude comparison, here is the performance of a RAID 0 array in the R310 comprising 4 x 7,200 RPM SATA drives:

Image showing performance benchmark of a RAID 0 array on a Dell PowerEdge R310 with PERC H700 controller.

This R310 server also has a Samsung 860 EVO SSD in the DVD Drive bay, which is connected via a SATA 2 port on the motherboard:

Image showing performance of a Samsung SSD Installed in the optical drive bay of a Dell PowerEdge R310.

You can see the performance of the drive being constrained by the SATA2 port, but it still gives good random performance.

If you are using VMware and booting from a different storage device, such as an SD card or USB stick, you can simply access the NVMe drive in the normal way.
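For example, with ESXi installed to an SD card or USB stick, the NVMe drive shows up as an ordinary local disk that can be used for a datastore. If you have VMware PowerCLI available, a quick way to confirm the host can see it is something like the following; this is a hedged sketch and the host name is a placeholder:

# Connect to the host and list its local disks; the NVMe drive should appear here
Connect-VIServer -Server 'esxi-host.local'   # placeholder host name
Get-VMHost | Get-ScsiLun -LunType disk |
    Select-Object CanonicalName, Vendor, Model, CapacityGB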

Conclusion – is it worth adding NVMe storage to an old Dell PowerEdge?

Given the low cost of both the adapter and Samsung SSD and the huge resulting performance boost, it is certainly worth experimenting.

I can’t say if I would use this setup in production yet, but so far it seems to work fine. Samsung Magician also recognises the drive and displays its drive information.
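If you would rather not install Samsung Magician on a server, Windows can report similar health information itself. A rough equivalent using the built-in Storage cmdlets (the counters reported vary by drive) is:

# Pull wear and temperature counters for the NVMe drive, where the drive reports them
Get-PhysicalDisk | Where-Object BusType -eq 'NVMe' |
    Get-StorageReliabilityCounter |
    Select-Object DeviceId, Temperature, Wear, ReadErrorsTotal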

Filed Under: How To Tagged With: Dell PowerEdge

Reader Interactions

Comments

  1. Jay says

    March 1, 2021 at 4:02 pm

    Great article. Could I use this process to get proxmox booting from NVME on R710?

  2. Paulie says

    March 1, 2021 at 4:18 pm

    No idea because I have never used Proxmox, but I can try it out for you if you want. I don’t see why it wouldn’t work (with any operating system).

  3. Jay says

    March 1, 2021 at 4:35 pm

    I would love to see you try. I’m currently in the process of acquiring the hardware for a R710 freenas server based on proxmox.

  4. Paulie says

    March 1, 2021 at 4:35 pm

    Have you already tried?

  5. Jay says

    March 1, 2021 at 4:39 pm

    No, the R710 is still in the shipping box. I’ve used the Cloverfield method before on an older server motherboard with mixed results. I’d like to know for sure before I order a PCI-express card and SSD drive.

  6. Paulie says

    March 1, 2021 at 6:22 pm

    Tried it, worked fine:
    https://ibb.co/kqS83r1
    https://ibb.co/85rkHdR

  7. Jay says

    March 1, 2021 at 7:25 pm

    Wow, that was fast, thanks! Should any pci-e NVME adaptor work? I’m looking at cards with one NVME and one SATA m.2 connector on one card. Because the R710 pci-e slots do not support bifurcation, I can’t use Gigabyte’s quad NVME adaptor, for instance.

  8. Paulie says

    March 1, 2021 at 7:30 pm

    I think it would be ok. Once you have tried it please let me know how it went. You could put a SATA SSD in place of the Optical drive. But the SATA on the PCIe card would be faster.

  9. Jay says

    March 1, 2021 at 7:42 pm

    Proxmox isn’t as disk-intensive as its VM’s, so it could live on a SATA SSD in the optical drive bay. Then an NVME in the pci-e slot would be free for VM’s and their operations. In that case I wouldn’t need Clover. Will keep you up to date. This project is backlogged, so it will be a few weeks before I have an update for you. I appreciate your effort with this concept, and especially for sharing it on your blog.
