PCSpecialist - Hyper-V

PaxPat

Member
Processor (CPU)
Intel® Core™ i5 10-Core Processor i5-13400 (Up to 4.6GHz) 20MB Cache
Motherboard
ASUS® STRIX B760-I GAMING WIFI (Mini-ITX, LGA1700, DDR5, PCIe 5.0, Wi-Fi 6E)
Memory (RAM)
32GB PCS PRO DDR5 4800MHz (2 x 16GB)
Graphics Card
INTEGRATED GRAPHICS ACCELERATOR (GPU)
1st M.2 SSD Drive
2TB SOLIDIGM P41+ GEN 4 M.2 NVMe PCIe SSD (up to 4125MB/sR, 3325MB/sW)
Power Supply
CORSAIR 450W CV SERIES™ CV-450 POWER SUPPLY
Power Cable
1 x 1.5 Metre UK Power Cable (Kettle Lead)
Processor Cooling
STANDARD CPU COOLER LGA1700
Thermal Paste
STANDARD THERMAL PASTE FOR SUFFICIENT COOLING
Sound Card
ONBOARD 6 CHANNEL (5.1) HIGH DEF AUDIO (AS STANDARD)
Network Card
10/100/1000 GIGABIT LAN PORT
Wireless Network Card
GIGABIT LAN PORT + Wi-Fi
USB/Thunderbolt Options
MIN. 2 x USB 3.0 & 4 x USB 2.0 PORTS @ BACK PANEL + MIN. 2 FRONT PORTS

Hi People,
Recently acquired this spec as a replacement for an old HP Microserver. It's got very good performance for what I need, but it seems to have an issue running Hyper-V on Windows 11. I'm aware that 12th Gen Intel CPUs had a few reported issues with the whole P+E core situation on Windows 10 (or Server 2019), but my understanding was that Windows 11 (Server 2022) and 13th Gen Intel CPUs fared a lot better.

The symptom is that Hyper-V seems to freeze up, causing the VMs to hang when trying to either start up or shut down. At that point, stopping/starting the services fails, and even the host Windows 11 fails to reboot and needs a hard reset.


I've done a lot of research already, but nothing exactly fits the high-level symptoms I mentioned.

Was wondering if anyone has seen or experienced the same type of issue and knows of a good BIOS config that should be used with Hyper-V.

Tried changing C-States, SpeedStep and Speed Shift, and even disabling all the E cores, but no coconut.
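For what it's worth, one thing worth checking alongside the BIOS tweaks is which hypervisor scheduler the host booted with — on Windows 11 it should default to the core scheduler, which is the one designed to cope with SMT (and hybrid) topologies. A quick sketch of the checks, from an elevated PowerShell prompt (reading event ID 2 from the Hyper-V-Hypervisor operational log is Microsoft's documented way to see the scheduler type):

```powershell
# Read the scheduler type the hypervisor booted with.
# In the event message: 0x1/0x2 = classic, 0x3 = core, 0x4 = root.
Get-WinEvent -FilterHashtable @{
    ProviderName = 'Microsoft-Windows-Hyper-V-Hypervisor'
    Id           = 2
} -MaxEvents 1 | Format-List TimeCreated, Message

# Force the core scheduler if it isn't already active (reboot required).
bcdedit /set hypervisorschedulertype core
```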

Only thing left to try is reinstalling the Hyper-V role, but I'm not betting on that working since it's baked into the OS (via DISM).
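If it does come to that, the role can be removed and re-added with DISM without reinstalling Windows — `Microsoft-Hyper-V-All` is the standard feature name for the full Hyper-V stack. Elevated prompt, with a reboot between the two steps:

```powershell
# Remove the whole Hyper-V stack, reboot, then re-enable it.
DISM /Online /Disable-Feature /FeatureName:Microsoft-Hyper-V-All
DISM /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V-All /All
```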

Any suggestions would be appreciated.

Cheers
 

SpyderTracks

We love you Ukraine
It depends how you've structured the VMs. As I understand it, the E cores are essentially split P cores, so one P core is roughly two E cores, in a crude way. And the E cores have no Hyper-Threading. So essentially, if you're assigning one vCPU to a virtual server and you've run out of P cores, then you need to assign 2 E cores to make up one vCPU.

So that CPU is 6 P cores = 12 vCPUs, plus 4 E cores (2 per vCPU) = 14 vCPUs effectively, minus 4 for the OS, so you've essentially got 10 vCPUs available for VMs.
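That back-of-envelope model can be written out as a quick sketch — note this is just the crude "1 P core = 2 threads, 2 E cores = 1 vCPU" arithmetic from the post above, not how Hyper-V actually schedules (vCPUs float across all logical processors):

```python
# Crude vCPU estimate under the model described above. The reserve of
# 4 vCPUs for the host OS is this thread's rule of thumb, not a
# Hyper-V requirement.
def available_vcpus(p_cores: int, e_cores: int, reserved_for_host: int = 4) -> int:
    """P cores are hyper-threaded (2 threads each); E cores are
    paired up (2 E cores counted as 1 vCPU)."""
    total = p_cores * 2 + e_cores // 2
    return total - reserved_for_host

# i5-13400: 6 P cores + 4 E cores
print(available_vcpus(6, 4))  # 6*2 + 4//2 - 4 = 10
```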

This Reddit thread may help explain it better


With regards to the BIOS, you just need virtualisation enabled; there's nothing more to it than that. It's either working or not.
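As a quick sanity check from within Windows, the firmware virtualisation state can be read without rebooting into the BIOS (a standard WMI/CIM property — though be aware it can report False once the hypervisor itself is already running, since Windows is then itself virtualised):

```powershell
# True if VT-x is enabled in firmware and exposed to the OS.
(Get-CimInstance Win32_Processor).VirtualizationFirmwareEnabled

# Alternatively, the Hyper-V requirements summary at the bottom of:
systeminfo
```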

 

PaxPat

Member
Thanks Spyder,
That Reddit post was one of the ones I came across early on. I noticed the OP assigned all available cores from the host to the VM in the test, so that makes sense. There's another person who's running multiple VMs with a mix of 2 P or 4 E cores per VM and running into activation issues due to the OS thinking the hardware has changed, etc.

Also, VT was enabled by default anyway.

Thanks for the assist anyway ;)
 

PaxPat

Member
Found the culprit. Doubt it will help anyone but you never know.

The motherboard (above) has a built-in "Intel Wi-Fi 6E AX211" adapter which I originally planned to use for the dedicated Hyper-V switch binding. This adapter works fine for downloads but is slow for uploads (tested with 5-6 GB Windows ISO file transfers on the LAN), so I didn't use it. Instead, I had a spare USB-C adapter lying around with a gigabit NIC, which I tried as a workaround.

It appears that the USB-C adapter goes to sleep or into some lower-power state, even though I've enabled the High Performance profile in powercfg.cpl and changed the USB settings to never sleep.
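For anyone hitting the same thing: the per-adapter power management (the "Allow the computer to turn off this device to save power" checkbox in Device Manager) is separate from the power plan, and can be turned off from PowerShell. A sketch — the adapter name here is hypothetical, use whatever `Get-NetAdapter` shows for the USB NIC; the GUIDs are the standard USB selective-suspend power setting:

```powershell
# Stop Windows powering down the NIC device itself.
Get-NetAdapter                                        # find the USB NIC's name
Disable-NetAdapterPowerManagement -Name 'Ethernet 2'  # hypothetical name

# Disable USB selective suspend on the active plan (AC power).
powercfg /SETACVALUEINDEX SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
powercfg /SETACTIVE SCHEME_CURRENT
```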
 