Multiple Virtual Machines Advice Needed
Posted on 10/4/24 at 11:58 am
Open question to the board:
Looking to build a "server" or PC to support and run 6 to 8 Windows 11 Pro VM workstations.
VM use: Chrome and/or Edge (4+ tabs open), Microsoft Office 365, Adobe, and a variety of other light-use software (nothing graphics-intensive).
Here is what I'm thinking about building as the "server" to run this:
i9-12900K (16 cores / 24 threads)
128 GB DDR4-3200 RAM
4x 1 TB SATA SSDs (in a RAID, since I'll only need 3 drives' worth of capacity and want to allow for one drive to fail)
Additional 4-port gigabit network card
CPU cooler, fans, etc.
Windows 11 Pro (will use Hyper-V to build out the VMs)
6-8 Windows 11 Pro licenses, one for each VM
*Specs for each Windows 11 VM would be ~12 GB RAM and ~256 GB SSD storage; not sure on CPU allocation.
Can I comfortably run 6 to 8 VMs on this setup? (Outside folks would VPN to my firewall and then use Remote Desktop to sign into the VMs.)
I know there are a lot of other questions and info needed, but I'm trying to get a feel for the best way forward without having to buy a $10k server.
Or am I better off using a Xeon processor and going the actual server route? (Xeon W5-2465 processor, ECC RAM, etc.)
Thanks for any advice you guys can offer!
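A quick back-of-the-envelope budget for the proposed host (a hedged sketch: the host-OS reserve and per-VM vCPU figures are assumptions I've plugged in, not measurements):

```python
# Rough capacity check for the proposed host; all non-spec figures are assumptions.
HOST_RAM_GB = 128
HOST_THREADS = 24          # i9-12900K: 16 cores / 24 threads
HOST_RESERVE_GB = 8        # assumed headroom for the Hyper-V host OS

VM_COUNT = 8               # worst case from the post
VM_RAM_GB = 12             # per-VM allocation from the post
VM_VCPUS = 4               # assumed; light office workloads rarely need more

ram_needed = VM_COUNT * VM_RAM_GB + HOST_RESERVE_GB
overcommit = (VM_COUNT * VM_VCPUS) / HOST_THREADS

print(f"RAM needed: {ram_needed} GB of {HOST_RAM_GB} GB")    # 104 GB of 128 GB
print(f"vCPU overcommit ratio: {overcommit:.2f}:1")          # 1.33:1
```

On these assumptions the RAM fits with room to spare, and a 1.33:1 vCPU overcommit is mild for desktop workloads; storage and disk I/O are the more open questions.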
Posted on 10/4/24 at 7:07 pm to griddle
quote:
Can I comfortably run 6 to 8 VM on the setup?
The short answer is yes.
The CPU is overkill in my opinion. The workloads you described are not that CPU intensive.
128 GB of RAM is more than enough. I think a Windows 11 Pro VM running that workload is going to use around 8 GB.
3 TB of storage sounds about right.
I don't think Hyper-V running on Windows 11 Pro can take advantage of multiple NICs. ETA: Do you even need four NICs? Your Internet connection is probably going to be the bottleneck.
I'm in the process of moving away from VMware, so I've spent the last month evaluating different virtualization platforms. Hyper-V seemed the least feature-rich of all the platforms I tested. I went with Proxmox.
quote:
am I better off using a Xeon processor and going an actual server route? (Xeon W5-2465 Processor, ECC Ram, etc)
If they were doing financial transactions, I would. But for your purposes, I wouldn't bother. If those were 8 physical PCs none of them would be running ECC.
This post was edited on 10/4/24 at 7:27 pm
Posted on 10/4/24 at 7:16 pm to griddle
Elementary question, but is there a reason you're not provisioning desktops through Azure? You could lock them to a North American MZR so it looks like the North Korean spies you're employing as subcontractors are actually US-based.
I know you didn't mention network segmentation, but I'd be paranoid as hell having a bunch of subcontractors on my trusted network, likely using credentials they're probably sharing with other people to "cover" for them, plus RDP and the potential traffic sniffing that can be conducted, etc.
Posted on 10/4/24 at 7:41 pm to griddle
For the most part you should be okay for light use; the only thing that stands out is you didn't mention an HBA.
You are going to be hurting for disk I/O for the VMs. 6-8 Win11 machines will absolutely bring your disk access to a crawl during any events that require a lot of disk reads/writes (patching, scans, boot/shutdown, etc.) without a higher-end HBA (with cache and preferably battery backup). If you start swapping to disk you are really done.
Using Hyper-V? The above DEFINITELY applies, doubly so. I'm not sure if it's an overall thing with Windows or just a crappy disk scheduler, but Hyper-V really suffers. I'm a Unix/Linux guy, so tweaking Windows' guts is a bit out of my realm. I did try this on a LOADED R840: 12 instances of Win11 in Hyper-V was awful, and even with staggered patching it was pretty much unusable for 2 days.
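The disk-contention worry above can be put into rough numbers (a hedged sketch: the IOPS figures, write fraction, and RAID level are ballpark assumptions, not benchmarks of the actual build):

```python
# Crude aggregate-IOPS check: do N busy VMs fit on the array? All figures are assumptions.
SATA_SSD_RANDOM_IOPS = 10_000   # conservative ballpark for one consumer SATA SSD
ARRAY_DRIVES = 4
RAID5_WRITE_PENALTY = 4         # each logical write costs ~4 back-end I/Os in RAID 5

VM_COUNT = 8
VM_PATCH_IOPS = 1_500           # assumed per-VM demand during Windows Update
WRITE_FRACTION = 0.6            # patching is write-heavy (assumption)

# Reads pass through 1:1; writes are multiplied by the RAID write penalty.
demand = VM_COUNT * VM_PATCH_IOPS * (
    (1 - WRITE_FRACTION) + WRITE_FRACTION * RAID5_WRITE_PENALTY
)
supply = SATA_SSD_RANDOM_IOPS * ARRAY_DRIVES

print(f"back-end demand ~{demand:,.0f} IOPS vs ~{supply:,} available")
```

Even with SSDs, simultaneous patching on all eight VMs lands uncomfortably close to the array's limit under these assumptions, which is why staggering maintenance windows (or a caching controller) matters.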
Posted on 10/4/24 at 10:25 pm to TAMU-93
quote:
Proxmox
This. All day. Erry day.
When Broadcom (VMware) basically gave the middle finger to enterprises that aren't running thousands of sockets, I started looking.
Proxmox was the winner. By a mile. I run several lightweight Linux VMs to monitor different networks at each of my plants. Each network has its own VM with its own firewall locked down.
Protectli makes some nice appliances: passively cooled, with plenty of network connectivity.
Proxmox is solid.
Posted on 10/5/24 at 6:44 am to BoudreauxsCousin
quote:
This. All day. Erry day.
Love Proxmox. Have to use VMware at work, but I'm 100% Proxmox in my lab.
Posted on 10/5/24 at 8:38 am to bluebarracuda
Proxmox is awesome, though I've never run a Windows VM on it. I'd imagine it has functioning guest integrations for Windows, though I've never looked.
Posted on 10/5/24 at 9:22 am to LSshoe
No issues running Windows on Proxmox. I have a Dell server running Proxmox with a Windows VM on it.
To the OP, I wouldn't use Hyper-V on Windows. It will run the VMs with no issue, but you can't pass USB devices from the host to the VMs.
Posted on 10/5/24 at 9:53 am to mchias1
Yep, I've got Windows 10 LTSC, Windows 11, and Server 2019 VMs running in my Proxmox environment.
Posted on 10/5/24 at 12:50 pm to TAMU-93
First off, thanks for all the replies.
To answer a few of the questions:
1. Yesterday afternoon I did additional research and am realizing this may be more than I can tackle on my own. I'm a very low-level IT guy, and I may need to hire an actual IT pro to help facilitate my needs on the VMs.
2. The actual VMs will be used for mineral/surface title research (landmen and abstractors). As for security or full access to my network, the VMs will allow me to control who/what is accessed on my network vs. having to map a network drive on their personal PCs. Or am I flawed in this as well?
3. Not sure what an HBA is or why I didn't mention it?? Will look into it.
4. An IT buddy mentioned provisioning desktops through Azure, but again, this went a bit beyond my IT understanding; I'll need to research it more. My understanding is it would be better to build the VMs from a box than through Azure.
5. What I did conclude is I can build a "server" with a Xeon W5 16-core processor, DDR5 RAM, SSDs for the VMs, and an M.2 drive for the main PC boot, all in a regular PC workstation environment, for ~$550 per VM. This allows me to scale up to 10 VMs from one box.
6. I will do some research on Proxmox and keep doing appropriate research before pulling the trigger.
Again, thanks for the feedback and for not making me feel too stupid about doing this!
Posted on 10/6/24 at 12:01 pm to griddle
Without redundancy, you run the risk of a significant multi-user outage that can last days. And if you pay for same-day hardware support and replacement, your maintenance contract costs increase exponentially.
For this small number of users, I would recommend using desktop as a service (DaaS) rather than building your own host and trying to scale. You then drive down your hardware and support costs to near zero. You also eliminate your learning curve for the hypervisor as well.
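The build-vs-DaaS tradeoff above can be sketched with rough per-seat arithmetic (hedged: the DaaS price and amortization period are assumptions, and the ~$550/VM figure is the OP's own estimate from earlier in the thread):

```python
# Compare amortized DIY hardware cost per seat vs a hypothetical DaaS subscription.
BUILD_COST_PER_VM = 550        # OP's estimate for the Xeon W5 build
AMORTIZE_MONTHS = 36           # assumed hardware lifespan
DAAS_PER_SEAT_MONTH = 40       # assumed placeholder; real DaaS pricing varies widely

diy_monthly = BUILD_COST_PER_VM / AMORTIZE_MONTHS
print(f"DIY hardware: ~${diy_monthly:.2f}/seat/month (excl. licenses, power, admin time)")
print(f"DaaS:         ~${DAAS_PER_SEAT_MONTH}/seat/month (incl. redundancy and support)")
```

The raw hardware number favors DIY; the argument for DaaS is everything the DIY number excludes, such as licenses, the hypervisor learning curve, and outage risk.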
Posted on 10/6/24 at 6:06 pm to griddle
Make life easy and just run Hyper-V with the 8 VMs. We have a similar setup at a few locations and it works fine. No one complains: Dell server with RAID 10 and a bunch of memory.
Posted on 10/7/24 at 8:09 am to ColdDuck
Thanks for the reply. The IT world, as with most things, has a lot of ways to get things done.
I'm still leaning toward the same setup idea you suggested, but it's always nice to get feedback from others.
I'm a keep it simple kind of guy, so we will see.